Sample records for earthquake frequency uncertainties

  1. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods, and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based on expected (median value) ground-motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined in the WG02 model. The study considers the effect of relaxing certain assumptions in the WG02 model and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.
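
    The sensitivity questions posed above can be illustrated with a toy Monte Carlo calculation. The sketch below is hypothetical and not part of the report: it propagates a normally distributed slip rate (with an assumed mean of 10 mm/yr) through an invented linear loss model, showing how reducing slip-rate uncertainty narrows the resulting loss distribution. Every constant, including the loss model itself, is an illustrative assumption, not a value from the WG02 model or HAZUS-MH.

```python
import random
import statistics

def loss_spread(slip_rate_sigma_mm_yr, n=20000, seed=0):
    """Toy Monte Carlo: propagate slip-rate uncertainty through an
    invented loss model and return the spread (standard deviation)
    of the simulated annualized-loss distribution.

    All numbers below are hypothetical placeholders.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        # Hypothetical slip rate: mean 10 mm/yr, epistemic sigma as given.
        slip = max(rng.gauss(10.0, slip_rate_sigma_mm_yr), 0.1)
        rate = slip / 2000.0          # invented scenario-earthquake rate, 1/yr
        losses.append(rate * 5e9)     # invented loss per event, USD
    return statistics.pstdev(losses)
```

    Because loss is linear in slip rate here, halving the assumed slip-rate uncertainty roughly halves the spread of the annualized loss; this is the kind of sensitivity the report quantifies with the full WG02/HAZUS-MH machinery.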

  2. Examples of Communicating Uncertainty Applied to Earthquake Hazard and Risk Products

    NASA Astrophysics Data System (ADS)

    Wald, D. J.

    2013-12-01

    When is communicating scientific modeling uncertainty effective? One viewpoint is that the answer depends on whether one is communicating hazard or risk: hazards have quantifiable uncertainties (which, granted, are often ignored), yet risk uncertainties compound uncertainties inherent in the hazard with those of the risk calculations, and are thus often larger. Larger, yet more meaningful: since risk entails societal impact of some form, consumers of such information tend to have a better grasp of the potential uncertainty ranges for loss information than they do for less-tangible hazard values (like magnitude, peak acceleration, or stream flow). I present two examples that compare and contrast communicating uncertainty for earthquake hazard and risk products. The first example is the U.S. Geological Survey's (USGS) ShakeMap system, which portrays the uncertain, best estimate of the distribution and intensity of shaking over the potentially impacted region. The shaking intensity is well constrained at seismograph locations yet is uncertain elsewhere, so shaking uncertainties are quantified and presented spatially. However, with ShakeMap, it seems that users tend to believe what they see is accurate in part because (1) considering the shaking uncertainty complicates the picture, and (2) it would not necessarily alter their decision-making. In contrast, when it comes to making earthquake-response decisions based on uncertain loss estimates, actions tend to be made only after analysis of the confidence in (or source of) such estimates. Uncertain ranges of loss estimates instill tangible images for users, and when such uncertainties become large, intuitive reality-check alarms go off, for example, when the range of losses presented becomes too wide to be useful. The USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, which in near-real time alerts users to the likelihood of ranges of potential fatalities and economic impact, is aimed at

  3. Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Guin, J.

    2005-12-01

    Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainties in all aspects of modeling and analysis. It is the objective of this study to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and sites' spatial correlation on typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies on regional ground motion attenuations and regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. From the hazard side, there are uncertainties in the magnitudes, rates and mechanisms of the seismic sources and local site conditions and ground motion site amplifications. From the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. From an analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and integrating thousands of loss distribution curves with different degrees of correlation. In this paper we propose to address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground

  4. Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data

    USGS Publications Warehouse

    Bakun, W.H.; Gomez Capera, A.A.; Stucchi, M.

    2011-01-01

    Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental
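
    The percentile-based bootstrap uncertainty described above can be sketched as follows. This is a minimal illustration, not the authors' code: it bootstraps a set of hypothetical per-observation magnitude estimates and reads the median and 68% bounds directly from the sorted bootstrap distribution, as the abstract describes for magnitude.

```python
import random
import statistics

def bootstrap_magnitude(obs_mags, n_boot=2000, seed=1):
    """Median and 68% bounds of bootstrap magnitudes.

    obs_mags: hypothetical magnitude estimates, one per intensity
    observation. Each bootstrap replicate resamples the observations
    with replacement and averages them.
    """
    rng = random.Random(seed)
    n = len(obs_mags)
    boot = sorted(
        statistics.mean(rng.choice(obs_mags) for _ in range(n))
        for _ in range(n_boot)
    )
    median = boot[n_boot // 2]
    lo, hi = boot[int(0.16 * n_boot)], boot[int(0.84 * n_boot)]
    return median, (lo, hi)
```

    The 68% range encloses 68% of the bootstrap magnitudes by construction, mirroring the contour-based treatment of the bootstrap location density in the paper.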

  5. An approach to detect afterslips in giant earthquakes in the normal-mode frequency band

    NASA Astrophysics Data System (ADS)

    Tanimoto, Toshiro; Ji, Chen; Igarashi, Mitsutsugu

    2012-08-01

    An approach to detect afterslips in the source process of giant earthquakes is presented in the normal-mode frequency band (0.3-2.0 mHz). The method is designed to avoid a potential systematic bias in the determination of earthquake moment by a typical normal-mode approach. The source of this bias is uncertainty in Q (the modal attenuation parameter), which varies by up to about ±10 per cent among published studies. A choice of Q values within this range significantly affects amplitudes in synthetic seismograms if a long time-series of about 5-7 d is used for analysis. We present an alternative time-domain approach that reduces this problem by focusing on a shorter time span of about 1 d. Application of this technique to four recent giant earthquakes is presented: (1) the Tohoku, Japan, earthquake of 2011 March 11, (2) the 2010 Maule, Chile earthquake, (3) the 2004 Sumatra-Andaman earthquake and (4) the Solomon earthquake of 2007 April 1. The Global Centroid Moment Tensor (GCMT) solution for the Tohoku earthquake explains the normal-mode frequency band quite well. The analysis for the 2010 Chile earthquake indicates that the moment is about 7-10 per cent higher than the moment determined by its GCMT solution, but further analysis shows little evidence of afterslip; the deviation in moment can be explained by an increase of the dip angle from 18° in the GCMT solution to 19°. This may be a simple trade-off between moment and dip angle, but it may also be due to a deeper centroid in the normal-mode frequency band, as a deeper source could have a steeper dip angle owing to changes in the geometry of the Benioff zone. For the 2004 Sumatra-Andaman earthquake, the five-point-source solution by Tsai et al. explains most of the signals, but a sixth point source with long duration improves the fit to the normal-mode frequency band data. The 2007 Solomon earthquake shows that the high-frequency part of our analysis (above 1 mHz) is
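
    The Q-related bias the method is designed to avoid can be quantified with the standard modal decay relation A(t) = A0 exp(-pi f t / Q). The sketch below (the Q value of 300 is an illustrative assumption, not from the paper) shows why a 10 per cent uncertainty in Q matters far more over a 5-7 d analysis window than over the ~1 d window the authors adopt.

```python
import math

def amplitude_ratio(f_mhz, q, dq_frac, t_days):
    """Ratio of modal amplitudes at time t computed with Q*(1+dq_frac)
    versus Q, using the decay law A(t) = A0 * exp(-pi * f * t / Q)."""
    f = f_mhz * 1e-3              # mode frequency in Hz
    t = t_days * 86400.0          # elapsed time in seconds
    return math.exp(math.pi * f * t / q
                    - math.pi * f * t / (q * (1.0 + dq_frac)))
```

    With an assumed Q of 300 at 1 mHz, a 10 per cent change in Q alters the predicted amplitude by roughly 60 per cent after 6 days but by under 10 per cent after 1 day, so moment estimates from short windows are far less sensitive to the Q uncertainty.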

  6. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    USGS Publications Warehouse

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving open the possibility of a future large earthquake in the region.

  7. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice, however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting at least as well as a characteristic model. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
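
    The large-earthquake-rate calculation from an extrapolated Gutenberg–Richter law is a one-liner: once log10 N(>=M) = a - b*M is fit to a fault zone's catalog, the cumulative rate of events at or above a target magnitude follows directly. The a and b values below are illustrative, not from the paper.

```python
def gr_rate(a, b, m_min):
    """Annual rate of events with magnitude >= m_min under a
    Gutenberg-Richter law log10 N(>=M) = a - b*M."""
    return 10.0 ** (a - b * m_min)

def recurrence_interval(a, b, m_min):
    """Mean recurrence interval (years) for magnitude >= m_min."""
    return 1.0 / gr_rate(a, b, m_min)
```

    For illustrative values a = 4.0 and b = 1.0, the M >= 7 rate is 10^-3 per year, i.e. a mean recurrence interval of about 1000 years; uncertainty in the fitted a and b propagates directly into this rate, which is the quantifiable uncertainty the authors argue for.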

  8. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  9. Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Neuberg, Jürgen; Jolly, Arthur

    2004-11-01

    Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focuses on magma properties and rheology and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2-D finite-difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a homogeneous viscoelastic medium with topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid (SLS) for seismic frequencies above 2 Hz. Results demonstrate that attenuation modifies both the amplitudes and the dispersive characteristics of low-frequency earthquakes. Low-frequency volcanic earthquakes are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. Topography modifies the amplitudes, depending on the position of the seismographs at the surface. This study shows that attenuation and topography need to be taken into account to interpret observed low-frequency volcanic earthquakes correctly. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.

  10. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by Earth observations). Hence the method does not make use of ground-motion prediction equations (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green function, and which are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground-motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. Uncertainties in NDSHA are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for the magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Quantifying the uncertainty related to each input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  11. Frequency-Dependent Tidal Triggering of Low Frequency Earthquakes Near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Xue, L.; Burgmann, R.; Shelly, D. R.

    2017-12-01

    The effect of small periodic stress perturbations on earthquake generation is not clear; however, the rate of low-frequency earthquakes (LFEs) near Parkfield, California has been found to be strongly correlated with solid earth tides. Laboratory experiments and theoretical analyses show that the period of imposed forcing and source properties affect the sensitivity to triggering and the phase relation of the peak seismicity rate and the periodic stress, but frequency-dependent triggering has not been quantitatively explored in the field. Tidal forcing acts over a wide range of frequencies, so the sensitivity of LFEs to tidal triggering provides a good probe of the physical mechanisms affecting earthquake generation. In this study, we consider the tidal triggering of LFEs near Parkfield, California since 2001. We find that the LFE rate is correlated with tidal shear stress, normal stress rate and shear stress rate. The occurrence of LFEs can also be independently modulated by groups of tidal constituents at semi-diurnal, diurnal and fortnightly frequencies. The strength of the response of LFEs to the different tidal constituents varies between LFE families. Each LFE family has an optimal triggering frequency, which does not appear to be depth dependent or systematically related to other known properties. This suggests that the period of the applied forcing plays an important role in the triggering process, and that the interaction of the loading history and source region properties, such as friction, effective normal stress and pore fluid pressure, produces the observed frequency-dependent tidal triggering of LFEs.
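
    A standard statistic for this kind of tidal-modulation question (commonly used in tidal-triggering studies, though not necessarily the exact method of this paper) is the Schuster test: each event is assigned a tidal phase, and the probability that the observed phase clustering arises by chance under a uniform phase distribution is p = exp(-R^2/N), where R is the length of the vector sum of unit phasors.

```python
import cmath
import math

def schuster_p(phases):
    """Schuster test p-value for a list of event tidal phases (radians).

    Small p means the events cluster at a preferred tidal phase;
    p near 1 means the phases are consistent with uniform (no triggering).
    """
    n = len(phases)
    r = abs(sum(cmath.exp(1j * ph) for ph in phases))
    return math.exp(-r * r / n)
```

    Applying such a test per LFE family and per group of tidal constituents (semi-diurnal, diurnal, fortnightly) is one way to quantify the constituent-dependent sensitivity the abstract describes.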

  12. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study therefore aims to explore a possible modification of the classical G-R distribution by including a dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, when integrating over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and the differences from the classical G-R law.

  13. Slow earthquakes in microseism frequency band (0.1-2 Hz) off the Kii peninsula

    NASA Astrophysics Data System (ADS)

    Kaneko, L.; Ide, S.; Nakano, M.

    2017-12-01

    … differences in wave radiation at different frequencies. Although the location errors are not always small enough to confirm the collocation of sources, owing to uncertainty in the structure, we can confirm that seismic waves are radiated in the microseism band from slow earthquakes, which are considered a continuous, broadband, and complicated phenomenon.

  14. Tidal controls on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, S.; Yabe, S.; Tanaka, Y.

    2016-12-01

    The possibility that tidal stresses can trigger earthquakes is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of earthquakes) may be enhanced during periods of large tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before earthquakes. Very large earthquakes tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small earthquakes. Rather, we found that tidal stress controls the earthquake size-frequency statistics; that is, the fraction of large events increases (the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of earthquakes in Japan. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic earthquake forecasting.

  15. Modelling low-frequency volcanic earthquakes in a viscoelastic medium with topography

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Neuberg, J.

    2003-04-01

    Magma properties are fundamental to explain the volcanic eruption style as well as the generation and propagation of seismic waves. This study focuses on rheological magma properties and their impact on low-frequency volcanic earthquakes. We investigate the effects of anelasticity and topography on the amplitudes and spectra of synthetic low-frequency earthquakes. Using a 2-D finite-difference scheme, we model the propagation of seismic energy initiated in a fluid-filled conduit embedded in a 2-D homogeneous viscoelastic medium with topography. Topography is introduced by a mapping procedure that stretches the computational rectangular grid into a grid which follows the topography. We model intrinsic attenuation by linear viscoelastic theory and we show that volcanic media can be approximated by a standard linear solid for seismic frequencies above 2 Hz. Results demonstrate that attenuation modifies both the amplitude and the dispersive characteristics of low-frequency earthquakes. Low-frequency events are dispersive by nature; however, if attenuation is introduced, their dispersion characteristics will be altered. Topography modifies the amplitudes, depending on the position of seismographs at the surface. This study shows that attenuation and topography need to be taken into account to interpret observed low-frequency volcanic earthquakes correctly. It also suggests that the rheological properties of magmas may be constrained by the analysis of low-frequency seismograms.

  16. 3-D P- and S-wave velocity structure and low-frequency earthquake locations in the Parkfield, California region

    USGS Publications Warehouse

    Zeng, Xiangfang; Thurber, Clifford H.; Shelly, David R.; Harrington, Rebecca M.; Cochran, Elizabeth S.; Bennington, Ninfa L.; Peterson, Dana; Guo, Bin; McClement, Kara

    2016-01-01

    To refine the 3-D seismic velocity model in the greater Parkfield, California region, a new data set including regular earthquakes, shots, quarry blasts and low-frequency earthquakes (LFEs) was assembled. Hundreds of traces of each LFE family at two temporary arrays were stacked with a time-frequency domain phase-weighted stacking method to improve the signal-to-noise ratio. We extend our model resolution to lower-crustal depths with the LFE data. Our result images not only previously identified features but also low-velocity zones (LVZs) in the area around the LFEs and in the lower crust beneath the southern Rinconada Fault. The former LVZ is consistent with high fluid pressure that can account for several aspects of LFE behaviour. The latter LVZ is consistent with a high-conductivity zone seen in magnetotelluric studies. A new Vs model was developed with S picks obtained with a new autopicker. At shallow depth, the low-Vs areas underlie the strongest-shaking areas of the 2004 Parkfield earthquake. We relocate LFE families and analyse the location uncertainties with the NonLinLoc and tomoDD codes; the two methods yield similar results.

  17. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on MUSIC, CS, minimum-variance distortionless response (MVDR) Beamforming and conventional Beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use a Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space but with waveforms that completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding-window scheme. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR Beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega-earthquake.

  18. Frequency characteristics and far-field effect of gravity perturbation before earthquake

    NASA Astrophysics Data System (ADS)

    Qiang, Jian-Ke; Lu, Kai; Zhang, Qian-Jiang; Man, Kai-Feng; Li, Jun-Ying; Mao, Xian-Cheng; Lai, Jian-Qing

    2017-03-01

    We used high-pass filtering and the Fourier transform to analyze tidal gravity data prior to five earthquakes from four superconducting gravity stations around the world. A stable gravitational perturbation signal is observed within a few days before the earthquakes. The gravitational perturbation signal before the Wenchuan earthquake of May 12, 2008 has a main frequency of 0.1-0.3 Hz, and the other four have frequency bands of 0.12-0.17 Hz and 0.06-0.085 Hz. For earthquakes in continental and oceanic plate fault zones, gravity anomalies often appear on superconducting gravimeters far from the epicenter, whereas stations near the epicenter record small or no anomalies. The results suggest that these gravitational perturbation signals correlate with earthquake occurrence, making them potentially useful earthquake predictors. The far-field effect of the gravitational perturbation signals may reveal the interaction mechanisms of the Earth's tectonic plates. However, owing to the uneven distribution of gravity tide stations, the results need further confirmation.
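
    The frequency-band identification described above amounts to locating spectral peaks after filtering. Below is a minimal discrete-Fourier-transform sketch in pure Python; it is not the authors' processing chain, and real gravimeter data would first be high-pass filtered and windowed.

```python
import math

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest DFT amplitude peak in a real
    signal x sampled at fs Hz, ignoring the zero-frequency term."""
    n = len(x)
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n
```

    A test tone at 0.15 Hz sampled at 1 Hz is recovered exactly when it falls on a DFT bin; for broad bands like the reported 0.06-0.3 Hz, one would inspect the full spectrum rather than just the maximum.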

  19. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of interest. Two other intermediate-depth events have slower rupture processes, characterized by continuous energy release lasting about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, first recognized as a precursive event by Jordan. We model it with a smooth rupture process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  20. High-frequency seismic signals associated with glacial earthquakes in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2017-12-01

    Glacial earthquakes are magnitude 5 seismic events generated by iceberg calving at marine-terminating glaciers. They are characterized by teleseismically detectable signals at 35-150 seconds period that arise from the rotation and capsize of gigaton-sized icebergs (e.g., Ekström et al., 2003; Murray et al., 2015). Questions persist regarding the details of this calving process, including whether there are characteristic precursory events such as ice slumps or pervasive crevasse opening before an iceberg rotates away from the glacier. We investigate the high-frequency seismic signals produced before, during, and after glacial earthquakes. We analyze a set of 94 glacial earthquakes that occurred at three of Greenland's major glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier, from 2001 - 2013. We employ data from the GLISN network of broadband seismometers around Greenland and from short-term seismic deployments located close to the glaciers. These data are bandpass filtered to 3 - 10 Hz and trimmed to one-hour windows surrounding known glacial earthquakes. We observe elevated amplitudes of the 3 - 10 Hz signal for 500 - 1500 seconds spanning the time of each glacial earthquake. These durations are long compared to the 60 second glacial-earthquake source. In the majority of cases we observe an increase in the amplitude of the 3 - 10 Hz signal 200 - 600 seconds before the centroid time of the glacial earthquake and sustained high amplitudes for up to 800 seconds after. In some cases, high-amplitude energy in the 3 - 10 Hz band precedes elevated amplitudes in the 35 - 150 s band by 300 seconds. We explore possible causes for these high-frequency signals, and discuss implications for improving understanding of the glacial-earthquake source.
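The band-pass-and-envelope processing described above can be sketched as follows. The sampling rate, filter order, and synthetic record are illustrative assumptions, not the study's data; the 3-10 Hz band and windowing follow the abstract:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 50.0                      # assumed sampling rate, Hz
t = np.arange(0, 600, 1 / fs)  # ten-minute synthetic record

# Synthetic: a long-period (0.02 Hz) glacial-earthquake-like signal plus a
# burst of high-frequency "calving" energy at 5 Hz between 200 s and 260 s.
x = np.sin(2 * np.pi * 0.02 * t)
burst = (t > 200) & (t < 260)
x = x + 0.3 * burst * np.sin(2 * np.pi * 5.0 * t)

# Zero-phase 3-10 Hz band-pass, as in the processing described above.
sos = butter(4, [3.0, 10.0], btype="bandpass", fs=fs, output="sos")
y = sosfiltfilt(sos, x)

# Sliding-RMS envelope to find when the high-frequency amplitude is elevated.
win = int(10 * fs)
rms = np.sqrt(np.convolve(y ** 2, np.ones(win) / win, mode="same"))
peak_time = float(t[np.argmax(rms)])
print("high-frequency energy peaks near t =", round(peak_time, 1), "s")
```

The envelope isolates the 5 Hz burst while suppressing the long-period signal entirely, mirroring how elevated 3-10 Hz amplitudes can be timed relative to the long-period glacial-earthquake signal.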

  1. Constraints on the source parameters of low-frequency earthquakes on the San Andreas Fault

    USGS Publications Warehouse

    Thomas, Amanda M.; Beroza, Gregory C.; Shelly, David R.

    2016-01-01

    Low-frequency earthquakes (LFEs) are small repeating earthquakes that occur in conjunction with deep slow slip. Like typical earthquakes, LFEs are thought to represent shear slip on crustal faults, but when compared to earthquakes of the same magnitude, LFEs are depleted in high-frequency content and have lower corner frequencies, implying longer duration. Here we exploit this difference to estimate the duration of LFEs on the deep San Andreas Fault (SAF). We find that the M ~ 1 LFEs have typical durations of ~0.2 s. Using the annual slip rate of the deep SAF and the average number of LFEs per year, we estimate average LFE slip rates of ~0.24 mm/s. When combined with the LFE magnitude, this number implies a stress drop of ~10⁴ Pa, 2 to 3 orders of magnitude lower than ordinary earthquakes, and a rupture velocity of 0.7 km/s, 20% of the shear wave speed. Typical earthquakes are thought to have rupture velocities of ~80–90% of the shear wave speed. Together, the slow rupture velocity, low stress drops, and slow slip velocity explain why LFEs are depleted in high-frequency content relative to ordinary earthquakes and suggest that LFE sources represent areas capable of relatively higher slip speed in deep fault zones. Additionally, changes in rheology may not be required to explain both LFEs and slow slip; the same process that governs the slip speed during slow earthquakes may also limit the rupture velocity of LFEs.
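The quoted stress drop can be checked to order of magnitude with the standard circular-crack (Eshelby) relation Δσ ≈ (7/16) M0 / r³, using the abstract's values. The source-radius assumption r ≈ Vr·T (the distance the rupture runs in one duration) is a deliberate simplification for illustration:

```python
# Order-of-magnitude check of the LFE source parameters quoted above.
Mw = 1.0          # characteristic LFE magnitude from the abstract
T = 0.2           # duration, s
Vr = 700.0        # rupture velocity, m/s (0.7 km/s, ~20% of shear wave speed)

M0 = 10 ** (1.5 * Mw + 9.1)          # seismic moment, N*m (standard Mw relation)
r = Vr * T                            # crude source radius, m
dsigma = (7.0 / 16.0) * M0 / r ** 3   # circular-crack stress drop, Pa

print(f"M0 = {M0:.1e} N*m, r = {r:.0f} m, stress drop = {dsigma:.1e} Pa")
```

The result lands in the 10³-10⁴ Pa range, consistent with the ~10⁴ Pa figure and 2-3 orders of magnitude below the 1-10 MPa typical of ordinary earthquakes.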

  2. Monitoring of ULF (ultra-low-frequency) Geomagnetic Variations Associated with Earthquakes

    PubMed Central

    Hayakawa, Masashi; Hattori, Katsumi; Ohta, Kenji

    2007-01-01

    ULF (ultra-low-frequency) electromagnetic emission has recently been recognized as one of the most promising candidates for short-term earthquake prediction. This paper reviews previous convincing evidence of the presence of ULF emissions before a few large earthquakes. We then present our ULF monitoring network in the Tokyo area, describing our ULF magnetic sensors, and finally present a few recent results on seismogenic electromagnetic emissions for recent large earthquakes obtained with sophisticated signal processing.

  3. Reading a 400,000-year record of earthquake frequency for an intraplate fault

    NASA Astrophysics Data System (ADS)

    Williams, Randolph T.; Goodwin, Laurel B.; Sharp, Warren D.; Mozley, Peter S.

    2017-05-01

    Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ˜550 ka—the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ˜430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval with an anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer that seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.

  4. The origin of high frequency radiation in earthquakes and the geometry of faulting

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2004-12-01

    In a seminal paper of 1967, Kei Aki discovered the scaling law of earthquake spectra and showed that, among other things, the high-frequency decay was of omega-squared type. This implies that high-frequency displacement amplitudes are proportional to a characteristic length of the fault, and radiated energy scales with the cube of the fault dimension, just like seismic moment. Later, in the seventies, it was found that a simple explanation for this frequency dependence of spectra was that high frequencies were generated by stopping phases, waves emitted by changes in speed of the rupture front as it propagates along the fault, but this did not explain the scaling of high-frequency waves with fault length. Earthquake energy balance is such that, ignoring attenuation, radiated energy is the change in strain energy minus the energy spent overcoming friction. Until recently the latter was considered to be a material property that did not scale with fault size. Yet, in another classical paper, Aki and Das estimated in the late 70s that the energy release rate also scaled with earthquake size, because earthquakes were often stopped by barriers or changed rupture speed at them. This observation was independently confirmed in the late 90s by Ide and Takeo and by Olsen et al., who found that energy release rates for Kobe and Landers were on the order of 1 MJ/m², implying that Gc necessarily scales with earthquake size; if it were a material property, small earthquakes would never occur. Using both simple analytical and numerical models developed by Adda-Bedia and by Aochi and Madariaga, we examine the consequences of these observations for the scaling of high-frequency waves with fault size. We demonstrate, using classical results by Kostrov, Husseini and Freund, that high-frequency energy flow measures the energy release rate and is generated when ruptures change velocity (both direction and speed) at fault kinks or jogs. Our results explain why supershear ruptures are

  5. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation
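The simulation idea in topic (i), drawing Fourier phase differences between adjacent frequency components and inverting with the inverse DFT, can be sketched as below. The amplitude spectrum, target arrival time, and Gaussian scatter of the phase differences are illustrative stand-ins, not the calibrated conditional models developed in the research:

```python
import numpy as np

rng = np.random.default_rng(1)

# Choose a Fourier amplitude spectrum, draw phase *differences* between
# adjacent frequency components, accumulate them into phase angles, invert.
n, dt = 2048, 0.02                      # samples, sampling interval (s)
freqs = np.fft.rfftfreq(n, dt)
df = freqs[1] - freqs[0]

amp = freqs * np.exp(-freqs / 2.0)      # band-limited amplitude spectrum (assumed)

# A mean phase difference of -2*pi*df*t0 places the energy near time t0;
# the scatter spreads it out, mimicking ground-motion duration.
t0 = 10.0
dphi = -2 * np.pi * df * t0 + 0.3 * rng.standard_normal(freqs.size)
phase = np.cumsum(dphi)

x = np.fft.irfft(amp * np.exp(1j * phase), n)
t = np.arange(n) * dt
centroid = float(np.sum(t * x ** 2) / np.sum(x ** 2))
print("energy centroid of simulated motion (s):", round(centroid, 2))
```

This illustrates why phase differences, rather than raw phase angles, are the natural quantity to model: their mean controls when the energy arrives and their spread controls the duration of the simulated motion.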

  6. First uncertainty evaluation of the FoCS-2 primary frequency standard

    NASA Astrophysics Data System (ADS)

    Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.

    2018-06-01

    We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards which are working with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms bringing a series of metrological advantages and specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties in the order of . When operating in an optimal regime a relative frequency instability of is obtained. The relative standard uncertainty reported in this article, , is strongly dominated by the statistics of the frequency measurements.

  7. Repeating Deep Very Low Frequency Earthquakes: An Evidence of Transition Zone between Brittle and Ductile Zone along Plate Boundary

    NASA Astrophysics Data System (ADS)

    Ishihara, Y.; Yamamoto, Y.; Arai, R.

    2017-12-01

    Recently, slow or low frequency seismic and geodetic events have attracted attention because of the important role they play in tectonic processes. The westernmost region of the Ryukyu trench, the Yaeyama Islands, is very active in these types of events. It has semiannual slow slip events (Heki et al., 2008; Nishimura et al., 2014) and very frequent shallow very low frequency earthquakes near the trench (Ando et al., 2012; Nakamura et al., 2014). Arai et al. (2016) identified a clear reversed-polarity discontinuity along the plate boundary in an air-gun survey, suggesting the existence of a low-velocity layer containing fluid. The subducting fluid layer is considered to control slip characteristics. On the other hand, deep low frequency earthquakes and tremor of the kind observed in southwestern Honshu and Shikoku, Japan, have not been well identified here owing to the lack of a high-quality seismic network. A broadband seismic station (ISG/PS) of the Pacific21 network has been operating for the last 20 years in an area with the potential for low frequency earthquakes. We reviewed its continuous broadband records in search of low frequency earthquakes. In a pilot survey, we found three very low frequency seismic events that are dominated by components below 0.1 Hz and are not listed in the earthquake catalogue. The sources locate at about 50 km depth, in the transition area between the slow slip zone and the zone of regular earthquakes along the plate boundary. To detect small and/or hidden very low frequency earthquakes, we applied matched-filter analysis to continuous three-component waveform data, using the pre-reviewed seismograms as template signals. Twelve events with high correlation were picked up over the last 10 years. Most events have very similar waveforms, characteristic of repeating deep very low frequency earthquakes. The event history of the very low frequency earthquakes is not related to that of the slow slip events in this region. In the Yaeyama region, low frequency earthquakes, regular earthquakes and slow slip events occur separately in space and have apparent

  8. Stability and uncertainty of finite-fault slip inversions: Application to the 2004 Parkfield, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.; Ji, C.; Larson, K.M.

    2007-01-01

    The 2004 Parkfield, California, earthquake is used to investigate stability and uncertainty aspects of the finite-fault slip inversion problem with different a priori model assumptions. We utilize records from 54 strong ground motion stations and 13 continuous, 1-Hz sampled, geodetic instruments. Two inversion procedures are compared: a linear least-squares subfault-based methodology and a nonlinear global search algorithm. These two methods encompass a wide range of the different approaches that have been used to solve the finite-fault slip inversion problem. For the Parkfield earthquake and the inversion of velocity or displacement waveforms, near-surface related site response (top 100 m, frequencies above 1 Hz) is shown to not significantly affect the solution. Results are also insensitive to selection of slip rate functions with similar duration and to subfault size if proper stabilizing constraints are used. The linear and nonlinear formulations yield consistent results when the same limitations in model parameters are in place and the same inversion norm is used. However, the solution is sensitive to the choice of inversion norm, the bounds on model parameters, such as rake and rupture velocity, and the size of the model fault plane. The geodetic data set for Parkfield gives a slip distribution different from that of the strong-motion data, which may be due to the spatial limitation of the geodetic stations and the bandlimited nature of the strong-motion data. Cross validation and the bootstrap method are used to set limits on the upper bound for rupture velocity and to derive mean slip models and standard deviations in model parameters. This analysis shows that slip on the northwestern half of the Parkfield rupture plane from the inversion of strong-motion data is model dependent and has a greater uncertainty than slip near the hypocenter.

  9. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    USGS Publications Warehouse

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip.

  10. Future of Earthquake Early Warning: Quantifying Uncertainty and Making Fast Automated Decisions for Applications

    NASA Astrophysics Data System (ADS)

    Wu, Stephen

    can capture the uncertainties in EEW information and the decision process is used. This approach is called the Performance-Based Earthquake Early Warning, which is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.

  11. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks the symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area vs. slip scaling exponent is derived. The relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and for induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: applied stress increase and fault strength reduction, respectively.
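The sampling-with-constraints model rests on von Neumann's acceptance-rejection method applied to the Gutenberg-Richter law. A minimal sketch, with an illustrative b-value and magnitude range, and Aki's maximum-likelihood estimator to recover b from the sample:

```python
import numpy as np

rng = np.random.default_rng(2)

# Acceptance-rejection sampling of magnitudes from the Gutenberg-Richter
# density p(M) ~ 10^(-b*M) on [Mmin, Mmax]. b and the range are illustrative.
b, Mmin, Mmax = 1.0, 2.0, 7.0

def gr_sample(size):
    mags = []
    while len(mags) < size:
        m = rng.uniform(Mmin, Mmax)                    # uniform proposal
        if rng.uniform() < 10 ** (-b * (m - Mmin)):    # accept small M more often
            mags.append(m)
    return np.array(mags)

mags = gr_sample(5000)

# Aki's (1965) maximum-likelihood b-value estimate, valid when Mmax - Mmin >> 1/b.
b_est = float(np.log10(np.e) / (mags.mean() - Mmin))
print("input b = 1.0, recovered b =", round(b_est, 2))
```

The rejection step is exactly the symmetry-breaking mechanism described above: small and large magnitudes are proposed equally often, but the 10^(-b·M) acceptance probability suppresses large events.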

  12. Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Miyake, H.

    2012-12-01

    The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. For understanding the rupture dynamics of this earthquake, it is extremely important to perform wave-based source inversions over various frequency bands. The frequency-dependent characteristics above were derived from teleseismic analyses; this study attempts to infer frequency-dependent rupture processes from strong-motion waveforms of K-NET and KiK-net stations. The observations suggest three or more S-wave phases, and ground velocities at several near-source stations showed different arrivals of their long- and short-period components. We performed complex source spectral inversions with the frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and stochastic summation of waveforms using empirical Green's functions. Owing to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km in length by 180 km in width with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter, which generated sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of rupture-front constraints on the inverted solutions, we will discuss distributions of slip-velocity intensity and the progression of wave generation with increasing frequency.

  13. Uncertainties for seismic moment tensors and applications to nuclear explosions, volcanic events, and earthquakes

    NASA Astrophysics Data System (ADS)

    Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.

    2017-12-01

    When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).

  14. Very low frequency earthquakes in Tohoku-Oki recorded by short-period ocean bottom seismographs

    NASA Astrophysics Data System (ADS)

    Takahashi, H.; Hino, R.; Ohta, Y.; Uchida, N.; Suzuki, S.; Shinohara, M.; Nakatani, Y.; Matsuzawa, T.

    2017-12-01

    Various kinds of slow earthquakes have been found along many plate boundary zones in the world (Obara and Kato, 2016). In the Tohoku subduction zone, where slow-event activity had been considered insignificant, slow slip events accompanied by low frequency tremor were identified prior to the 2011 Tohoku-Oki earthquake from seafloor geodetic and seismographic observations. Recently, very low frequency earthquakes (VLFEs) have been discovered by inspecting onshore broad-band seismograms. Although the activity of the detected VLFEs is low and they occurred in a limited area, VLFEs tend to occur successively within a short time period. In this study, we try to characterize the VLFEs along the Japan Trench based on seismograms obtained by instruments deployed near the estimated epicenters. Temporary seismic observations using Ocean Bottom Seismometers (OBSs) have been carried out several times since the 2011 Tohoku-Oki earthquake, and several VLFE episodes were observed during the OBS deployments. Amplitudes of the horizontal-component OBS seismograms grow shortly after the estimated origin times of the VLFEs identified from the onshore seismograms, even though the sensors are 4.5 Hz geophones. Although it is difficult to recognize clear P- or S-wave onsets, the correspondence between the order of arrival of discernible wave packets and their amplitudes suggests that these wave packets are seismic signals radiated from the VLFE sources. The OBSs detect regular local earthquakes of magnitudes similar to the VLFEs. Signal powers of the candidate VLFE seismograms are comparable to those of the regular earthquakes in the frequency range < 1 Hz, while a significant deficiency of higher-frequency components is observed.

  15. Very low frequency earthquakes along the Ryukyu subduction zone

    NASA Astrophysics Data System (ADS)

    Ando, Masataka; Tu, Yoko; Kumagai, Hiroyuki; Yamanaka, Yoshiko; Lin, Cheng-Horng

    2012-02-01

    A total of 1314 very low frequency earthquakes (VLFEs) were identified along the Ryukyu trench from seismograms recorded at broadband networks in Japan (F-net) and Taiwan (BATS) in 2007. The spectra of typical VLFEs have peak frequencies between 0.02 and 0.1 Hz. Among these, waveforms from 120 VLFEs were inverted to obtain their centroid moment tensor (CMT) solutions and locations, using a grid search to minimize the residual between observed and synthetic waveforms within an area of 11° × 14° in latitude and longitude and at depths of 0 to 60 km. Most of the VLFEs occur on shallow thrust faults distributed along the Ryukyu trench, similar to those found in Honshu and Hokkaido, Japan. The locations and mechanisms of VLFEs may be indicative of coupled regions within the accretionary prism or at the plate interface; this study highlights the need for further investigation of the Ryukyu trench to identify coupled regions within it.

  16. Modelling the time-dependent frequency content of low-frequency volcanic earthquakes

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Neuberg, Jürgen; Sturton, Susan

    2003-11-01

    Low-frequency volcanic earthquakes and tremor have been observed on seismic networks at a number of volcanoes, including Soufrière Hills volcano on Montserrat. Single events have well known characteristics, including a long duration (several seconds) and harmonic spectral peaks (0.2-5 Hz). They are commonly observed in swarms, and can be highly repetitive both in waveforms and amplitude spectra. As the time delay between them decreases, they merge into tremor, often preceding critical volcanic events like dome collapses or explosions. Observed amplitude spectrograms of long-period volcanic earthquake swarms may display gliding lines which reflect a time dependence in the frequency content. Using a magma-filled dyke embedded in a solid homogeneous half-space as a simplified volcanic structure, we employ a 2D finite-difference method to compute the propagation of seismic waves in the conduit and its vicinity. We successfully replicate the seismic wave field of a single low-frequency event, as well as the occurrence of events in swarms, their highly repetitive characteristics, and the time dependence of their spectral content. We use our model to demonstrate that there are two modes of conduit resonance, leading to two types of interface waves which are recorded at the free surface as surface waves. We also demonstrate that reflections from the top and the bottom of a conduit act as secondary sources that are recorded at the surface as repetitive low-frequency events with similar waveforms. We further expand our modelling to account for gradients in physical properties across the magma-solid interface. We also expand it to account for time dependence of magma properties, which we implement by changing physical properties within the conduit during numerical computation of wave propagation. We use our expanded model to investigate the amplitude and time scales required for modelling gliding lines, and show that changes in magma properties, particularly changes in the

  17. Evaluating sources of uncertainties in finite-fault source models: lessons from the 2009 Mw6.1 L'Aquila earthquake, Italy

    NASA Astrophysics Data System (ADS)

    Ragon, T.; Sladen, A.; Bletery, Q.; Simons, M.; Magnoni, F.; Avallone, A.; Cavalié, O.; Vergnolle, M.

    2016-12-01

    Despite the diversity of available data for the Mw 6.1 2009 earthquake in L'Aquila, Italy, published finite-fault slip models are surprisingly different. For instance, the amplitude of the maximum coseismic slip patch varies from 80 cm to 225 cm, and its depth oscillates between 5 and 15 km. Discrepancies between proposed source parameters are believed to result from three sources: observational uncertainties, epistemic uncertainties, and the inherent non-uniqueness of inverse problems. We explore the whole solution space of fault-slip models compatible with the data within the range of both observational and epistemic uncertainties by performing a fully Bayesian analysis. In this initial stage, we restrict our analysis to the static problem. In terms of observational uncertainty, we must take into account the difference in time span associated with the different data types: InSAR images provide excellent spatial coverage but usually correspond to a period of a few days to weeks after the mainshock and can thus be potentially biased by significant afterslip. Continuous GPS stations do not have the same shortcoming, but in contrast do not have the desired spatial coverage near the fault. In the case of the L'Aquila earthquake, InSAR images include a minimum of 6 days of afterslip. Here, we explicitly account for these different time windows in the inversion by jointly inverting for coseismic and post-seismic fault slip. Regarding epistemic or modeling uncertainties, we focus on the impact of uncertain fault geometry and elastic structure. Modeling errors, which result from inaccurate model predictions and are generally neglected, are estimated for both the earth model and the fault geometry as non-diagonal covariance matrices. The L'Aquila earthquake is particularly suited to investigation of these effects given the availability of a detailed aftershock catalog and 3D velocity models. 
This work aims at improving our knowledge of the L'Aquila earthquake as well as at providing a

  18. Insights in Low Frequency Earthquake Source Processes from Observations of Their Size-Duration Scaling

    NASA Astrophysics Data System (ADS)

    Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.

    2017-12-01

Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE are different from those for regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency. The former characterizes the earthquake's size while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M ≈ 10^13 N·m (Mw ≈ 2.6) and fc ≈ 2 Hz. The moment-corner frequency distribution, compared to values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ∝ M^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ∝ M^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ∝ M^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities.
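The two scaling laws contrasted in this abstract can be made concrete with a short sketch. The anchor values (M ≈ 10^13 N·m, fc ≈ 2 Hz) come from the abstract itself; the function is illustrative, not the authors' code:

```python
import math

def corner_frequency(moment, exponent, m_ref=1e13, fc_ref=2.0):
    """Corner frequency under a power law fc ∝ M^exponent, anchored at
    the characteristic LFE values reported above (10^13 N·m, 2 Hz)."""
    return fc_ref * (moment / m_ref) ** exponent

# Over four decades of moment, the LFE scaling (exponent -1/10) changes
# fc by a factor of ~2.5, while the regular-earthquake scaling
# (exponent -1/3) changes it by a factor of ~21.
ratio_lfe = corner_frequency(1e11, -1/10) / corner_frequency(1e15, -1/10)
ratio_regular = corner_frequency(1e11, -1/3) / corner_frequency(1e15, -1/3)
```

The near-flat fc(M) trend under the -1/10 exponent is what motivates the abstract's closing interpretation of nearly constant source size.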

  19. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2logTop for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
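The Mw ∝ 2 logTop proportionality implies that the magnitude estimate shifts by a fixed amount whenever Top doubles, regardless of the calibration offset. A minimal sketch, where the offset c is a hypothetical constant and not a value from the paper:

```python
import math

def mw_from_top(top_seconds, c=5.0):
    # Illustrative estimator Mw = 2*log10(Top) + c; the offset c is a
    # hypothetical calibration constant, not taken from the paper.
    return 2.0 * math.log10(top_seconds) + c

# Doubling Top raises the estimate by 2*log10(2) ≈ 0.60 magnitude units,
# independent of c.
delta_mw = mw_from_top(20.0) - mw_from_top(10.0)
```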

  20. Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle

    NASA Astrophysics Data System (ADS)

    Oppenheim, Jacob N.; Magnasco, Marcelo O.

    2013-01-01

    The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
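The 1/(4π) bound quoted above is attained exactly by a Gaussian pulse, which is why it serves as the benchmark against which the subjects' performance is judged. A small sketch of that equality using the analytic standard deviations (not the study's psychoacoustic procedure):

```python
import math

def gaussian_tf_product(sigma):
    """Time-frequency product for a Gaussian s(t) = exp(-t^2/(2 sigma^2)).
    The energy densities |s(t)|^2 and |S(f)|^2 are Gaussian with standard
    deviations sigma/sqrt(2) and 1/(2*sqrt(2)*pi*sigma), so their product
    equals 1/(4*pi) for every sigma."""
    sigma_t = sigma / math.sqrt(2.0)
    sigma_f = 1.0 / (2.0 * math.sqrt(2.0) * math.pi * sigma)
    return sigma_t * sigma_f
```

Any non-Gaussian signal has a strictly larger product, so a subject beating 1/(4π) cannot be explained by a linear filter acting on the stimulus.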

  1. Efficient Location Uncertainty Treatment for Probabilistic Modelling of Portfolio Loss from Earthquake Events

    NASA Astrophysics Data System (ADS)

    Scheingraber, Christoph; Käser, Martin; Allmann, Alexander

    2017-04-01

Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. However, epistemic location uncertainty has so far received comparatively little of that attention. We propose a new framework for efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios is systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.

  2. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more small earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
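The 18-fold figure follows directly from Gutenberg-Richter statistics: the probability that an M2 initiation grows to at least M5.5 scales as 10^(-b(5.5-2)), so the ratio between depth ranges depends only on the b-value difference. A sketch with illustrative b values (the abstract does not state the exact values used):

```python
def growth_probability_ratio(b_deep, b_shallow, m_small=2.0, m_large=5.5):
    # Under Gutenberg-Richter statistics, the chance that a small rupture
    # initiation grows to m_large or larger scales as 10**(-b * dm), so
    # only the b-value difference matters for the ratio between depths.
    dm = m_large - m_small
    return 10.0 ** ((b_shallow - b_deep) * dm)

# A b-value difference of ~0.36 between shallow and deep seismicity
# (illustrative numbers, not from the paper) reproduces a ratio near 18.
ratio = growth_probability_ratio(b_deep=0.64, b_shallow=1.0)
```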

  3. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing at an exponential rate with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known dependence of the b-value on stress. It suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
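The b-value changes reported here are typically measured with Aki's maximum-likelihood estimator; a minimal sketch of that standard estimator and of why a lower b-value means relatively more large events (not the authors' specific processing):

```python
import math

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or above
    the completeness magnitude mc (continuous-magnitude form)."""
    above = [m for m in mags if m >= mc]
    mean_mag = sum(above) / len(above)
    return 1.0 / (math.log(10.0) * (mean_mag - mc))

def fraction_at_least(m, mc, b):
    # Gutenberg-Richter fraction of events with magnitude >= m: a lower
    # b-value yields a larger fraction of big events.
    return 10.0 ** (-b * (m - mc))
```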

  4. Slow Earthquakes in the Microseism Frequency Band (0.1-1.0 Hz) off Kii Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Kaneko, Lisa; Ide, Satoshi; Nakano, Masaru

    2018-03-01

    It is difficult to detect the signal of slow deformation in the 0.1-1.0 Hz frequency band between tectonic tremors and very low frequency events, where microseism noise is dominant. Here we provide the first evidence of slow earthquakes in this microseism band, observed by the DONET1 ocean bottom seismometer network, after an Mw 5.8 earthquake off Kii Peninsula, Japan, on 1 April 2016. The signals in the microseism band were accompanied by signals from active tremors, very low frequency events, and slow slip events that radiated from the shallow plate interface. We report the detection and locations of events across five frequency bands, including the microseism band. The locations and timing of the events estimated in the different frequency bands are similar, suggesting that these signals radiated from a common source. The observed variations in detectability for each band highlight the complexity of the slow earthquake process.

  5. Low-frequency earthquakes in Shikoku, Japan, and their relationship to episodic tremor and slip.

    PubMed

    Shelly, David R; Beroza, Gregory C; Ide, Satoshi; Nakamula, Sho

    2006-07-13

    Non-volcanic seismic tremor was discovered in the Nankai trough subduction zone in southwest Japan and subsequently identified in the Cascadia subduction zone. In both locations, tremor is observed to coincide temporally with large, slow slip events on the plate interface downdip of the seismogenic zone. The relationship between tremor and aseismic slip remains uncertain, however, largely owing to difficulty in constraining the source depth of tremor. In southwest Japan, a high quality borehole seismic network allows identification of coherent S-wave (and sometimes P-wave) arrivals within the tremor, whose sources are classified as low-frequency earthquakes. As low-frequency earthquakes comprise at least a portion of tremor, understanding their mechanism is critical to understanding tremor as a whole. Here, we provide strong evidence that these earthquakes occur on the plate interface, coincident with the inferred zone of slow slip. The locations and characteristics of these events suggest that they are generated by shear slip during otherwise aseismic transients, rather than by fluid flow. High pore-fluid pressure in the immediate vicinity, as implied by our estimates of seismic P- and S-wave speeds, may act to promote this transient mode of failure. Low-frequency earthquakes could potentially contribute to seismic hazard forecasting by providing a new means to monitor slow slip at depth.

  6. Imaging the Subduction Plate Interface Using Low-Frequency Earthquakes

    NASA Astrophysics Data System (ADS)

    Plourde, A. P.; Bostock, M. G.

    2015-12-01

Low-frequency earthquakes (LFEs) in subduction zones are commonly thought to represent slip on the plate interface. They have also been observed to lie near or within a zone of low shear-wave velocity, which is modelled as fluid-rich upper oceanic crust. Due to relatively large depth uncertainties in absolute hypocenters of most LFE families, their location relative to an independently imaged subducting plate and, consequently, the nature of the plate boundary at depths of 30-45 km have not been precisely determined. For a selection of LFE families in northern Washington, we measure variations in arrival time of individual LFE detections using multi-channel cross-correlation incorporating both arrivals at the same station and different events (cross-detection data), and the same event but different stations (cross-station data). Employing HypoDD, these times are used to generate relative locations for individual LFE detections. After creating templates from spatial subgroups of detections, network cross-correlation techniques will be used to search for new detections in neighbouring areas, thereby expanding the local catalogue and enabling further subdivision. By combining the source ``arrays'' and the receiver arrays from the Array of Arrays experiment we plan to interrogate plate boundary structure using migration of scattered waves from the subduction complex as previously documented beneath southern Vancouver Island.

  7. What to Expect from the Virtual Seismologist: Delay Times and Uncertainties of Initial Earthquake Alerts in California

    NASA Astrophysics Data System (ADS)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Racine, R.; Meier, M.; Cauzzi, C.

    2013-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland, western Greece and Istanbul. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, Romania, and Iceland are planned or underway. The possible use cases for an EEW system will be determined by the speed and reliability of earthquake source parameter estimates. A thorough understanding of both is therefore essential to evaluate the usefulness of VS. For California, we present state-wide theoretical alert times for hypothetical earthquakes by analyzing time delays introduced by the different components in the VS EEW system. Taking advantage of the fully probabilistic formulation of the VS algorithm we further present an improved way to describe the uncertainties of every magnitude estimate by evaluating the width and shape of the probability density function that describes the relationship between waveform envelope amplitudes and magnitude. We evaluate these new uncertainty values for past seismicity in California through off-line playbacks and compare them to the previously defined static definitions of uncertainty based on real-time detections. Our results indicate where VS alerts are most useful in California and also suggest where most effective improvements to the VS EEW system

  8. Characteristics of dilatational infrasonic pulses accompanying low-frequency earthquakes at Miyakejima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yoshiaki; Yamasato, Hitoshi; Shimbori, Toshiki; Sakai, Takayuki

    2014-12-01

    Since the caldera-forming eruption of Miyakejima Volcano in 2000, low-frequency (LF) earthquakes have occurred frequently beneath the caldera. Some of these LF earthquakes are accompanied by emergent infrasonic pulses that start with dilatational phases and may be accompanied by the eruption of small amounts of ash. The estimated source locations of both the LF earthquakes and the infrasonic signals are within the vent at shallow depth. Moreover, the maximum seismic amplitude roughly correlates with the maximum amplitude of the infrasonic pulses. From these observations, we hypothesized that the infrasonic waves were excited by partial subsidence within the vent associated with the LF earthquakes. To verify our hypothesis, we used the infrasonic data to estimate the volumetric change due to the partial subsidence associated with each LF earthquake. The results showed that partial subsidence in the vent can well explain the generation of infrasonic waves.

  9. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to estimate the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimate for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
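The natural-log standard deviations quoted above map onto multiplicative factors on ground motion, which is often the more intuitive reading; a one-line conversion (illustrative, not from the study):

```python
import math

def ln_sigma_to_factor(sigma_ln):
    # A standard deviation of sigma_ln natural-log units corresponds to a
    # one-sigma multiplicative factor of exp(sigma_ln) on ground motion.
    return math.exp(sigma_ln)

# The quoted range of 0.5-0.7 ln units for response spectra corresponds
# to one-sigma factors of roughly 1.65-2.0 on amplitude.
low_factor = ln_sigma_to_factor(0.5)
high_factor = ln_sigma_to_factor(0.7)
```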

  10. A refined Frequency Domain Decomposition tool for structural modal monitoring in earthquake engineering

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2017-07-01

Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing current modal properties of heavily damped buildings (a challenging identification setting) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.

  11. Frequency-area distribution of earthquake-induced landslides

    NASA Astrophysics Data System (ADS)

    Tanyas, H.; Allstadt, K.; Westen, C. J. V.

    2016-12-01

Discovering the physical explanations behind the power-law distribution of landslides can provide valuable information to quantify triggered landslide events and, as a consequence, to understand the relation between landslide causes and impacts in terms of the environmental settings of the affected area. In previous studies, the probability of landslide size was utilized for this quantification and the derived parameter was termed the landslide magnitude (mL). The frequency-area distributions (FADs) of several landslide inventories were modelled and theoretical curves were established to identify the mL for any landslide inventory. In the observed landslide inventories, a divergence from the power-law distribution was recognized for the small landslides, referred to as the rollover, and this feature was taken into account in the established model. However, these analyses are based on a relatively limited number of inventories, each with a different triggering mechanism. The existing definition of mL includes some subjectivity, since it is based on a visual comparison between the theoretical curves and the FAD of the medium and large landslides. Additionally, the existing definition of mL introduces uncertainty due to the ambiguity in both the physical explanation of the rollover and its functional form. Here we focus on earthquake-induced landslides (EQIL) and aim to provide a rigorous method to estimate the mL and total landslide area of EQIL. We have gathered 36 EQIL inventories from around the globe. Using these inventories, we have evaluated existing explanations of the rollover and proposed an alternative explanation given the new data. Next, we propose a method to define the EQIL FAD curves and mL, and to estimate the total landslide area. We utilize the total landslide areas obtained from the inventories to compare them with our estimations and to validate our methodology. The results show that our method estimates landslide magnitudes more accurately than previous methods.

  12. Frequency-dependent moment release of very low frequency earthquakes in the Cascadia subduction zone

    NASA Astrophysics Data System (ADS)

    Takeo, A.; Houston, H.

    2014-12-01

Episodic tremor and slip (ETS) has been observed in the Cascadia subduction zone at two different time scales: tremor at a high-frequency range of 2-8 Hz and slow slip events at a geodetic timescale of days to months. The intermediate time scale is needed to understand the source spectrum of slow earthquakes. Ghosh et al. (2014, IRIS abs) recently reported the presence of very low frequency earthquakes (VLFEs) in Cascadia. In southwest Japan, VLFEs are usually observed at a period range around 20-50 s, and coincide with tremors (e.g., Ito et al. 2007). In this study, we analyzed VLFEs in and around the Olympic Peninsula to confirm their presence and estimate their moment release. We first detected VLFE events by using broadband seismograms with a band-pass filter of 20-50 s. The preliminary result shows that there are at least 16 VLFE events with moment magnitudes of 3.2-3.7 during the M6.8 2010 ETS. The focal mechanisms are consistent with thrust earthquakes at the subducting plate interface. To detect signals of VLFEs below noise level, we further stacked long-period waveforms at the peak timings of tremor amplitudes for tremors within a 10-15 km radius by using tremor catalogs in 2006-2010, and estimated the focal mechanisms for each tremor source region as done in southwest Japan (Takeo et al. 2010 GRL). As a result, VLFEs could be detected for almost the entire tremor source region at a period range of 20-50 s with average moment magnitudes in each 5-min tremor window of 2.4-2.8. Although the region is limited, we could also detect VLFEs at a period range of 50-100 s with average moment magnitudes of 3.0-3.2. The moment release at 50-100 s is 4-8 times larger than that at 20-50 s, roughly consistent with an omega-squared spectral model. Further study including tremor, slow slip events and characteristic activities, such as rapid tremor reversal and tremor streaks, will reveal the source spectrum of slow earthquakes over a broader time scale from 0.1 s to days.

  13. Low frequency (<1 Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

We present the analysis of simulations at low frequency (<1 Hz) of historical and hypothetical earthquakes in Central Mexico, by using a 3D crustal velocity model and an idealized geotechnical structure of the Valley of Mexico. Mexico's destructive earthquake history bolsters the need for a better understanding regarding the seismic hazard and risk of the region. The Mw=8.0 1985 Michoacan earthquake is among the largest natural disasters that Mexico has faced in the last decades; more than 5000 people died and thousands of structures were damaged (Reinoso and Ordaz, 1999). Thus, estimates on the effects of similar or larger magnitude earthquakes on today's population and infrastructure are important. Moreover, Singh and Mortera (1991) suggest that earthquakes of magnitude 8.1 to 8.4 could take place in the so-called Guerrero Gap, an area adjacent to the region responsible for the 1985 earthquake. In order to improve previous estimations of the ground motion (e.g. Furumura and Singh, 2002) and lay the groundwork for a numerical simulation of a hypothetical Guerrero Gap scenario, we recast the 1985 Michoacan earthquake. We used the inversion by Mendoza and Hartzell (1989) and a 3D velocity model built on the basis of recent investigations in the area, which include a velocity structure of the Valley of Mexico constrained by geotechnical and reflection experiments, and noise tomography, receiver functions, and gravity-based regional models. Our synthetic seismograms were computed using the octree-based finite element tool-chain Hercules (Tu et al., 2006), and are valid up to a frequency of 1 Hz, considering realistic velocities in the Valley of Mexico (>60 m/s in the very shallow subsurface). We evaluated the model's ability to reproduce the available records using the goodness-of-fit analysis proposed by Mayhew and Olsen (2010). Once the reliability of the model was established, we estimated the effects of a large magnitude earthquake in Central Mexico. We built a

  14. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that while for large LFEs the b value is 6, for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
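The patch-diameter estimate above combines summed LFE moment with geodetic slip through the moment definition M0 = μ·A·d. A sketch of that inversion, where the shear modulus μ = 30 GPa is an assumed value rather than one stated in the abstract:

```python
import math

def patch_diameter(total_moment, total_slip, mu=3.0e10):
    """Diameter of a circular patch whose summed seismic moment matches
    geodetically observed slip, via M0 = mu * A * d  =>  A = M0/(mu*d).
    mu = 30 GPa is an assumed crustal shear modulus."""
    area = total_moment / (mu * total_slip)
    return 2.0 * math.sqrt(area / math.pi)

# Invert a synthetic case: a 300 m diameter patch slipping 2 cm.
synthetic_moment = 3.0e10 * 0.02 * math.pi * 150.0**2
diameter = patch_diameter(synthetic_moment, 0.02)
```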

  15. Along-strike variations in fault frictional properties along the San Andreas Fault near Cholame, California from joint earthquake and low-frequency earthquake relocations

    USGS Publications Warehouse

    Harrington, Rebecca M.; Cochran, Elizabeth S.; Griffiths, Emily M.; Zeng, Xiangfang; Thurber, Clifford H.

    2016-01-01

Recent observations of low-frequency earthquakes (LFEs) and tectonic tremor along the Parkfield-Cholame segment of the San Andreas fault suggest slow-slip earthquakes occur in a transition zone between the shallow fault, which accommodates slip by a combination of aseismic creep and earthquakes (<15 km depth), and the deep fault, which accommodates slip by stable sliding (>35 km depth). However, the spatial relationship between shallow earthquakes and LFEs remains unclear. Here, we present precise relocations of 34 earthquakes and 34 LFEs recorded during a temporary deployment of 13 broadband seismic stations from May 2010 to July 2011. We use the temporary array waveform data, along with data from permanent seismic stations and a new high-resolution 3D velocity model, to illuminate the fine-scale details of the seismicity distribution near Cholame and the relation to the distribution of LFEs. The depth of the boundary between earthquake and LFE hypocenters changes along strike and roughly follows the 350°C isotherm, suggesting frictional behavior may be, in part, thermally controlled. We observe no overlap in the depth of earthquakes and LFEs, with an ~5 km separation between the deepest earthquakes and shallowest LFEs. In addition, clustering in the relocated seismicity near the 2004 Mw 6.0 Parkfield earthquake hypocenter and near the northern boundary of the 1857 Mw 7.8 Fort Tejon rupture may highlight areas of frictional heterogeneities on the fault where earthquakes tend to nucleate.

  16. The 1909 Taipei earthquake: implication for seismic hazard in Taipei

    USGS Publications Warehouse

    Kanamori, Hiroo; Lee, William H.K.; Ma, Kuo-Fong

    2012-01-01

The 1909 April 14 Taiwan earthquake caused significant damage in Taipei. Most of the information on this earthquake available until now is from written reports on its macroseismic effects and from seismic station bulletins. In view of the importance of this event for assessing the shaking hazard in present-day Taipei, we collected historical seismograms and station bulletins of this event and investigated them in conjunction with other seismological data. We compared the observed seismograms with those from recent earthquakes in similar tectonic environments to characterize the 1909 earthquake. Despite the inevitably large uncertainties associated with old data, we conclude that the 1909 Taipei earthquake is a relatively deep (50-100 km) intraplate earthquake that occurred within the subducting Philippine Sea Plate beneath Taipei, with an estimated Mw of 7 ± 0.3. Some intraplate events elsewhere in the world are enriched in high-frequency energy and the resulting ground motions can be very strong. Thus, despite its relatively large depth and a moderately large magnitude, it would be prudent to review the safety of the existing structures in Taipei against large intraplate earthquakes like the 1909 Taipei earthquake.

  17. Pulse-like partial ruptures and high-frequency radiation at creeping-locked transition during megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Michel, Sylvain; Avouac, Jean-Philippe; Lapusta, Nadia; Jiang, Junle

    2017-08-01

    Megathrust earthquakes tend to be confined to fault areas locked in the interseismic period and often rupture them only partially. For example, during the 2015 M7.8 Gorkha earthquake, Nepal, a slip pulse propagating along strike unzipped the bottom edge of the locked portion of the Main Himalayan Thrust (MHT). The lower edge of the rupture produced dominant high-frequency (>1 Hz) radiation of seismic waves. We show that similar partial ruptures occur spontaneously in a simple dynamic model of earthquake sequences. The fault is governed by standard laboratory-based rate-and-state friction with the aging law and contains one homogeneous velocity-weakening (VW) region embedded in a velocity-strengthening (VS) area. Our simulations incorporate inertial wave-mediated effects during seismic ruptures (they are thus fully dynamic) and account for all phases of the seismic cycle in a self-consistent way. Earthquakes nucleate at the edge of the VW area and partial ruptures tend to stay confined within this zone of higher prestress, producing pulse-like ruptures that propagate along strike. The amplitude of the high-frequency sources is enhanced in the zone of higher, heterogeneous stress at the edge of the VW area.

  18. Pulse-Like Partial Ruptures and High-Frequency Radiation at Creeping-Locked Transition during Megathrust Earthquakes

    NASA Astrophysics Data System (ADS)

    Michel, S. G. R. M.; Avouac, J. P.; Lapusta, N.; Jiang, J.

    2017-12-01

    Megathrust earthquakes tend to be confined to fault areas locked in the interseismic period and often rupture them only partially. For example, during the 2015 M7.8 Gorkha earthquake, Nepal, a slip pulse propagating along strike unzipped the bottom edge of the locked portion of the Main Himalayan Thrust (MHT). The lower edge of the rupture produced dominant high-frequency (>1 Hz) radiation of seismic waves. We show that similar partial ruptures occur spontaneously in a simple dynamic model of earthquake sequences. The fault is governed by standard laboratory-based rate-and-state friction with the ageing law and contains one homogeneous velocity-weakening (VW) region embedded in a velocity-strengthening (VS) area. Our simulations incorporate inertial wave-mediated effects during seismic ruptures (they are thus fully dynamic) and account for all phases of the seismic cycle in a self-consistent way. Earthquakes nucleate at the edge of the VW area and partial ruptures tend to stay confined within this zone of higher prestress, producing pulse-like ruptures that propagate along strike. The amplitude of the high-frequency sources is enhanced in the zone of higher, heterogeneous stress at the edge of the VW area.

  19. Modeling temporal changes of low-frequency earthquake bursts near Parkfield, CA

    NASA Astrophysics Data System (ADS)

    Wu, C.; Daub, E. G.

    2016-12-01

    Tectonic tremor and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments over the last decade. LFEs are presumed to be caused by failure of deep fault patches during a slow slip event, and the long-term variation in LFE recurrence could provide crucial insight into the deep fault zone processes that may lead to future large earthquakes. However, the physical mechanisms causing the temporal changes of LFE recurrence are still under debate. In this study, we combine observations of long-term changes in LFE burst activity near Parkfield, CA with a brittle and ductile friction (BDF) model, and use the model to constrain the possible physical mechanisms causing the observed long-term changes in LFE burst activity after the 2004 M6 Parkfield earthquake. The BDF model mimics the slipping of deep fault patches with a block slider dragged by a spring and subject to both brittle and ductile friction components. We use the BDF model to test possible mechanisms including static stress imposed by the Parkfield earthquake, changes in pore pressure, tectonic force, afterslip, brittle friction strength, and brittle contact failure distance. The simulation results suggest that changes in brittle friction strength and failure distance are more likely to cause the observed changes in LFE bursts than the other mechanisms.

  20. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology that improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics, the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loève (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We extend that methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike that reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations.
We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
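
The K-L slip-sampling step described in this record can be sketched in a few lines. This is a minimal illustration, not the authors' code: the 1-D fault discretization, exponential covariance, correlation length, and slip statistics are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D fault discretization (50 subfaults along strike)
n = 50
x = np.linspace(0.0, 100.0, n)           # subfault positions, km (assumed)
corr_len = 20.0                          # correlation length, km (assumed)
sigma = 1.0                              # slip standard deviation, m (assumed)
mean_slip = 2.0                          # mean slip, m (assumed)

# Exponential covariance between subfault pairs
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# K-L expansion: eigendecomposition of the covariance, truncated to the
# leading modes carrying 99% of the total variance
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99)) + 1

# Each slip sample = mean + random combination of the leading modes
xi = rng.standard_normal((1000, k))      # i.i.d. standard normal coefficients
samples = mean_slip + xi @ (np.sqrt(vals[:k]) * vecs[:, :k]).T

print(samples.shape)                     # 1000 samples of 50-subfault slip
```

Truncating the expansion at the modes that carry most of the variance is what makes the subsequent uncertainty propagation (e.g., via SROM) tractable: each earthquake sample is described by only k random coefficients rather than one value per subfault.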

  1. Evaluation of the statistical evidence for Characteristic Earthquakes in the frequency-magnitude distributions of Sumatra and other subduction zone regions

    NASA Astrophysics Data System (ADS)

    Naylor, M.; Main, I. G.; Greenhough, J.; Bell, A. F.; McCloskey, J.

    2009-04-01

    The Sumatran Boxing Day earthquake and subsequent large events provide an opportunity to re-evaluate the statistical evidence for characteristic earthquake events in frequency-magnitude distributions. Our aims are to (i) improve intuition regarding the properties of samples drawn from power laws, (ii) illustrate using random samples how appropriate Poisson confidence intervals can both aid the eye and provide an appropriate statistical evaluation of data drawn from power-law distributions, and (iii) apply these confidence intervals to test for evidence of characteristic earthquakes in subduction-zone frequency-magnitude distributions. We find no need for a characteristic model to describe the frequency-magnitude distributions of any of the investigated subduction zones, including Sumatra, due to an emergent skew in the residuals of power-law count data at high magnitudes combined with a sample bias toward examining large earthquakes as candidate characteristic events.
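
Poisson confidence intervals of the kind advocated here can be attached to a Gutenberg-Richter frequency-magnitude histogram with a few lines of code. A hedged sketch, not the authors' implementation; the b-value, bin edges, and catalog size are illustrative assumptions.

```python
import math

def gr_expected(n_total, b, m_edges, m_min):
    """Expected counts per magnitude bin under a Gutenberg-Richter law."""
    surv = [10.0 ** (-b * (m - m_min)) for m in m_edges]   # N(>=m) / N_total
    return [n_total * (surv[i] - surv[i + 1]) for i in range(len(m_edges) - 1)]

def poisson_interval(mu, alpha=0.05):
    """Central (1 - alpha) Poisson interval for mean mu, by direct summation."""
    lo, cum, k = None, 0.0, 0
    while True:
        # pmf evaluated in log space to stay stable for large mu
        p = math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))
        cum += p
        if lo is None and cum > alpha / 2:
            lo = k
        if cum >= 1 - alpha / 2:
            return lo, k
        k += 1

# Illustrative catalog: 1000 events, b = 1, bins from M5 to M7
edges = [5.0, 5.5, 6.0, 6.5, 7.0]
expected = gr_expected(n_total=1000, b=1.0, m_edges=edges, m_min=5.0)
for lo_m, hi_m, mu in zip(edges, edges[1:], expected):
    lo, hi = poisson_interval(mu)
    print(f"M{lo_m}-{hi_m}: expect {mu:6.1f}, 95% interval [{lo}, {hi}]")
```

An observed bin count falling outside its interval would be a candidate "characteristic" excursion; the paper's point is that apparent excursions at high magnitudes often sit comfortably within such intervals once the skew of small power-law counts is accounted for.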

  2. Modelling the Time Dependence of Frequency Content of Long-period Volcanic Earthquakes

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Neuberg, J. W.

    2001-12-01

    Broad-band seismic networks provide a powerful tool for the observation and analysis of volcanic earthquakes. The amplitude spectrogram allows us to follow the frequency content of these signals with time. Observed amplitude spectrograms of long-period volcanic earthquakes display distinct spectral lines sometimes varying by several Hertz over time spans of minutes to hours. We first present several examples associated with various phases of volcanic activity at Soufrière Hills volcano, Montserrat. Then, we present and discuss two mechanisms to explain such frequency changes in the spectrograms: (i) change of physical properties within the magma and (ii) change in the triggering frequency of repeated sources within the conduit. We use 2D and 3D finite-difference modelling methods to compute the propagation of seismic waves in simplified volcanic structures: (i) we model the gliding spectral lines by introducing continuously changing magma properties during the wavefield computation; (ii) we explore the resulting pressure distribution within the conduit and its potential role in triggering further events. We obtain constraints on both amplitude and time-scales for changes of magma properties that are required to model gliding lines in amplitude spectrograms.

  3. Tectonic Tremor and the Collective Behavior of Low-Frequency Earthquakes

    NASA Astrophysics Data System (ADS)

    Frank, W.; Shapiro, N.; Husker, A. L.; Kostoglodov, V.; Campillo, M.; Gusev, A. A.

    2015-12-01

    Tectonic tremor, a long duration, emergent seismic signal observed along the deep roots of plate interfaces, is thought to be the superposition of repetitive shear events called low-frequency earthquakes (LFE) [e.g. Shelly et al., Nature, 2007]. We use a catalog of more than 1.8 million LFEs regrouped into more than 1000 families observed over 2 years in the Guerrero subduction zone in Mexico, considering each family as an individual repetitive source or asperity. We develop a statistical analysis to determine whether the subcatalogs corresponding to different sources represent random Poisson processes or if they exhibit scale-invariant clustering in time, which we interpret as a manifestation of collective behavior. For each individual LFE source, we compare their level of collective behavior during two time periods: during the six-month-long 2006 Mw 7.5 slow-slip event and during a calm period with no observed slow slip. We find that the collective behavior of LFEs depends on distance from the trench and increases when the subduction interface is slowly slipping. Our results suggest that the occurrence of strong episodes of tectonic tremors cannot be simply explained by increased rates of low frequency earthquakes at every individual LFE source but correspond to an enhanced collective behavior of the ensemble of LFE asperities.

  4. Shallow very-low-frequency earthquakes accompany slow slip events in the Nankai subduction zone.

    PubMed

    Nakano, Masaru; Hori, Takane; Araki, Eiichiro; Kodaira, Shuichi; Ide, Satoshi

    2018-03-14

    Recent studies of slow earthquakes along plate boundaries have shown that tectonic tremor, low-frequency earthquakes, very-low-frequency events (VLFEs), and slow-slip events (SSEs) often accompany each other and appear to share common source faults. However, the source processes of slow events occurring in the shallow part of plate boundaries are not well known because seismic observations have been limited to land-based stations, which offer poor resolution beneath offshore plate boundaries. Here we use data obtained from seafloor observation networks in the Nankai trough, southwest of Japan, to investigate shallow VLFEs in detail. Coincident with the VLFE activity, signals indicative of shallow SSEs were detected by geodetic observations at seafloor borehole observatories in the same region. We find that the shallow VLFEs and SSEs share common source regions and almost identical time histories of moment release. We conclude that these slow events arise from the same fault slip and that VLFEs represent relatively high-frequency fluctuations of slip during SSEs.

  5. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  6. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl

    2017-04-01

    A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V ), where P(V ) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V . The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.
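
The misfit-to-probability conversion and the confidence curve P(V) described here can be illustrated on a toy grid. A sketch under stated assumptions (synthetic misfit values and an assumed exponential likelihood scale), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a moment-tensor grid search: one misfit value per grid point
misfit = rng.gamma(shape=2.0, scale=1.0, size=10_000)
scale = 0.5                               # assumed misfit-to-likelihood scale
prob = np.exp(-misfit / scale)
prob /= prob.sum()                        # normalized probability per grid cell

# Confidence curve P(V): probability contained in the fraction V of the grid
# closest in misfit to the best-fitting solution
order = np.argsort(misfit)
P = np.cumsum(prob[order])
V = np.arange(1, misfit.size + 1) / misfit.size

# Area under P(V): a single confidence parameter between 0.5 and 1
confidence = float(((P[1:] + P[:-1]) / 2 * np.diff(V)).sum())
print(round(confidence, 3))
```

Because the grid cells are ranked by misfit, P(V) is at least V everywhere, so the area under the curve lies between about 0.5 (uninformative, probability spread over the whole grid) and 1.0 (tightly constrained solution).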

  7. The Detection of Very Low Frequency Earthquake using Broadband Seismic Array Data in South-Western Japan

    NASA Astrophysics Data System (ADS)

    Ishihara, Y.; Yamanaka, Y.; Kikuchi, M.

    2002-12-01

    A variety of low-frequency seismic sources have become evident from dense, uniformly instrumented seismic networks. Kikuchi (2000) and Kumagai et al. (2001) analyzed ground motion with a period of about 50 s excited by the volcanic activity of Miyake-jima, Izu Islands. JMA routinely lists low-frequency earthquakes in its hypocenter determinations. Obara (2002) detected low-frequency (2-4 Hz) tremor occurring along the subducting Philippine Sea plate by envelope analysis of the dense short-period seismic network (Hi-net). Monitoring of continuous long-period waveforms reveals the existence of many unknown sources. Recently, the broadband seismic network of Japan (F-net, previously named FREESIA) was developed and now extends as a linear array over about 3,000 km. We reviewed the long-period seismic data and earthquake catalogues, and many candidate events excited by unknown sources were picked manually. The candidates were reconfirmed in detail on the original seismograms and their rough frequency characteristics were evaluated. Most events have very-low-frequency seismograms with dominant periods of 20-30 s and amplitudes smaller than the ground noise level at shorter periods. We developed a hypocenter determination technique based on a grid search, and for the major events we performed moment tensor inversion. Most sources locate at the subducting plate at depths greater than 30 km; however, the locations do not overlap the low-frequency tremor source region. The major events have moment magnitudes of 4 or greater and estimated source durations of around 20 s. We conclude that low-frequency seismic event series exist over a wide period range in the subduction area. The very low frequency earthquakes occurred along the Nankai and Ryukyu troughs in southwestern Japan. We plan to survey very-low-frequency events systematically in the wider western Pacific region.

  8. A source migration of low frequency earthquakes during the 2000 activity of Miyake-jima volcano, Japan

    NASA Astrophysics Data System (ADS)

    Kobayashi, T.; Ohminato, T.; Fujita, E.; Ida, Y.

    2002-12-01

    The volcanic activity of Miyake-jima started at 18:30 (JST) on June 26, 2000 with large ground deformation and earthquake swarms. The seismic activity started at the southern part of the island; the hypocenter distribution migrated northwestward and moved off the island by early in the morning of June 27. Low-frequency (LF) earthquakes with dominant frequencies of 0.2 and 0.4 Hz were first observed in the afternoon of June 27. The LF activity lasted until the first summit eruption on July 8. The Earthquake Research Institute of the University of Tokyo and the National Research Institute for Earth Science and Disaster Prevention deployed 3 CMG-3T and 4 STS-2 broadband seismometers on the island. More than 300 LF earthquakes were detected during the period from June 27 to July 8. Most of the LF events whose dominant frequency is 0.2 Hz occurred before July 1, while LF events with a dominant frequency of 0.4 Hz mainly occurred after July 2. We determine hypocenters of these LF events using the following technique. For each LF event, we assume a source location on a grid point in a homogeneous half-space. A reference station is chosen among all the stations. The cross-correlation coefficients are computed between the waveform of the reference station and those of the other stations, and the coefficients for all the stations are summed. In the same manner, sums of the coefficients are computed grid by grid. The grid point that gives the maximum value of the summed coefficients is regarded as the best estimate of the source location of the LF event under consideration. The result shows that hypocenters of LF events are spread over the southern to western part of the island and migrate from south to west day by day. Hypocenter migrations associated with volcanic activity have often been reported, but usually for short-period events. This is one of the remarkable cases in which a migration of earthquakes with dominant frequencies as low as 0.2 and 0.4 Hz is clearly
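
The grid-search location technique summarized in this record (shifting each station's waveform by its predicted delay relative to a reference station and summing cross-correlation coefficients grid by grid) might be sketched as follows. The geometry, velocity, wavelet, and noise level are invented for the example; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented geometry: four stations (km) and a true source, homogeneous medium
v = 3.0                                   # assumed wave speed, km/s
dt = 0.01                                 # sample interval, s
t = np.arange(0.0, 20.0, dt)
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [-20.0, 15.0]])
true_src = np.array([12.0, 8.0])

def wavelet(t0):
    """Band-limited test pulse arriving at time t0."""
    return np.exp(-((t - t0) / 0.3) ** 2) * np.sin(2 * np.pi * (t - t0))

# Synthetic records: pulse delayed by travel time from the true source + noise
traces = np.array([wavelet(2.0 + np.linalg.norm(s - true_src) / v)
                   + 0.02 * rng.standard_normal(t.size) for s in stations])

# Grid search: shift each trace by its predicted delay relative to the
# reference station and sum the normalized cross-correlation coefficients
ref = 0
best, best_score = None, -np.inf
for gx in np.arange(-5.0, 35.0, 1.0):
    for gy in np.arange(-5.0, 30.0, 1.0):
        g = np.array([gx, gy])
        score = 0.0
        for s, tr in zip(stations, traces):
            d = (np.linalg.norm(s - g) - np.linalg.norm(stations[ref] - g)) / v
            shifted = np.roll(tr, -int(round(d / dt)))
            score += np.dot(shifted, traces[ref]) / (
                np.linalg.norm(shifted) * np.linalg.norm(traces[ref]))
        if score > best_score:
            best, best_score = g, score

print(best)                               # grid point nearest the true source
```

With aligned pulses the summed coefficient approaches the number of stations at the correct grid point and falls toward the noise level elsewhere, so the argmax recovers the synthetic source location.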

  9. Complex frequency analysis Tornillo earthquake Lokon Volcano in North Sulawesi period 1 January-17 March 2016

    NASA Astrophysics Data System (ADS)

    Hasanah, Intan; Syahbana, Devy Kamil; Santoso, Agus; Palupi, Indriati Retno

    2017-07-01

    Indonesia has 127 active volcanoes and, consequently, very active seismicity. This study examines the temporal variation in the complex frequencies of Tornillo earthquakes at Lokon Volcano, North Sulawesi, during the period from January 1 to March 17, 2016. The analysis uses the Sompi method, a homogeneous autoregressive (AR) technique whose complex-frequency parameters are the oscillation frequency (f) and the decay character of the coda waves (Q factor). The purpose of this research is to understand the fluid dynamics inside Lokon Volcano during this period. The results allow us to estimate the fluid dynamics inside the volcano and to identify the fluid content and the dimensions of the dynamic crust. The Tornillo earthquakes in this period have Q values (wave decay) distributed below 200 and frequencies distributed between 3 and 4 Hz; they occurred at shallow depths of less than 2 km, aligned toward the Tompaluan Crater. From the complex-frequency analysis, any eruption at Lokon Volcano during this period would be expected to be phreatic, with an estimated fluid composition of misty gas with a gas mass fraction between 0 and 100%, or of water vapor with a gas volume fraction in the range 10-90%.

  10. Study of Low-Frequency Earth motions from Earthquakes and a Hurricane using a Modified Standard Seismometer

    NASA Astrophysics Data System (ADS)

    Peters, R. D.

    2004-12-01

    The modification of a WWSSN Sprengnether vertical seismometer has resulted in significantly improved performance at low frequencies. Instead of being used as a velocity detector as originally designed, the Faraday subsystem is made to function as an actuator to provide a type of force feedback. Added to the instrument to detect ground motions is an array form of the author's symmetric differential capacitive (SDC) sensor. The feedback circuit is not conventional, but rather is used to eliminate long-term drift by placing between sensor and actuator an operational amplifier integrator having a time constant of several thousand seconds. Signal to noise ratio at low frequencies is increased, since the modified instrument does not suffer from the 20dB/decade falloff in sensitivity that characterizes conventional force-feedback seismometers. A Hanning-windowed FFT algorithm is employed in the analysis of recorded earthquakes, including that of the very large Indonesia earthquake (M 7.9) of 25 July 2004. The improved low frequency response allows the study of the free oscillations of the Earth that accompany large earthquakes. Data will be provided showing oscillations with spectral components in the vicinity of 1 mHz, that frequently have been observed with this instrument to occur both before as well as after an earthquake. Additionally, microseisms and other interesting data will be shown from records collected by the instrument as Hurricane Charley moved across Florida and up the eastern seaboard.
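
The Hanning-windowed FFT analysis mentioned above can be sketched in a few lines; the synthetic 1 mHz oscillation, record length, and noise level are assumptions for illustration, not the instrument's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic record: a 1 mHz "free oscillation" plus white noise, 48 h at 1 Hz
fs = 1.0                                    # samples per second (assumed)
t = np.arange(0, 48 * 3600, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 1e-3 * t) + rng.standard_normal(t.size)

# Hann (Hanning) window suppresses spectral leakage before the FFT
w = np.hanning(t.size)
spec = np.abs(np.fft.rfft(x * w))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Restrict to the 0.1-10 mHz band and report the dominant peak
band = (freqs > 1e-4) & (freqs < 1e-2)
peak = freqs[band][np.argmax(spec[band])]
print(f"dominant peak: {peak * 1e3:.2f} mHz")   # recovers the ~1 mHz line
```

Even with the oscillation buried well below the per-sample noise level, the long record concentrates the signal into a single frequency bin, which is why multi-hour windows are needed to resolve millihertz free-oscillation peaks.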

  11. Investigation of Backprojection Uncertainties With M6 Earthquakes

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Shearer, Peter M.

    2017-10-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10 year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (˜700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.

  12. Spectral Estimation of Seismic Moment, Corner Frequency and Radiated Energy for Earthquakes in the Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Satriano, C.; Mejia Uquiche, A. R.; Saurel, J. M.

    2016-12-01

    The Lesser Antilles are situated at a convergent plate boundary where the North and South American plates subduct below the Caribbean Plate at a rate of about 2 cm/y. The subduction forms the volcanic arc of the Lesser Antilles and generates three types of seismicity: subduction earthquakes at the plate interface, intermediate-depth earthquakes within the subducting oceanic plates and crustal earthquakes associated with the deformation of the Caribbean Plate. Even though the seismicity rate is moderate, this zone has generated major earthquakes in the past, like the subduction event of February 8, 1843, estimated M 8.5 (Beauducel and Feuillet, 2012), the Mw 6.3 "Les Saintes" crustal earthquake of November 24, 2004 (Drouet et al., 2011), and the Mw 7.4 Martinique intermediate earthquake of November 29, 2007 (Bouin et al., 2010). The seismic catalogue produced by the Volcanological and Seismological Observatories of Guadeloupe and Martinique comprises about 1000 events per year, most of them of moderate magnitude (M < 5.0). The observation and characterization of this background seismicity play a fundamental role in understanding the processes of energy accumulation and release that prepare major earthquakes. For this reason, the catalogue needs to be completed with information such as seismic moment, corner frequency and radiated energy, which give access to important fault properties like the rupture size and the static and apparent stress drops. So far, this analysis has only been performed for the "Les Saintes" sequence (Drouet et al., 2011). Here we present a systematic study of the Lesser Antilles merged seismic catalogue (http://www.seismes-antilles.fr), between 2002 and 2013, using broadband data from the West Indies seismic network and recordings from the French Accelerometric Network.
The analysis is aimed at determining, from the inversion of S-wave displacement spectra, source parameters like seismic moment, corner frequency and radiated energy, as well as the inelastic

  13. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    NASA Astrophysics Data System (ADS)

    Alvizuri, Celso R.

    rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: 63 small (Mw < 2.5) events at Uturuncu volcano in Bolivia, moderate (Mw > 4) earthquakes in the southern Alaska subduction zone, and 12 earthquakes and 17 nuclear explosions at the Nevada Test Site. Characterization of moment tensor uncertainties puts us in a better position to discriminate among moment tensor source types and to assign physical processes to the events.

  14. Rupture process of the M 7.9 Denali fault, Alaska, earthquake: Subevents, directivity, and scaling of high-frequency ground motions

    USGS Publications Warehouse

    Frankel, A.

    2004-01-01

    Displacement waveforms and high-frequency acceleration envelopes from stations at distances of 3-300 km were inverted to determine the source process of the M 7.9 Denali fault earthquake. Fitting the initial portion of the displacement waveforms indicates that the earthquake started with an oblique thrust subevent (subevent #1) with an east-west-striking, north-dipping nodal plane consistent with the observed surface rupture on the Susitna Glacier fault. Inversion of the remainder of the waveforms (0.02-0.5 Hz) for moment release along the Denali and Totschunda faults shows that rupture proceeded eastward on the Denali fault, with two strike-slip subevents (numbers 2 and 3) centered about 90 and 210 km east of the hypocenter. Subevent 2 was located across from the station at PS 10 (Trans-Alaska Pipeline Pump Station #10) and was very localized in space and time. Subevent 3 extended from 160 to 230 km east of the hypocenter and had the largest moment of the subevents. Based on the timing between subevent 2 and the east end of subevent 3, an average rupture velocity of 3.5 km/sec, close to the shear wave velocity at the average rupture depth, was found. However, the portion of the rupture 130-220 km east of the epicenter appears to have an effective rupture velocity of about 5.0 km/sec, which is supershear. These two subevents correspond approximately to areas of large surface offsets observed after the earthquake. Using waveforms of the M 6.7 Nenana Mountain earthquake as empirical Green's functions, the high-frequency (1-10 Hz) envelopes of the M 7.9 earthquake were inverted to determine the location of high-frequency energy release along the faults. The initial thrust subevent produced the largest high-frequency energy release per unit fault length. The high-frequency envelopes and acceleration spectra (>0.5 Hz) of the M 7.9 earthquake can be simulated by chaining together rupture zones of the M 6.7 earthquake over distances from 30 to 180 km east of the

  15. Quantification of uncertainties of the tsunami risk in Cascadia

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Sarri, A.; Day, S. J.; Liu, X.; Dias, F.

    2013-12-01

    We first show new realistic simulations of earthquake-generated tsunamis in Cascadia (Western Canada and USA) using VOLNA. VOLNA is a solver of nonlinear shallow water equations on unstructured meshes that is accelerated on the new GPU system Emerald. Primary outputs from these runs are tsunami inundation maps, accompanied by site-specific wave trains and flow velocity histories. The variations in inputs (here seabed deformations due to earthquakes) are time-varying shapes difficult to sample, and they require an integrated statistical and geophysical analysis. Furthermore, the uncertainties in the bathymetry require extensive investigation and optimization of the resolutions at the source and impact. Thus we need to run VOLNA for well chosen combinations of the inputs and the bathymetry to reflect the various sources of uncertainties, and we interpolate in between using a so-called statistical emulator that keeps track of the additional uncertainties due to the interpolation itself. We present novel adaptive sequential designs that enable such choices of the combinations for our Gaussian Process (GP) based emulator in order to maximize the information from the limited number of runs of VOLNA that can be computed. GPs show strength in the approximation of the response surface but suffer from large computer costs associated with the fitting. Hence, a careful selection of the inputs is necessary to optimize the trade-off fit versus computations. Finally, we also propose to assess the frequencies and intensities of the earthquakes along the Cascadia subduction zone that have been demonstrated by geological palaeoseismic, palaeogeodetic and tsunami deposit studies in Cascadia. As a result, the hazard assessment aims to reflect the multiple non-linearities and uncertainties for the tsunami risk in Cascadia.
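
The emulator idea, interpolating between a limited number of expensive simulator runs with a Gaussian process that tracks its own interpolation uncertainty, can be sketched in one dimension. The kernel, hyperparameters, and stand-in "simulator" are assumptions for the example; the actual VOLNA emulation is higher-dimensional and uses adaptive sequential designs.

```python
import numpy as np

def rbf(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel (assumed hyperparameters)."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Stand-in for the expensive simulator: a smooth scalar response on [0, 1]
f = lambda x: np.sin(2 * np.pi * x)
X = np.linspace(0.0, 1.0, 8)             # 8 "runs" of the simulator (design)
y = f(X)

# GP posterior mean and variance at new inputs, with jitter for stability
Xs = np.linspace(0.0, 1.0, 101)
K = rbf(X, X) + 1e-8 * np.eye(X.size)
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

# The emulator tracks the simulator between design points; its predictive
# standard deviation quantifies the extra uncertainty from interpolation
err = float(np.max(np.abs(mean - f(Xs))))
print(err, float(std.max()))
```

The predictive standard deviation vanishes at the design points and grows between them, which is exactly the "additional uncertainty due to the interpolation itself" that the emulator is said to keep track of, and what a sequential design seeks to reduce with each new simulator run.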

  16. Models of tremor and low-frequency earthquake swarms on Montserrat

    NASA Astrophysics Data System (ADS)

    Neuberg, J.; Luckett, R.; Baptie, B.; Olsen, K.

    2000-08-01

    Recent observations from Soufrière Hills volcano in Montserrat reveal a wide variety of low-frequency seismic signals. We discuss similarities and differences between hybrid earthquakes and long-period events, and their role in explosions and rockfall events. These events usually occur in swarms, and occasionally merge into tremor, an observation that can shed further light on the generation and composition of harmonic tremor. We use a 2D finite difference method to model major features of low-frequency seismic signatures and compare them with the observations. A depth-dependent velocity model for a fluid-filled conduit is introduced which accounts for the varying gas content in the magma, and the impact on the seismic signals is discussed. We carefully analyse episodes of tremor that show shifting spectral lines and model these in terms of changes in the gas content of the magma as well as in terms of a time-dependent triggering mechanism of low-frequency resonances. In this way we explain the simultaneous occurrence of low-frequency events and tremor with a spectral content comprising integer harmonics.

  17. Deep low-frequency earthquakes in tremor localize to the plate interface in multiple subduction zones

    USGS Publications Warehouse

    Brown, Justin R.; Beroza, Gregory C.; Ide, Satoshi; Ohta, Kazuaki; Shelly, David R.; Schwartz, Susan Y.; Rabbel, Wolfgang; Thorwart, M.; Kao, Honn

    2009-01-01

    Deep tremor under Shikoku, Japan, consists primarily, and perhaps entirely, of swarms of low-frequency earthquakes (LFEs) that occur as shear slip on the plate interface. Although tremor is observed at other plate boundaries, the lack of cataloged low-frequency earthquakes has precluded a similar conclusion about tremor in those locales. We use a network autocorrelation approach to detect and locate LFEs within tremor recorded at three subduction zones characterized by different thermal structures and levels of interplate seismicity: southwest Japan, northern Cascadia, and Costa Rica. In each case we find that LFEs are the primary constituent of tremor and that they locate on the deep continuation of the plate boundary. This suggests that tremor in these regions shares a common mechanism and that temperature is not the primary control on such activity.

  18. Low frequency tremors in the Tonankai accretionary prism, triggered by the 2011 Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    To, A.; Obana, K.; Takahashi, N.; Fukao, Y.

    2012-12-01

    There have been many reports of tremors and micro-earthquakes triggered by the 2011 Tohoku-Oki earthquake, most of them based on land observations. Here, we report that numerous low-frequency tremors were recorded by broadband ocean-bottom seismographs of DONET, a network of cabled observatory systems deployed in the Tonankai accretionary prism of the Nankai trough. Ten stations were in operation at the time of the earthquake. The tremors are observed at five of the stations, which are located on the landward slope of the Nankai trough. In contrast, the signals are weak at stations near the coast, which are placed on the Kumano forearc basin. The tremors are dominant in a frequency range of 1-10 Hz. Their durations range from tens of seconds to a few minutes. More than 20 events per hour can be detected in the first few days after the earthquake. The activity continued for about three weeks with a decrease in the frequency of occurrence. An intriguing feature of the observed tremors is that some of them have a very low frequency (VLF) component, most clearly visible between 0.02 and 0.05 Hz. We found 74 such events within 5 days after the great earthquake. For each event, the VLF signal is detected at only one station, in contrast to the high-frequency signal (2-8 Hz), which can be observed at more than a few stations. We estimated the source locations of the VLF events by measuring the onsets of envelope seismograms constructed from the high-frequency (2-8 Hz) horizontal components. Because of the unclear onsets and the limited number of observable stations per event, the individual events were located with large location errors. Therefore, we assumed that 11 of the events, whose VLF waveforms are similar to each other with high correlation coefficients (>0.92), are co-located. The measured travel times for the 11 events were compared and some outliers were discarded. We grid-searched through a 3-D S-wave velocity model for the event location, which minimizes the travel

  19. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  20. Quantifying and Qualifying USGS ShakeMap Uncertainty

    USGS Publications Warehouse

    Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent

    2008-01-01

    We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably, elevated uncertainty due to initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions
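    The letter-grade idea can be illustrated with a minimal sketch; the threshold values below are hypothetical placeholders, not the actual USGS grading criteria:

```python
# Illustrative thresholds only; the real USGS grading criteria differ and
# also account for magnitude, source finiteness, and observation coverage.
def shakemap_grade(mean_uncertainty_ratio):
    """Map a map-wide mean multiplicative uncertainty factor
    (GMPE sigma scaling, 1.0 = nominal) to a qualitative letter grade."""
    thresholds = [(1.05, "A"), (1.15, "B"), (1.25, "C"), (1.4, "D")]
    for limit, grade in thresholds:
        if mean_uncertainty_ratio <= limit:
            return grade
    return "F"
```

For example, under these illustrative thresholds a well-constrained map (`1.0`) grades 'A', while an unconstrained large-event map (`2.0`) grades 'F'.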

  1. Structure of the tsunamigenic plate boundary and low-frequency earthquakes in the southern Ryukyu Trench

    PubMed Central

    Arai, Ryuta; Takahashi, Tsutomu; Kodaira, Shuichi; Kaiho, Yuka; Nakanishi, Ayako; Fujie, Gou; Nakamura, Yasuyuki; Yamamoto, Yojiro; Ishihara, Yasushi; Miura, Seiichi; Kaneda, Yoshiyuki

    2016-01-01

    It has been recognized that even weakly coupled subduction zones may cause large interplate earthquakes leading to destructive tsunamis. The Ryukyu Trench is one of the best fields in which to study this phenomenon, since various slow earthquakes and tsunamis have occurred there; yet the fault structure and seismic activity are poorly constrained. Here we present seismological evidence from marine observations for megathrust faults and low-frequency earthquakes (LFEs). On the basis of passive observation we find that LFEs occur at 15–18 km depths along the plate interface, and their distribution seems to bridge the gap between the shallow tsunamigenic zone and the deep slow slip region. This suggests that the southern Ryukyu Trench is dominated by slow earthquakes at all depths and lacks a typical locked zone. The plate interface is overlaid by a low-velocity wedge and is accompanied by polarity reversals of seismic reflections, indicating that fluids exist at various depths along the plate interface. PMID:27447546

  2. Uncertainty, variability, and earthquake physics in ground‐motion prediction equations

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.

    2017-01-01

    Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast as repeatable (epistemic) residuals and random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 earthquakes (1.15 ≤ M ≤ 3) and their peak ground accelerations (PGAs), recorded at close distances (R ≤ 20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44 for a nonergodic assumption, that is, for a single source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
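    The residual-partitioning scheme can be sketched on synthetic data. The simple row/column-mean decomposition below, which ignores the path term, is an illustrative stand-in for the mixed-effects regressions used in practice, and all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_eq, n_sta = 50, 20
# Synthetic repeatable event and station terms plus aleatory noise (ln units).
eq_terms = rng.normal(0.0, 0.5, n_eq)
sta_terms = rng.normal(0.0, 0.4, n_sta)
resid = eq_terms[:, None] + sta_terms[None, :] + rng.normal(0.0, 0.3, (n_eq, n_sta))

# Estimate repeatable terms as row (event) and column (station) means:
# a crude two-way decomposition; real studies use mixed-effects regression.
eq_est = resid.mean(axis=1)
sta_est = (resid - eq_est[:, None]).mean(axis=0)
nonergodic = resid - eq_est[:, None] - sta_est[None, :]

sigma_total = resid.std()          # ergodic standard deviation
sigma_nonergodic = nonergodic.std()  # after removing repeatable terms
```

Removing the repeatable terms shrinks the standard deviation toward the aleatory noise level, mirroring the 0.97-to-0.44 ln-unit reduction reported in the abstract.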

  3. High Frequency Cut-off Characteristics of Strong Ground Motion Records at Hard Sites, Subduction and Intra-Slab Earthquakes

    NASA Astrophysics Data System (ADS)

    Kagawa, T.; Tsurugi, M.; Irikura, K.

    2006-12-01

    A study on the high-frequency cut-off characteristics of strong ground motion is presented for subduction and intra-slab earthquakes in Japan. In the last decade, records observed at hard sites have been published by NIED, the National Research Institute for Earth Science and Disaster Prevention, and JCOLD, the Japan Commission on Large Dams. In particular, KiK-net and K-NET, maintained by NIED, have been providing high-quality data for studying high-frequency characteristics. Kagawa et al. (2003) studied these characteristics for crustal earthquakes. We apply the same methodology to recently observed Japanese records of subduction and intra-slab earthquakes. We assume a Butterworth-type high-cut filter with a limit frequency (fmax) and a power factor. These two parameters are derived by fitting the theoretical filter shape to the Fourier spectra of the observed records. After analyzing the results from the viewpoints of site, path, and source effects, an averaged filter model is proposed together with its standard deviation. Kagawa et al. (2003) derived an average fmax of 8.3 Hz with a power factor of 1.92, which is used for strong ground motion simulation. We will propose parameters for the high-cut filters of subduction and intra-slab earthquakes and compare them with the results of Kagawa et al. (2003). REFERENCES: Kagawa et al. (2003), 27JEES (in Japanese with English Abstract).
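    A minimal sketch of the fitting procedure, assuming the standard Butterworth amplitude form F(f) = [1 + (f/fmax)^(2s)]^(-1/2); the synthetic "observed" spectrum is generated from the Kagawa et al. (2003) crustal values (fmax = 8.3 Hz, power factor 1.92) purely for illustration:

```python
import numpy as np

def highcut(f, fmax, power):
    # Butterworth-type high-cut amplitude response.
    return 1.0 / np.sqrt(1.0 + (f / fmax) ** (2.0 * power))

# Synthetic spectral shape following the target filter (noise-free toy case).
f = np.linspace(0.5, 30.0, 200)
observed = highcut(f, 8.3, 1.92)  # values from Kagawa et al. (2003)

# Grid search for (fmax, power) minimizing the log-spectral misfit.
fmax_grid = np.linspace(4.0, 14.0, 101)
power_grid = np.linspace(0.5, 4.0, 71)
best = min(
    ((fm, p) for fm in fmax_grid for p in power_grid),
    key=lambda fp: np.sum((np.log(observed) - np.log(highcut(f, *fp))) ** 2),
)
```

On real records the "observed" array would be the ratio of the acceleration spectrum to an omega-squared source model, and the recovered (fmax, power) pairs would then be averaged per site, path, or source class.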

  4. Mitigating artifacts in back-projection source imaging with implications for frequency-dependent properties of the Tohoku-Oki earthquake

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao

    2012-12-01

    Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.

  5. High-frequency source radiation during the 2011 Tohoku-Oki earthquake, Japan, inferred from KiK-net strong-motion seismograms

    NASA Astrophysics Data System (ADS)

    Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin

    2013-01-01

    To investigate the source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves, and conducted a spatial grid search to find the best-fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in artifact source locations at shallow depths near the trench, caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at greater depths during the earthquake, a feature consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents, and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent and that the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculate that the initial slow rupture propagated bilaterally from the hypocenter toward the land and the trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.

  6. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of faulting on the activated fault. Geoscientists ordinarily use moment tensor catalogues; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their analyses. The 2012 May 20 Emilia main shock is a representative event, since it is defined in the literature with a moment magnitude (Mw) spanning from 5.63 to 6.12. A variability of ˜0.5 units in magnitude leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effects of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude that ranges from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability found in the literature to ˜0.1. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimate of the other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.

  7. Investigation of Back-Projection Uncertainties with M6 Earthquakes

    NASA Astrophysics Data System (ADS)

    Fan, W.; Shearer, P. M.

    2017-12-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P-wave back-projection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P-wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-7 earthquakes over a ten-year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross-correlation of their initial P-wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare back-projection images for each earthquake using its own timing corrections with those obtained using the time corrections for other earthquakes. This provides a measure of how well sub-events can be resolved with back-projection of a large rupture as a function of distance from the hypocenter. Our results show that back-projection is generally very robust and that sub-event location errors average about 20 km across the entire study region (∼700 km). The back-projection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine back-projection images using aftershock calibration, at least in this region.

  8. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
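    The likelihood-based consistency measure can be sketched with a joint Poisson log-likelihood over forecast grid cells, a common CSEP-style metric; the forecast and observation numbers below are made up for illustration:

```python
import numpy as np
from math import lgamma

def poisson_loglik(forecast, observed):
    """Joint Poisson log-likelihood of gridded observed earthquake counts
    given a gridded forecast of expected counts (one CSEP-style metric)."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sum(observed * np.log(forecast) - forecast
                        - np.array([lgamma(k + 1.0) for k in observed])))

forecast_a = [0.5, 1.2, 0.1, 0.8]   # expected Mw >= 5.8 counts per cell
forecast_b = [0.2, 2.5, 0.05, 0.3]  # a competing (worse-matching) model
observed   = [1, 1, 0, 1]           # target earthquakes actually observed

ll_a = poisson_loglik(forecast_a, observed)
ll_b = poisson_loglik(forecast_b, observed)
```

The model with the higher joint log-likelihood (`ll_a` here) is more consistent with the observed catalogue; comparative tests such as the t-test measures in the abstract operate on per-cell differences of these log-likelihoods.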

  9. Parameter uncertainty and nonstationarity in regional extreme rainfall frequency analysis in Qu River Basin, East China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Gu, H.

    2014-12-01

    Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. In addition, uncertainty in hydrological frequency analysis is persistent. This study investigates the impact of one of the most important uncertainty sources, parameter uncertainty, together with nonstationarity, on design rainfall depth in Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments, and by at-site analysis. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, can reach 15.07% and 12.22% with the GEV and PE3 distributions, respectively. On the at-site scale, the uncertainties can reach 17.18% and 15.44% with GEV and PE3, respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3, respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth on both regional and at-site scales, and the non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management.
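    The bootstrap-plus-L-moments workflow can be sketched for a single site. The GEV fit below uses Hosking's L-moment approximation for the shape parameter; the synthetic Gumbel-like annual-maximum sample is an assumption for illustration, not the Qu River data:

```python
import numpy as np
from math import gamma, log

def gev_lmom_fit(x):
    """Fit a GEV distribution by L-moments (Hosking's shape approximation)."""
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    b2 = np.sum(np.arange(n) * (np.arange(n) - 1) * x) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                       # L-skewness
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c**2     # shape (Hosking approximation)
    a = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))   # scale
    xi = l1 - a * (1 - gamma(1 + k)) / k              # location
    return xi, a, k

def gev_quantile(xi, a, k, T):
    # T-year return level, i.e. non-exceedance probability F = 1 - 1/T.
    F = 1.0 - 1.0 / T
    return xi + a / k * (1.0 - (-log(F)) ** k)

rng = np.random.default_rng(1)
# Synthetic 50-yr annual-maximum rainfall sample (Gumbel-like, mm).
sample = 80 + 25 * rng.gumbel(size=50)

xi, a, k = gev_lmom_fit(sample)
q100 = gev_quantile(xi, a, k, 100)

# Bootstrap the 100-yr quantile to get a 90% confidence interval.
boots = []
for _ in range(500):
    q = gev_quantile(*gev_lmom_fit(rng.choice(sample, size=50)), 100)
    boots.append(q)
lo, hi = np.percentile(boots, [5, 95])
```

The half-width of (`lo`, `hi`) relative to `q100` corresponds to the percentage uncertainties quoted in the abstract; the spatial bootstrap extends this by resampling sites within the homogeneous pooling region rather than years at one site.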

  10. Spatial-temporal variation of low-frequency earthquake bursts near Parkfield, California

    USGS Publications Warehouse

    Wu, Chunquan; Guyer, Robert; Shelly, David R.; Trugman, D.; Frank, William; Gomberg, Joan S.; Johnson, P.

    2015-01-01

    Tectonic tremor (TT) and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments globally in the last decade. The spatial-temporal behaviour of LFEs provides insight into deep fault zone processes. In this study, we examine recurrence times from a 12-yr catalogue of 88 LFE families with ∼730 000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault (SAF) in central California. We apply an automatic burst detection algorithm to the LFE recurrence times to identify the clustering behaviour of LFEs (LFE bursts) in each family. We find that the burst behaviours in the northern and southern LFE groups differ. Generally, the northern group has longer burst duration but fewer LFEs per burst, while the southern group has shorter burst duration but more LFEs per burst. The southern group LFE bursts are generally more correlated than the northern group, suggesting more coherent deep fault slip and relatively simpler deep fault structure beneath the locked section of the SAF. We also find that the 2004 Parkfield earthquake clearly increased the number of LFEs per burst and the average burst duration for both the northern and the southern groups, with a relatively larger effect on the northern group. This could be due to the weakness of the northern part of the fault, or to the northwesterly rupture direction of the Parkfield earthquake.
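    A minimal burst detector over LFE event times, assuming a simple inter-event-time threshold; the paper's automatic algorithm is more elaborate, and the threshold and toy catalogue below are assumptions:

```python
import numpy as np

def detect_bursts(times, max_gap):
    """Group event times into bursts: consecutive events separated by no
    more than max_gap (s) belong to the same burst.
    Returns a list of (n_events, duration_s) pairs, one per burst."""
    times = np.sort(np.asarray(times, dtype=float))
    bursts, start, count = [], times[0], 1
    for prev, cur in zip(times[:-1], times[1:]):
        if cur - prev <= max_gap:
            count += 1
        else:                       # gap too long: close the current burst
            bursts.append((count, prev - start))
            start, count = cur, 1
    bursts.append((count, times[-1] - start))
    return bursts

# Toy LFE catalogue: two bursts separated by a quiescent interval (seconds).
times = [0, 10, 25, 30, 5000, 5020, 5035]
bursts = detect_bursts(times, max_gap=600.0)
```

From the returned pairs one can compute the per-family statistics compared in the abstract: mean LFEs per burst and mean burst duration.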

  11. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to
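    The global-exploration step can be sketched with a stripped-down accept-if-better random search standing in for the full basin-hopping technique; the spectral model follows the generalized Brune form of the abstract, and all parameter values are illustrative:

```python
import numpy as np

def brune_spec(f, omega0, fc, gamma):
    # Generalized Brune displacement spectrum: flat level omega0 below the
    # corner frequency fc, power-law decay with exponent gamma above it.
    return omega0 / (1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 1.5, 100)
true = (2.0e-4, 3.0, 2.0)          # (omega0 [m*s], fc [Hz], gamma): toy values
obs = brune_spec(f, *true)

def misfit(p):
    # L2-norm misfit in log amplitude (the abstract's cost function).
    return np.sum((np.log(obs) - np.log(brune_spec(f, *p))) ** 2)

# Minimal basin-hopping-like search: random multiplicative hops, keeping a
# candidate only when it lowers the cost (no local minimizer, for brevity).
rng = np.random.default_rng(2)
best = np.array([1.0e-4, 1.0, 1.5])
best_cost = misfit(best)
for _ in range(20000):
    cand = best * np.exp(rng.normal(0.0, 0.1, 3))  # hop in log-parameter space
    cost = misfit(cand)
    if cost < best_cost:
        best, best_cost = cand, cost
```

In the paper's workflow, a deterministic minimization would follow each hop, and the joint a posteriori pdf would then be evaluated on a grid around `best` to extract variances and the parameter correlation matrix.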

  12. Long-term change of activity of very low-frequency earthquakes in southwest Japan

    NASA Astrophysics Data System (ADS)

    Baba, S.; Takeo, A.; Obara, K.; Kato, A.; Maeda, T.; Matsuzawa, T.

    2017-12-01

    On the plate interface near the seismogenic zone of megathrust earthquakes, various types of slow earthquakes have been detected, including non-volcanic tremors, slow slip events (SSEs) and very low-frequency earthquakes (VLFEs). VLFEs are classified into deep VLFEs, which occur on the downdip side of the seismogenic zone, and shallow VLFEs, which occur on the updip side, i.e. at several kilometers depth in southwest Japan. As members of the slow earthquake family, VLFEs are expected to be a proxy for interplate slip, because they have the same mechanisms as interplate slip and are detected during episodic tremor and slip (ETS). However, long-term changes in VLFE seismicity have not been well constrained compared to deep low-frequency tremor. We thus studied long-term changes in the activity of VLFEs in southwest Japan, where ETS and long-term SSEs have been most intensive. We used continuous seismograms of F-net broadband seismometers operated by NIED from April 2004 to March 2017. After applying a band-pass filter with a frequency range of 0.02-0.05 Hz, we adopted the matched-filter technique to detect VLFEs. We prepared templates by calculating synthetic waveforms for each hypocenter grid, assuming typical focal mechanisms of VLFEs. The correlation coefficients between templates and continuous F-net seismograms were calculated at each grid every 1 s in all components. The grid interval is 0.1 degree in both longitude and latitude. A VLFE was declared detected if the average of the correlation coefficients exceeded the threshold, defined as eight times the median absolute deviation of the distribution. At grids in the Bungo channel, where long-term SSEs occurred frequently, the cumulative number of detected VLFEs increases rapidly in 2010 and 2014, modulated by stress loading from the long-term SSEs. At inland grids near the Bungo channel, the cumulative number increases steeply every half a year. This stepwise
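    The matched-filter detection with a threshold of eight times the median absolute deviation (as in the abstract) can be sketched on synthetic single-channel data; the template waveform, noise level, and window length are assumptions:

```python
import numpy as np

def matched_filter_detect(data, template, n_mad=8.0):
    """Slide a template over continuous data, compute the normalized
    cross-correlation (CC) at every lag, and flag lags whose CC exceeds
    n_mad times the median absolute deviation of the CC trace."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - m + 1)
    for i in range(len(cc)):
        w = data[i:i + m]
        cc[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12)) / m
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > n_mad * mad)[0], cc

rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, 6 * np.pi, 60))   # toy VLFE template
data = 0.1 * rng.normal(size=2000)                 # continuous noise record
data[800:860] += template                          # one buried event
detections, cc = matched_filter_detect(data, template)
```

In the study this is done with synthetic templates on a hypocenter grid and the CC is averaged over stations and components before thresholding; here a single channel suffices to show the MAD-based detection logic.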

  13. Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh

    NASA Astrophysics Data System (ADS)

    Mortuza, M. R.; Demissie, Y.; Li, H. Y.

    2014-12-01

    The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and the associated significant losses of life and infrastructure in Bangladesh. Reliable and routinely updated estimation of the frequency of occurrence of such extreme precipitation events is thus important for developing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to 1-day, 2-day and 5-day annual maximum precipitation series due to its advantages over at-site estimation. The regional frequency approach pools the information from climatologically similar sites to make reliable estimates of quantiles, given that the pooling group is homogeneous and of reasonable size. We use the region-of-influence (ROI) approach, along with a homogeneity measure based on L-moments, to identify the homogeneous pooling groups for each site. Five 3-parameter distributions (i.e., Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type Three, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using the Bayesian Model Averaging and Balanced Bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as in exploring spatio-temporal variations of extreme precipitation and associated risk.

  14. High-frequency ground motion amplification during the 2011 Tohoku earthquake explained by soil dilatancy

    NASA Astrophysics Data System (ADS)

    Roten, D.; Fäh, D.; Bonilla, L. F.

    2013-05-01

    Ground motions of the 2011 Tohoku earthquake recorded at Onahama port (Iwaki, Fukushima prefecture) rank among the highest accelerations ever observed, with the peak amplitude of the 3-D acceleration vector approaching 2g. The response of the site was distinctively non-linear, as indicated by the presence of horizontal acceleration spikes which have been linked to cyclic mobility during similar observations. Compared to records of weak ground motions, the response of the site during the Mw 9.1 earthquake was characterized by increased amplification at frequencies above 10 Hz and in peak ground acceleration. This behaviour contrasts with the more common non-linear response encountered at non-liquefiable sites, which results in deamplification at higher frequencies. We simulate propagation of SH waves through the dense sand deposit using a non-linear finite difference code that is capable of modelling the development of excess pore water pressure. Dynamic soil parameters are calibrated using a direct search method that minimizes the difference between observed and simulated acceleration envelopes and response spectra. The finite difference simulations yield surface acceleration time-series that are consistent with the observations in shape and amplitude, pointing towards soil dilatancy as a likely explanation for the high-frequency pulses recorded at Onahama port. The simulations also suggest that the occurrence of high-frequency spikes coincided with a rapid increase in pore water pressure in the upper part of the sand deposit between 145 and 170 s. This sudden increase is possibly linked to a burst of high-frequency energy from a large slip patch below the Iwaki region.

  15. High-frequency seismic energy radiation from the 2003 Miyagi-Oki, JAPAN, earthquake (M7.0) as revealed from an envelope inversion analysis

    NASA Astrophysics Data System (ADS)

    Nakahara, H.

    2003-12-01

    The 2003 Miyagi-Oki earthquake (M 7.0) took place on May 26, 2003 in the subducting Pacific plate beneath northeastern Japan. The focal depth is around 70 km, and the focal mechanism is reverse faulting on a steeply west-dipping plane. Fortunately there were no fatalities, but the earthquake caused more than 100 injuries, about 2000 collapsed houses, and other damage. About 50 km to the south of this focal area, an interplate earthquake of M 7.5, the Miyagi-Ken-Oki earthquake, is expected to occur in the near future, so the relation between this earthquake and the expected Miyagi-Ken-Oki earthquake has attracted public attention. The distribution of seismic energy radiation on earthquake fault planes estimated by envelope inversion can contribute to a better understanding of the earthquake source process. For moderate to large earthquakes, seismic energy at frequencies above 1 Hz is sometimes much larger than the level expected from the omega-squared model with source parameters estimated from lower-frequency analyses. An accurate estimate of seismic energy at such high frequencies is therefore important for estimating dynamic source parameters such as the radiated seismic energy or the apparent stress. In this study, we perform an envelope inversion analysis based on the method of Nakahara et al. (1998) to map the spatial distribution of high-frequency seismic energy radiation on the fault plane of this earthquake. As data for the inversion we use the three-component sum of mean squared velocity seismograms multiplied by the density of the earth medium, here called envelopes. Four frequency bands of 1-2, 2-4, 4-8, and 8-16 Hz are adopted. We use envelopes in the time window from the S-wave onset to a lapse time of 51.2 s.
Green functions of the envelopes, representing energy propagation through a scattering medium, are calculated from radiative transfer theory and are characterized by parameters of scattering attenuation
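    The "envelope" data described above can be constructed as sketched below: band-pass each velocity component, sum the squared traces over the three components, and scale by the medium density. The density value and smoothing length here are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def energy_envelope(vel3c, fs, band, rho=2700.0, smooth_sec=1.0):
    # vel3c: three velocity traces (m/s); fs: sampling rate (Hz);
    # band: (f_low, f_high) in Hz, e.g. (1, 2), (2, 4), (4, 8), (8, 16).
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    # Sum of squared band-passed components times density: an energy
    # density proxy, the inversion data called "envelopes" above.
    e = sum(sosfiltfilt(sos, v) ** 2 for v in vel3c) * rho
    n = max(1, int(smooth_sec * fs))
    return np.convolve(e, np.ones(n) / n, mode='same')
```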

  16. Waveform inversion in the frequency domain for the simultaneous determination of earthquake source mechanism and moment function

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Inoue, H.

    2008-06-01

    We propose a waveform inversion method to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. The inversion is carried out in the frequency domain, which yields the moment function more rapidly than solving in the time domain. We assume a pure double-couple source mechanism to stabilize the solution when data from only a small number of seismic stations are available. The fault and slip orientations are estimated by a grid search over the strike, dip, and rake angles. The moment function in the time domain is obtained by inverse Fourier transform of the frequency components determined by the inversion. Since the observed waveforms used in the inversion are limited to a particular frequency band, the estimated moment function is band-passed; we develop a practical approach to estimate its deconvolved form, from which we can reconstruct the detailed rupture history and the seismic moment. The source location is determined by a spatial grid search with adaptive grid spacing, gradually decreased at each step of the search. We apply this method to two events in Indonesia using data from the JISNET broad-band seismic network: one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and one south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimate for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that it was a tsunami earthquake. Our application demonstrates that this inversion method has great potential for rapid and routine estimations of both the
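    The frequency-domain step described above reduces, for a fixed mechanism, to an independent least-squares problem at each frequency. The sketch below assumes the Green's functions for the trial mechanism are already computed; names and array shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def invert_moment_function(obs, greens):
    # Frequency-domain least squares: at each frequency w solve
    #   min_m  sum_k |U_k(w) - m(w) G_k(w)|^2   over stations k,
    # then inverse-FFT the per-frequency solution to obtain the
    # (band-limited) time-domain moment function.
    # obs, greens: (n_stations, n_samples) seismograms and Green's
    # functions for the assumed double-couple mechanism.
    U = np.fft.rfft(obs, axis=1)
    G = np.fft.rfft(greens, axis=1)
    num = np.sum(np.conj(G) * U, axis=0)
    den = np.sum(np.abs(G) ** 2, axis=0)
    m_hat = np.zeros_like(num)
    good = den > 0
    m_hat[good] = num[good] / den[good]
    return np.fft.irfft(m_hat, n=obs.shape[1])
```

    In the full method this single-mechanism solve sits inside the grid search over strike, dip, rake, and source position, with the residual at each grid node selecting the best mechanism.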

  17. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of applicability of the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
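    A minimal illustration of fitting the NBD to window counts, using the Poisson-gamma mixture parameterization mentioned above. This is a simple method-of-moments sketch, not the estimation procedure of the paper; the function name is an assumption.

```python
import numpy as np

def fit_nbd_moments(counts):
    # Method-of-moments NBD fit for earthquake counts in fixed
    # time/space windows.  With sample mean m and variance v > m
    # (overdispersion, i.e. clustering), the parameters in the
    # numpy/scipy convention are n = m^2 / (v - m) and p = m / v.
    # If v <= m there is no overdispersion and a one-parameter
    # Poisson description suffices.
    m = np.mean(counts)
    v = np.var(counts, ddof=1)
    if v <= m:
        return None
    return m * m / (v - m), m / v
```

    The second parameter (the overdispersion) is exactly what the Poisson model lacks, which is why the NBD can represent clustered catalogues that the Poisson cannot.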

  18. The ratio between corner frequencies of source spectra of P- and S-waves—a new discriminant between earthquakes and quarry blasts

    NASA Astrophysics Data System (ADS)

    Ataeva, G.; Gitterman, Y.; Shapira, A.

    2017-01-01

    This study analyzes and compares P- and S-wave displacement spectra from local earthquakes and explosions of similar magnitudes. We propose a new approach to discriminating between low-magnitude shallow earthquakes and explosions, using the ratio of P- to S-wave corner frequencies as the criterion. We explored 2430 digital records of the Israeli Seismic Network (ISN) from 456 local events (226 earthquakes, 230 quarry blasts, and a few underwater explosions) of magnitudes Md = 1.4-3.4 that occurred at distances up to 250 km during 2001-2013. P- and S-wave displacement spectra were computed for all events following Brune's earthquake source model (1970, 1971) and applying distance correction coefficients (Shapira and Hofstetter, Tectonophysics 217:217-226, 1993; Ataeva G, Shapira A, Hofstetter A, J Seismol 19:389-401, 2015). The corner frequencies and moment magnitudes were determined using multiple stations for each event, and a comparative analysis was then performed.
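    Estimating a corner frequency from a displacement spectrum can be sketched as a fit of the Brune model. This is an illustrative least-squares version (the function names and starting values are assumptions); the per-event discriminant is then the ratio fc(P)/fc(S), with the decision threshold calibrated from the data set rather than fixed a priori.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    # Brune (1970) displacement source spectrum: flat level omega0
    # below the corner frequency fc, falling as f^-2 above it.
    return omega0 / (1.0 + (f / fc) ** 2)

def corner_frequency(freqs, spectrum):
    # Least-squares fit of the Brune model to a distance-corrected
    # displacement spectrum; returns the estimated corner frequency.
    p0 = (float(spectrum.max()), float(np.median(freqs)))
    (omega0, fc), _ = curve_fit(brune, freqs, spectrum, p0=p0)
    return abs(fc)
```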

  19. FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.

    USGS Publications Warehouse

    Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.

    1985-01-01

    The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.

  20. Acceleration spectra for subduction zone earthquakes

    USGS Publications Warehouse

    Boatwright, J.; Choy, G.L.

    1989-01-01

    We estimate the source spectra of shallow earthquakes from digital recordings of teleseismic P wave groups, that is, P+pP+sP, by making frequency dependent corrections for the attenuation and for the interference of the free surface. The correction for the interference of the free surface assumes that the earthquake radiates energy from a range of depths. We apply this spectral analysis to a set of 12 subduction zone earthquakes which range in size from Ms = 6.2 to 8.1, obtaining corrected P wave acceleration spectra on the frequency band from 0.01 to 2.0 Hz. Seismic moment estimates from surface waves and normal modes are used to extend these P wave spectra to the frequency band from 0.001 to 0.01 Hz. The acceleration spectra of large subduction zone earthquakes, that is, earthquakes whose seismic moments are greater than 10^27 dyn cm, exhibit intermediate slopes where ü(ω) ∝ ω^(5/4) for frequencies from 0.005 to 0.05 Hz. For these earthquakes, spectral shape appears to be a discontinuous function of seismic moment. Using reasonable assumptions for the phase characteristics, we transform the spectral shape observed for large earthquakes into the time domain to fit Ekstrom's (1987) moment rate functions for the Ms = 8.1 Michoacan earthquake of September 19, 1985, and the Ms = 7.6 Michoacan aftershock of September 21, 1985. - from Authors

  1. Validation of simulated earthquake ground motions based on evolution of intensity and frequency content

    USGS Publications Warehouse

    Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin

    2015-01-01

    Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.

  2. Seismic Moment, Seismic Energy, and Source Duration of Slow Earthquakes: Application of Brownian slow earthquake model to three major subduction zones

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Maury, Julie

    2018-04-01

    Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at the seismological frequencies, the seismic moment rates are significantly different in the geodetic observation. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.

  3. Shallow very-low-frequency earthquakes accompanied with slow slip event along the plate boundary of the Nankai trough

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Hori, T.; Araki, E.; Kodaira, S.; Ide, S.

    2017-12-01

    Recent improvements in seismic and geodetic observations have revealed a new family of slow earthquakes occurring along, or close to, plate boundaries worldwide. From the viewpoint of characteristic time scales, slow earthquakes can be classified into several groups: low-frequency or tectonic tremor (LFT), dominant at several hertz; very-low-frequency earthquakes (VLFEs), dominant at periods of 10 to 100 s; and short- and long-term slow slip events (SSEs), with durations of days to years. In many cases these slow earthquakes are accompanied by other types of slow events. However, events occurring offshore, especially beneath the toe of the accretionary prism, are poorly understood because their signals are difficult to detect; improving our understanding requires data from the ocean-floor observation networks that considerable recent effort has gone into developing. Here, we performed CMT analysis of shallow VLFEs using data from the DONET ocean-floor observation network along the Nankai trough, southwest of Japan. We found that the shallow VLFEs have a moment-release history almost identical to that of a synchronous SSE that occurred in the same region, recently found by Araki et al. (2017). The VLFE sources migrated updip during the activity, coincident with the migration of the SSE source. From these findings we conclude that these slow events share the same fault slip, and that VLFEs represent high-frequency fluctuations of slip during SSEs. This result implies that shallow SSEs along the plate interface may have occurred in the background during the shallow VLFE episodes repeatedly observed along the Nankai trough, but went unreported because they are difficult to detect.

  4. A new 1649-1884 catalog of destructive earthquakes near Tokyo and implications for the long-term seismic process

    USGS Publications Warehouse

    Grunewald, E.D.; Stein, R.S.

    2006-01-01

    In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1 with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
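    The quoted Poisson 30-year probability follows directly from the mean event rate. A one-line sketch, with an illustrative recurrence interval (a back-calculation for consistency checking, not a value from the catalog):

```python
import math

def poisson_prob(rate_per_yr, window_yr=30.0):
    # Time-averaged (Poisson) probability of at least one event in the
    # window: P = 1 - exp(-rate * T).
    return 1.0 - math.exp(-rate_per_yr * window_yr)
```

    For example, a mean recurrence interval of roughly 300 yr gives poisson_prob(1/300.0) of about 9.5%, inside the quoted 7-11% range.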

  5. An autocorrelation method to detect low frequency earthquakes within tremor

    USGS Publications Warehouse

    Brown, J.R.; Beroza, G.C.; Shelly, D.R.

    2008-01-01

    Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is comprised of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-Net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-Net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross correlate waveforms to measure relative arrival times and locate the LFEs. The results are able to explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor, but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.

  6. Detection of high-frequency radiation sources during the 2004 Parkfield earthquake by a matched filter analysis

    NASA Astrophysics Data System (ADS)

    Uchide, T.; Shearer, P. M.

    2009-12-01

    Introduction Uchide and Ide [SSA Spring Meeting, 2009] proposed a new framework for studying the scaling and overall nature of earthquake rupture growth in terms of cumulative moment functions. For better understanding of rupture growth processes, spatiotemporally local processes are also important. The nature of high-frequency (HF) radiation has been investigated for some time, but its role in the earthquake rupture process is still unclear. A wavelet analysis reveals that the HF radiation (e.g., 4 - 32 Hz) of the 2004 Parkfield earthquake is peaky, which implies that the sources of the HF radiation are isolated in space and time. We experiment with applying a matched filter analysis using small template events occurring near the target event rupture area to test whether it can reveal the HF radiation sources for a regular large earthquake. Method We design a matched filter for multiple components and stations. Shelly et al. [2007] attempted to identify low-frequency earthquakes (LFE) in non-volcanic tremor waveforms by stacking the correlation coefficients (CC) between the seismograms of the tremor and the LFE. Unlike their method, our event detection indicator is the CC between the seismograms of the target and template events recorded at the same stations, since the key information for detecting the sources is the arrival-time differences and the amplitude ratios among stations. Data from both the target and template events are normalized by the maximum amplitude of the seismogram of the template event in the cross-correlation time window. This process accounts for the radiation pattern and distance between the source and stations. For each small earthquake target, high values in the CC time series suggest the possibility of HF radiation during the mainshock rupture from a location similar to that of the target event. Application to the 2004 Parkfield earthquake We apply the matched filter method to the 2004 Parkfield earthquake (Mw 6.0).
We use seismograms

  7. Characterizing waveform uncertainty due to ambient noise for the Global Seismic Network

    NASA Astrophysics Data System (ADS)

    Guandique, J. A.; Burdick, S.; Lekic, V.

    2015-12-01

    Ambient seismic noise is the vibration present on seismograms that is not due to any earthquake or discrete source. It can be caused by trees swaying in the wind or trucks rumbling on a freeway, but the main source of noise is the microseism generated by ocean waves. The frequency content and amplitude of seismic noise vary with weather, season, and station location, among other factors. Because noise affects recordings of earthquake waveforms, understanding it better could improve the detection of small earthquakes, reduce false positives in earthquake early warning, and help quantify uncertainty in waveform-based studies. In this study, we used two years of 3-component accelerograms from stations in the GSN. We eliminate days with major earthquakes, aggregate the analysis by month, and calculate the mean power spectrum for each component and the transfer function between components. For each power spectrum, we determine the dominant frequency and amplitude of the primary (PM) and secondary (SM) microseisms, which appear at periods of ~14 s and ~7 s, as well as any other prominent peaks. The cross-component terms show that noise recorded on different components cannot be treated as independent. Trends in coherence and phase delay suggest directionality in the noise and carry information about the modes in which it propagates. Preliminary results show that noise at island stations exhibits less monthly variability, and its PM peaks tend to be much weaker than the SM peaks. Continental stations show much less consistent behavior, with higher variability in the PM peaks between stations and higher frequency content during winter months. Stations further inland have smaller SM peaks than coastal stations, which behave more like island stations. Using these spectra and cross-component results, we develop a method for generating realistic 3-component seismic noise and covariance matrices, which can be used across various seismic applications.
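    Picking the microseism peaks out of a power spectrum can be sketched as below. This is a single-component illustration; the band edges bracketing the ~14 s and ~7 s peaks, and the Welch segment length, are illustrative choices rather than the study's parameters.

```python
import numpy as np
from scipy.signal import welch

def microseism_peaks(accel, fs, bands=((0.05, 0.10), (0.10, 0.25))):
    # Welch power spectrum of one component, then the dominant
    # frequency and power in the primary (~14 s) and secondary (~7 s)
    # microseism bands.
    f, pxx = welch(accel, fs=fs, nperseg=min(4096, len(accel)))
    out = []
    for lo, hi in bands:
        sel = (f >= lo) & (f <= hi)
        i = int(np.argmax(pxx[sel]))
        out.append((float(f[sel][i]), float(pxx[sel][i])))
    return out  # [(f_PM, P_PM), (f_SM, P_SM)]
```

    Repeating this per component, per month, and adding cross-spectra between components would give the coherence and phase-delay measures discussed above.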

  8. High-frequency envelope inversion analysis of the 2003 Tokachi-Oki, JAPAN, earthquake (Mw8.0)

    NASA Astrophysics Data System (ADS)

    Nakahara, H.

    2004-12-01

    The 2003 Tokachi-Oki earthquake (Mw 8.0) took place on September 26, 2003 at the interface between the subducting Pacific plate and the Hokkaido island, northern Japan. The focal depth is around 30 km and the focal mechanism is thrust type. The earthquake left two people missing and caused more than 100 injuries, about 2000 collapsed houses, and other damage. The slip distribution on the fault plane has already been estimated by inversion of low-frequency seismograms, but the source characteristics of the earthquake at frequencies above 1 Hz have not yet been clarified. In this study, we perform an envelope inversion analysis based on the method of Nakahara et al. (1998) to map the spatial distribution of high-frequency seismic energy radiation on the fault plane of this earthquake. As data we use the three-component sum of mean squared velocity seismograms multiplied by the density of the earth medium, here called envelopes. Three frequency bands of 1-2, 2-4, and 4-8 Hz are adopted. We use envelopes in the time window from the S-wave onset to a lapse time of 128 s. Green functions of the envelopes, representing energy propagation through a scattering medium, are calculated from radiative transfer theory and are characterized by parameters of scattering attenuation and intrinsic absorption; we use the values obtained for eastern Hokkaido (Hoshiba, 1993). We assume a fault plane with strike = 249°, dip = 15°, rake = 130°, length = 150 km, and width = 165 km, with reference to a low-frequency waveform inversion (e.g. Yagi, 2003). This fault plane is divided into 110 subfaults, each a 15 km × 15 km square. Rupture velocity is assumed constant, and seismic energy is radiated from a point source as soon as the rupture front passes the center of each subfault. The time function of energy radiation is assumed to be a box-car function.
The amount of seismic energy from all the subfaults and site amplification factors for all

  9. Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake

    USGS Publications Warehouse

    Minson, Sarah E.; Lee, William H.K.

    2014-01-01

    Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.

  10. Earthquake impact scale

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.

    2011-01-01

    also be both specific (although allowably uncertain) and actionable. In this analysis, an attempt is made at both simple and intuitive color-coded alerting criteria; yet the necessary uncertainty measures, by which one can gauge the likelihood of an alert being over- or underestimated, are preserved. The essence of the proposed impact scale and alerting is that actionable loss information is now available in the immediate aftermath of significant earthquakes worldwide on the basis of quantifiable loss estimates. Utilizing EIS, PAGER's rapid loss estimates can adequately recommend alert levels and suggest appropriate response protocols, despite the uncertainties; demanding or awaiting observations or loss estimates with a high level of accuracy may increase the losses. © 2011 American Society of Civil Engineers.

  11. Response of a 14-story Anchorage, Alaska, building in 2002 to two close earthquakes and two distant Denali fault earthquakes

    USGS Publications Warehouse

    Celebi, M.

    2004-01-01

    The recorded responses of an Anchorage, Alaska, building during four significant earthquakes that occurred in 2002 are studied. Two earthquakes, including the 3 November 2002 M7.9 Denali fault earthquake, with epicenters approximately 275 km from the building, generated long trains of long-period (>1 s) surface waves. The other two smaller earthquakes occurred at subcrustal depths practically beneath Anchorage and produced higher frequency motions. These two pairs of earthquakes have different impacts on the response of the building. Higher modes are more pronounced in the building response during the smaller nearby events. The building responses indicate that the close-coupling of translational and torsional modes causes a significant beating effect. It is also possible that there is some resonance occurring due to the site frequency being close to the structural frequency. Identification of dynamic characteristics and behavior of buildings can provide important lessons for future earthquake-resistant designs and retrofit of existing buildings. © 2004, Earthquake Engineering Research Institute.

  12. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed among faults to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector therefore consists of binary variables, with values equal to one indicating the locations of earthquakes that yield an optimal match to the slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints for the BIP model, the former being more important to feasibility of the problem. A maximum-magnitude limit associated with each fault, based on fault length, provides an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have recently been developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M ≥ 6 earthquakes and California's faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
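    A toy version of this BIP, in the same spirit (binary assignment variables, L1 slip-rate misfit made linear with slack variables), can be written with an off-the-shelf mixed-integer solver. Everything here is illustrative: a real UCERF3-scale problem has >10^5 variables, slip-rate bounds, and magnitude limits, and needs a dedicated solver.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def assign_events(contrib, targets):
    # Assign each synthetic earthquake i (slip-rate contribution
    # contrib[i]) to exactly one of F faults so that the summed
    # contributions match the target slip rates in an L1 sense.
    # Variables: x[i,f] in {0,1} (assignment), t[f] >= 0 (misfit slack).
    n, F = len(contrib), len(targets)
    nx = n * F
    c = np.concatenate([np.zeros(nx), np.ones(F)])   # minimize sum_f t_f

    rows, lbs, ubs = [], [], []
    for i in range(n):            # each event assigned to exactly one fault
        row = np.zeros(nx + F)
        row[i * F:(i + 1) * F] = 1.0
        rows.append(row); lbs.append(1.0); ubs.append(1.0)
    for f in range(F):            # |sum_i contrib[i] x[i,f] - target_f| <= t_f
        base = np.zeros(nx + F)
        base[f:nx:F] = contrib
        up = base.copy(); up[nx + f] = -1.0          # sum - t <= target
        rows.append(up); lbs.append(-np.inf); ubs.append(targets[f])
        lo = base.copy(); lo[nx + f] = 1.0           # sum + t >= target
        rows.append(lo); lbs.append(targets[f]); ubs.append(np.inf)

    integrality = np.concatenate([np.ones(nx), np.zeros(F)])
    res = milp(c,
               constraints=LinearConstraint(np.array(rows), lbs, ubs),
               integrality=integrality,
               bounds=Bounds(0, np.concatenate([np.ones(nx),
                                                np.full(F, np.inf)])))
    x = np.round(res.x[:nx]).reshape(n, F).astype(int)
    return x, res.fun  # assignment matrix, total L1 slip-rate misfit
```

    The slack variables t_f are the standard linearization of an L1 objective, keeping the problem solvable by branch-and-bound plus the modern techniques (cutting planes, heuristics, etc.) the abstract mentions.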

  13. Precise hypocenter locations of midcrustal low-frequency earthquakes beneath Mt. Fuji, Japan

    USGS Publications Warehouse

    Nakamichi, H.; Ukawa, M.; Sakai, S.

    2004-01-01

Midcrustal low-frequency earthquakes (MLFs) have been observed at seismic stations around Mt. Fuji, Japan. In September - December 2000 and April - May 2001, abnormally high numbers of MLFs occurred. We located hypocenters for the 80 MLFs during 1998-2003 by using the hypoDD earthquake location program (Waldhauser and Ellsworth, 2000). The MLF hypocenters define an ellipsoidal volume some 5 km in diameter ranging from 11 to 16 km in focal depth. This volume is centered 3 km northeast of the summit and its long axis is directed NW-SE. The direction of the axis coincides with the major axis of tectonic compression around Mt. Fuji. The center of the MLF epicenters gradually migrated upward and 2-3 km from southeast to northwest during 1998-2001. We interpret that the hypocentral migration of MLFs reflects magma movement associated with a NW-SE oriented dike beneath Mt. Fuji. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  14. Relative locations between shallow very low frequency earthquakes and low frequency tremors investigated based on near-field BBOBS records

    NASA Astrophysics Data System (ADS)

    Chi, W. C.; To, A.; Chen, W. J.; Konishi, K.

    2017-12-01

Two types of anomalous seismic events of long duration, with signals depleted in high frequencies relative to most earthquakes, are recorded in a network of broadband ocean bottom seismometers (BBOBS) deployed at the shallow Nankai subduction zone (DONET1). The first type is the very low frequency earthquake (VLFE), whose signals are observed both below and above the 0.1 Hz microseism band, at 0.02-0.06 Hz and 2-8 Hz. The second type is the low frequency tremor (LFT), whose signals are only observed at 2-8 Hz. The waveform similarity at 2-8 Hz and the concurrence of the two types of event warrant further investigation of whether they represent the same phenomenon. Previously, To et al. (2015) examined the relation between VLFEs and LFTs by comparing their maximum amplitudes in two frequency ranges, 2-8 Hz and 0.02-0.05 Hz. The comparison showed that the maximum amplitudes measured in the two frequency ranges correlate positively for VLFEs; that is, large-magnitude VLFEs showed large amplitudes in both frequency ranges. The comparison also showed that the amplitudes measured at 2-8 Hz were larger for VLFEs than for LFTs. Based on these amplitude observations, they concluded that VLFEs and LFTs are likely larger and smaller events of the same phenomenon. Here, we examine the relation between the two types of event based on their spatial distributions. Their distributions should be similar if they represent the same phenomenon. The data are broadband seismograms from 20 stations of DONET1. We detected 144 VLFEs and 775 LFTs during one week of intense LFT/VLFE activity in October 2015. Events are located using an envelope cross-correlation method. We used the root-mean-square (RMS) amplitudes constructed from the two horizontal components, bandpass filtered at 2-8 Hz and then smoothed by taking a moving average with a window length of 5 s. The obtained distributions of VLFEs and LFTs show similar patterns. They both
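The envelope construction described above (RMS of the two horizontal components, then a moving average) can be sketched as follows. This is a minimal stand-in, not the study's code: the 2-8 Hz bandpass step is omitted and the sample traces are invented.

```python
import math

def rms_envelope(east, north, window):
    """RMS amplitude of two horizontal components, smoothed by a moving
    average of `window` samples (e.g. 5 s times the sampling rate).
    In the study this is applied after a 2-8 Hz bandpass filter."""
    rms = [math.sqrt((e * e + n * n) / 2.0) for e, n in zip(east, north)]
    half = window // 2
    out = []
    for i in range(len(rms)):
        seg = rms[max(0, i - half): i + half + 1]  # truncated at the edges
        out.append(sum(seg) / len(seg))
    return out

# Invented four-sample toy traces:
east = [0.0, 3.0, 4.0, 0.0]
north = [0.0, 4.0, 3.0, 0.0]
env = rms_envelope(east, north, window=3)
```

Envelopes like `env`, computed at each station, are what the cross-correlation location method aligns.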

  15. The Trembling Earth Before Wenchuan Earthquake: Recognition of Precursory Anomalies through High Frequency Sampling Data of Groundwater

    NASA Astrophysics Data System (ADS)

    Huang, F.

    2017-12-01

With a magnitude of MS8.0, the 2008 Wenchuan earthquake is classified as one of the "great earthquakes", which are potentially the most destructive. It occurred at shallow depth close to a highly populated area without prediction, because no confirmed precursors were detected in the large volume of newly deployed digital observation data. Scientists who specialize in routine prediction work were condemned, and condemned themselves, for a long time afterward. After the pain of that defeat passed, scientists began to reanalyze the old observation data from new perspectives: over longer temporal processes, across multiple disciplines, and at different frequencies. This presentation will show preliminary results from groundwater level and temperature observed in three wells distributed along the boundaries of tectonic blocks near and far from the Wenchuan earthquake rupture.

  16. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model, in an attempt to quantify the uncertainties arising from the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters, even when accounting for the dependence between uncertainties from the rainfall model and from the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  17. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of the Green's function into the tele-seismic waveform inversion, and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion arises from the modeling error originating from uncertainty in the fault-model setting. The Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating from the uncertainty of the fault model. We propose a new method accounting for complexity in the fault geometry by additionally solving for the focal mechanism at each space knot. Since a solution of finite source inversion becomes unstable as the flexibility of the model increases, we estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane.
These results show that the modeling error caused by simplifying the

  18. Observing Triggered Earthquakes Across Iran with Calibrated Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Karasozen, E.; Bergman, E.; Ghods, A.; Nissen, E.

    2016-12-01

    We investigate earthquake triggering phenomena in Iran by analyzing patterns of aftershock activity around mapped surface ruptures. Iran has an intense level of seismicity (> 40,000 events listed in the ISC Bulletin since 1960) due to it accommodating a significant portion of the continental collision between Arabia and Eurasia. There are nearly thirty mapped surface ruptures associated with earthquakes of M 6-7.5, mostly in eastern and northwestern Iran, offering a rich potential to study the kinematics of earthquake nucleation, rupture propagation, and subsequent triggering. However, catalog earthquake locations are subject to up to 50 km of location bias from the combination of unknown Earth structure and unbalanced station coverage, making it challenging to assess both the rupture directivity of larger events and the spatial patterns of their aftershocks. To overcome this limitation, we developed a new two-tiered multiple-event relocation approach to obtain hypocentral parameters that are minimally biased and have realistic uncertainties. In the first stage, locations of small clusters of well-recorded earthquakes at local spatial scales (100s of events across 100 km length scales) are calibrated either by using near-source arrival times or independent location constraints (e.g. local aftershock studies, InSAR solutions), using an implementation of the Hypocentroidal Decomposition relocation technique called MLOC. Epicentral uncertainties are typically less than 5 km. Then, these events are used as prior constraints in the code BayesLoc, a Bayesian relocation technique that can handle larger datasets, to yield region-wide calibrated hypocenters (1000s of events over 1000 km length scales). 
With locations and errors both calibrated, the pattern of aftershock activity can reveal the type of the earthquake triggering: dynamic stress changes promote an increase in the seismicity rate in the direction of unilateral propagation, whereas static stress changes should

  19. Borehole strain observations of very low frequency earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Ghosh, A.; Hutchinson, A. A.

    2016-12-01

We examine the signals of very low frequency earthquakes (VLFEs) in PBO borehole strain data in central Cascadia. These MW 3.3 - 4.1 earthquakes are best observed in seismograms at periods of 20 to 50 seconds. We look for the strain they produce on timescales from about 1 to 30 minutes. First, we stack the strain produced by 13 VLFEs identified by a grid search moment tensor inversion algorithm by Ghosh et al. (2015) and Hutchinson and Ghosh (2016), as well as several thousand VLFEs detected through template matching of these events. The VLFEs are located beneath southernmost Vancouver Island and the eastern Olympic Peninsula, and are best recorded at co-located stations B005 and B007. However, even at these stations, the signal-to-noise ratio in the stack is often low, and the records are difficult to interpret. Therefore we also combine data from multiple stations and VLFE locations, and simply look for increases in the strain rate at the VLFE times, as increases in strain rate would suggest an increase in the moment rate. We compare the background strain rate in the 12 hours centered on the VLFEs with the strain rate in the 10 minutes centered on the VLFEs. The 10-minute duration is chosen as a compromise that averages out some instrumental noise without introducing too much longer-period random walk noise. Our results suggest a factor of 2 increase in strain rate--and thus moment rate--during the 10-minute VLFE intervals. The increase gives an average VLFE magnitude around M 3.5, within the range of magnitudes obtained with seismology. Further analyses are currently being carried out to better understand the evolution of moment release before, during, and after the VLFEs.
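The short-window versus background strain-rate comparison described above amounts to a ratio of mean rates from a cumulative strain record. A minimal sketch, with an invented synthetic record in place of the PBO data (real records would also need detrending and noise handling):

```python
def rate_ratio(strain, dt, event_index, short_n):
    """Mean strain rate in a short_n-sample window centered on event_index,
    divided by the mean rate over the full record. `strain` is cumulative
    strain sampled every dt seconds -- a toy stand-in for comparing the
    10-minute VLFE window against the 12-hour background."""
    half = short_n // 2
    short_rate = (strain[event_index + half] - strain[event_index - half]) \
        / (2 * half * dt)
    long_rate = (strain[-1] - strain[0]) / ((len(strain) - 1) * dt)
    return short_rate / long_rate

# Invented record: background rate of 1 unit/s, doubled near the "event".
dt = 1.0
strain, s = [], 0.0
for i in range(101):
    strain.append(s)
    s += 2.0 if 45 <= i < 55 else 1.0

ratio = rate_ratio(strain, dt, event_index=50, short_n=10)
```

For this synthetic record the short-window rate is 2.0 against a background of 1.1, so the ratio is about 1.8.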

  20. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    NASA Astrophysics Data System (ADS)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. 
Especially where a

  1. Absolute frequency list of the ν₃-band transitions of methane at a relative uncertainty level of 10⁻¹¹.

    PubMed

    Okubo, Sho; Nakayama, Hirotaka; Iwakuni, Kana; Inaba, Hajime; Sasada, Hiroyuki

    2011-11-21

We determine the absolute frequencies of 56 rotation-vibration transitions of the ν₃ band of CH₄ from 88.2 to 90.5 THz with a typical uncertainty of 2 kHz, corresponding to a relative uncertainty of 2.2 × 10⁻¹¹ over an averaging time of a few hundred seconds. Saturated absorption lines are observed using a difference-frequency-generation source and a cavity-enhanced absorption cell, and the transition frequencies are measured with a fiber-laser-based optical frequency comb referenced to a rubidium atomic clock linked to International Atomic Time. The determined value of the P(7) F₂⁽²⁾ line is consistent with the International Committee for Weights and Measures recommendation within the uncertainty. © 2011 Optical Society of America

  2. Earthquakes on Your Dinner Table

    NASA Astrophysics Data System (ADS)

    Alexeev, N. A.; Tape, C.; Alexeev, V. A.

    2016-12-01

Earthquakes have interesting physics applicable to other phenomena, such as the propagation of waves; they also affect human lives. This study focused on three questions: how depth, distance from the epicenter, and ground hardness affect earthquake strength. The experimental setup consisted of a gelatin slab to simulate the crust. The slab was hit with a weight, and earthquake amplitude was measured. It was found that earthquake amplitude was larger when the epicenter was deeper, which contradicts observations and probably was an artifact of the design. Earthquake strength was inversely proportional to the distance from the epicenter, which generally follows reality. Soft and medium jello were implanted into hard jello. It was found that earthquakes are stronger in softer jello, which is a result of resonant amplification in soft ground. Similar results are found in Minto Flats, where earthquakes are stronger and last longer than in the nearby hills. Earthquake waveforms from Minto Flats showed that the oscillations there have longer periods compared to the nearby hills with harder soil. Two gelatin pieces with identical shapes and different hardness were vibrated on a platform at varying frequencies in order to demonstrate that their resonant frequencies are statistically different. This phenomenon also occurs in Yukon Flats.

  3. Quantification of Uncertainty in the Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.

    2017-12-01

Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, selection of the distribution, and estimation of distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify the uncertainty in the form of a prediction interval, as an alternative to a deterministic approach. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and at Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was further verified using standard bootstrap-based sampling approaches, and the proposed method was found to be more reliable in modeling extreme floods than the bootstrap methods.
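For reference, the bootstrap baseline against which such methods are compared can be sketched in a few lines. This is a generic empirical-quantile bootstrap under invented data, not the paper's multi-objective scheme:

```python
import random

def bootstrap_quantile_interval(annual_maxima, prob=0.99, n_boot=2000,
                                alpha=0.05, seed=0):
    """Bootstrap prediction interval for a flood quantile: resample the
    annual-maximum series with replacement, re-estimate the empirical
    `prob` quantile each time, and return the central (1 - alpha)
    interval of those estimates."""
    rng = random.Random(seed)
    n = len(annual_maxima)
    estimates = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(annual_maxima) for _ in range(n))
        estimates.append(sample[min(int(prob * n), n - 1)])
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Invented 100-year annual-maximum flow series (arbitrary units):
annual_maxima = [float(x) for x in range(100, 200)]
low, high = bootstrap_quantile_interval(annual_maxima)
```

The width of `(low, high)` is the bootstrap's statement of quantile uncertainty; the paper's approach instead derives the ensemble from optimal variability of the distribution parameters.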

  4. Propagation of the velocity model uncertainties to the seismic event location

    NASA Astrophysics Data System (ADS)

    Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.

    2015-01-01

Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors is that velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the formulation of the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables us to obtain more reliable hypocentre locations, as well as their associated uncertainties, accounting for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
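The propagation step reduces, in its simplest form, to pushing posterior velocity samples through the location calculation and reading off the spread. A deliberately minimal 1-D sketch with invented numbers (a homogeneous velocity and a single travel time, rather than a tomographic model and a full location):

```python
import random
import statistics

def locate_distance(travel_time, velocity_samples):
    """Toy 1-D propagation of velocity uncertainty to location: each
    velocity model drawn from the tomography posterior yields one
    candidate source distance d = v * t; the spread of the candidates
    is the velocity-induced location uncertainty."""
    distances = [v * travel_time for v in velocity_samples]
    return statistics.mean(distances), statistics.stdev(distances)

rng = random.Random(1)
# Assumed posterior for a homogeneous P velocity: 5.0 +/- 0.2 km/s.
v_samples = [rng.gauss(5.0, 0.2) for _ in range(1000)]
mean_d, sigma_d = locate_distance(2.0, v_samples)  # 2 s observed travel time
```

Even this toy shows the key point: the location uncertainty `sigma_d` scales with the velocity uncertainty, so ignoring the latter understates the former.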

  5. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters, observed in the field and measured in the lab, on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  6. Absolute frequency measurement of the ? optical clock transition in ? with an uncertainty of ? using a frequency link to international atomic time

    NASA Astrophysics Data System (ADS)

    Baynham, Charles F. A.; Godun, Rachel M.; Jones, Jonathan M.; King, Steven A.; Nisbet-Jones, Peter B. R.; Baynes, Fred; Rolland, Antoine; Baird, Patrick E. G.; Bongs, Kai; Gill, Patrick; Margolis, Helen S.

    2018-03-01

    The highly forbidden ? electric octupole transition in ? is a potential candidate for a redefinition of the SI second. We present a measurement of the absolute frequency of this optical transition, performed using a frequency link to International Atomic Time to provide traceability to the SI second. The ? optical frequency standard was operated for 76% of a 25-day period, with the absolute frequency measured to be 642 121 496 772 645.14(26) Hz. The fractional uncertainty of ? is comparable to that of the best previously reported measurement, which was made by a direct comparison to local caesium primary frequency standards.

  7. Fortnightly modulation of San Andreas tremor and low-frequency earthquakes

    DOE PAGES

    van der Elst, Nicholas J.; Delorey, Andrew A.; Shelly, David R.; ...

    2016-07-18

Earth tides modulate tremor and low-frequency earthquakes (LFEs) on faults in the vicinity of the brittle-ductile (seismic-aseismic) transition. The response to the tidal stress carries otherwise inaccessible information about fault strength and rheology. We analyze the LFE response to the fortnightly tide, which modulates the amplitude of the daily tidal stress over a 14-d cycle. LFE rate is highest during the waxing fortnightly tide, with LFEs most strongly promoted when the daily stress exceeds the previous peak stress by the widest margin. This pattern implies a threshold failure process, with slip initiated when stress exceeds the local fault strength. Furthermore, variations in sensitivity to the fortnightly modulation may reflect the degree of stress concentration on LFE-producing brittle asperities embedded within an otherwise aseismic fault.

  8. Fortnightly modulation of San Andreas tremor and low-frequency earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van der Elst, Nicholas J.; Delorey, Andrew A.; Shelly, David R.

Earth tides modulate tremor and low-frequency earthquakes (LFEs) on faults in the vicinity of the brittle-ductile (seismic-aseismic) transition. The response to the tidal stress carries otherwise inaccessible information about fault strength and rheology. We analyze the LFE response to the fortnightly tide, which modulates the amplitude of the daily tidal stress over a 14-d cycle. LFE rate is highest during the waxing fortnightly tide, with LFEs most strongly promoted when the daily stress exceeds the previous peak stress by the widest margin. This pattern implies a threshold failure process, with slip initiated when stress exceeds the local fault strength. Furthermore, variations in sensitivity to the fortnightly modulation may reflect the degree of stress concentration on LFE-producing brittle asperities embedded within an otherwise aseismic fault.

  9. Fortnightly modulation of San Andreas tremor and low-frequency earthquakes

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Delorey, Andrew; Shelly, David R.; Johnson, Paul

    2016-01-01

    Earth tides modulate tremor and low-frequency earthquakes (LFEs) on faults in the vicinity of the brittle−ductile (seismic−aseismic) transition. The response to the tidal stress carries otherwise inaccessible information about fault strength and rheology. Here, we analyze the LFE response to the fortnightly tide, which modulates the amplitude of the daily tidal stress over a 14-d cycle. LFE rate is highest during the waxing fortnightly tide, with LFEs most strongly promoted when the daily stress exceeds the previous peak stress by the widest margin. This pattern implies a threshold failure process, with slip initiated when stress exceeds the local fault strength. Variations in sensitivity to the fortnightly modulation may reflect the degree of stress concentration on LFE-producing brittle asperities embedded within an otherwise aseismic fault.
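The threshold-failure picture described above (LFEs promoted when the daily stress exceeds the previous peak stress) can be illustrated with a running-maximum calculation. The sinusoidal fortnightly envelope below is an idealization, not the measured tidal stress:

```python
import math

def exceedance_margins(daily_peaks):
    """Margin by which each day's peak tidal stress exceeds the running
    maximum of all previous daily peaks (0 where it does not). Under a
    threshold-failure process, events are most strongly promoted on the
    days with the largest positive margin."""
    margins = []
    prev_peak = daily_peaks[0]
    for amp in daily_peaks:
        margins.append(max(0.0, amp - prev_peak))
        prev_peak = max(prev_peak, amp)
    return margins

# Idealized 14-day fortnightly envelope modulating the daily stress peak:
amps = [1.0 + 0.5 * math.sin(2 * math.pi * d / 14.0) for d in range(14)]
margins = exceedance_margins(amps)
```

Only the waxing half of the cycle produces positive margins; once the envelope peaks, every later day sits below the running maximum, matching the observed concentration of LFEs in the waxing fortnightly tide.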

  10. The evolving interaction of low-frequency earthquakes during transient slip.

    PubMed

    Frank, William B; Shapiro, Nikolaï M; Husker, Allen L; Kostoglodov, Vladimir; Gusev, Alexander A; Campillo, Michel

    2016-04-01

    Observed along the roots of seismogenic faults where the locked interface transitions to a stably sliding one, low-frequency earthquakes (LFEs) primarily occur as event bursts during slow slip. Using an event catalog from Guerrero, Mexico, we employ a statistical analysis to consider the sequence of LFEs at a single asperity as a point process, and deduce the level of time clustering from the shape of its autocorrelation function. We show that while the plate interface remains locked, LFEs behave as a simple Poisson process, whereas they become strongly clustered in time during even the smallest slow slip, consistent with interaction between different LFE sources. Our results demonstrate that bursts of LFEs can result from the collective behavior of asperities whose interaction depends on the state of the fault interface.
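The diagnostic above, reading the level of time clustering from the autocorrelation of the event sequence, can be demonstrated on synthetic catalogs. The bin size, rates, and burst length below are invented; the point is only the contrast between a Poisson-like series and a clustered one:

```python
import random

def autocorr(counts, lag):
    """Autocorrelation of a binned event-count series at a given lag.
    Near-zero values at all lags are what a simple Poisson process gives;
    persistent positive values at short lags indicate time clustering."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    cov = sum((counts[i] - mean) * (counts[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

rng = random.Random(42)
# Poisson-like: independent occupancy of each time bin.
poisson_like = [1 if rng.random() < 0.1 else 0 for _ in range(5000)]
# Clustered: bursts of events spanning 20 consecutive bins.
clustered = [0] * 5000
i = 0
while i < 5000:
    if rng.random() < 0.02:
        for j in range(i, min(i + 20, 5000)):
            clustered[j] = 1
        i += 20
    else:
        i += 1
```

The clustered series shows strong positive lag-1 autocorrelation while the Poisson-like series does not, mirroring the locked-interface versus slow-slip regimes reported for the Guerrero LFEs.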

  11. Stigma in science: the case of earthquake prediction.

    PubMed

    Joffe, Helene; Rossetto, Tiziana; Bradley, Caroline; O'Connor, Cliodhna

    2018-01-01

    This paper explores how earthquake scientists conceptualise earthquake prediction, particularly given the conviction of six earthquake scientists for manslaughter (subsequently overturned) on 22 October 2012 for having given inappropriate advice to the public prior to the L'Aquila earthquake of 6 April 2009. In the first study of its kind, semi-structured interviews were conducted with 17 earthquake scientists and the transcribed interviews were analysed thematically. The scientists primarily denigrated earthquake prediction, showing strong emotive responses and distancing themselves from earthquake 'prediction' in favour of 'forecasting'. Earthquake prediction was regarded as impossible and harmful. The stigmatisation of the subject is discussed in the light of research on boundary work and stigma in science. The evaluation reveals how mitigation becomes the more favoured endeavour, creating a normative environment that disadvantages those who continue to pursue earthquake prediction research. Recommendations are made for communication with the public on earthquake risk, with a focus on how scientists portray uncertainty. © 2018 The Author(s). Disasters © Overseas Development Institute, 2018.

  12. Dense Array Studies of Volcano-Tectonic and Long-Period Earthquakes Beneath Mount St. Helens

    NASA Astrophysics Data System (ADS)

    Glasgow, M. E.; Hansen, S. M.; Schmandt, B.; Thomas, A.

    2017-12-01

An array of 904 single-component 10-Hz geophones deployed within 15 km of Mount St. Helens (MSH) in 2014 recorded continuously for two weeks. Automated reverse-time imaging (RTI) was used to generate a catalog of 212 earthquakes. Among these, two distinct types of upper crustal (<8 km) earthquakes were classified. Volcano-tectonic (VT) and long-period (LP) earthquakes were identified using analysis of array spectrograms, envelope functions, and velocity waveforms. To remove analyst subjectivity, quantitative classification criteria were developed based on the ratio of power in high and low frequency bands and on coda duration. Prior to the 2014 experiment, upper crustal LP earthquakes had been reported at MSH only during volcanic activity. Subarray beamforming was used to distinguish between LP earthquakes and surface-generated LP signals, such as rockfall. This method confirmed 16 LP signals with horizontal velocities exceeding upper crustal P-wave velocities, which requires a subsurface hypocenter. LP and VT locations overlap in a cluster slightly east of the summit crater from 0-5 km below sea level. LP displacement spectra are similar to simple theoretical predictions for shear failure, except that they have lower corner frequencies than VT earthquakes of similar magnitude. The results indicate a distinct non-resonant source for LP earthquakes, which are located in the same source volume as some VT earthquakes (within the hypocenter uncertainty of 1 km or less). To further investigate MSH microseismicity mechanisms, an array of 142 three-component (3-C) 5-Hz geophones will record continuously for one month at MSH in Fall 2017, providing a unique dataset for a volcano earthquake source study. This array will help determine whether the LP occurrence in 2014 was transient or is still ongoing. Unlike the 2014 array, approximately 50 geophones will be deployed in the MSH summit crater directly over the majority of seismicity. RTI will be used to detect and locate earthquakes by
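A power-ratio criterion of the kind described above can be sketched as follows. The band split at 5 Hz, the threshold, and the synthetic traces are all invented here, and the study's actual criteria also use coda duration:

```python
import cmath
import math

def band_power(signal, dt, f_lo, f_hi):
    """Total spectral power with f_lo <= f < f_hi (Hz), via a naive DFT.
    Fine for short toy windows; real data call for an FFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k / (n * dt)
        if f_lo <= f < f_hi:
            coeff = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                        for j in range(n))
            power += abs(coeff) ** 2
    return power

def classify(signal, dt, split=5.0, threshold=1.0):
    """Toy VT/LP criterion: compare power above vs below `split` Hz."""
    hi = band_power(signal, dt, split, 0.5 / dt)
    lo = band_power(signal, dt, 0.5, split)
    return "VT" if hi > lo * threshold else "LP"

dt = 0.01  # 100 Hz sampling
t = [i * dt for i in range(400)]
lp_like = [math.sin(2 * math.pi * 2.0 * x) for x in t]   # 2 Hz dominant
vt_like = [math.sin(2 * math.pi * 15.0 * x) for x in t]  # 15 Hz dominant
```

Applied to these synthetic traces, the low-frequency-dominated signal classifies as LP and the high-frequency one as VT.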

  13. Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas Bartholomew

The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations, and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for that data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings - made by conventional seismometers deployed at the surface, or by geophone arrays placed in boreholes - to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occurs.

  14. Synthesis of instrumentally and historically recorded earthquakes and studying their spatial statistical relationship (A case study: Dasht-e-Biaz, Eastern Iran)

    NASA Astrophysics Data System (ADS)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-06-01

    Earthquake catalogues are the main data source in statistical seismology for long-term studies of earthquake occurrence. Studying their spatiotemporal problems is therefore important for reducing the related uncertainties in statistical seismology. A statistical tool, the time normalization method, was applied to revise the time-frequency relationship in one of the most active regions of Asia, Eastern Iran and western Afghanistan (a and b were calculated as about 8.84 and 1.99 on the exponential scale, not the logarithmic scale). A geostatistical simulation method was further utilized to reduce uncertainties in the spatial domain, producing a representative synthetic catalogue of 5361 events. The synthetic database was classified by simulated magnitude using a Geographical Information System (GIS) to reveal the underlying seismicity patterns. Although some regions of high seismicity correspond to known faults, the new method, significantly, also highlights possible locations of interest that had not been previously identified. It further reveals some previously unrecognized lineations and clusters in likely future strain release.
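    The time-frequency relationship referred to above is the Gutenberg-Richter law. On the logarithmic scale it reads log10 N(≥M) = a − bM; on the exponential scale, N(≥M) = exp(α − βM) with β = b·ln 10, which is why the quoted a and b differ from typical log10 values. A sketch of the standard maximum-likelihood b-value estimate (Aki, 1965) on a synthetic catalogue, all values illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def b_value_ml(mags, m_c, dm=0.0):
    """Aki (1965) ML estimate: b = log10(e) / (mean(M) - (Mc - dm/2)).

    dm is the magnitude binning width (0 for continuous magnitudes).
    """
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2))

# Synthetic catalogue with true b = 1.0, i.e. beta = ln(10) ~ 2.30
m_c = 2.0
mags = m_c + rng.exponential(scale=1 / np.log(10), size=50_000)
print(round(b_value_ml(mags, m_c), 2))  # close to 1.0
```

The b of 1.99 quoted in the abstract corresponds to the exponential-scale β, not the log10-scale b.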

  15. Lower crustal earthquakes in the North China Basin and implications for crustal rheology

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; Dong, Y.; Ni, S.; LI, Z.

    2017-12-01

    The North China Basin is a Mesozoic-Cenozoic continental rift basin on the eastern North China Craton. It is the central region of craton destruction and also a very seismically active area that has suffered severely from devastating earthquakes, such as the 1966 Xingtai M7.2 earthquake, the 1967 Hejian M6.3 earthquake, and the 1976 Tangshan M7.8 earthquake. We found remarkable discrepancies in depth distribution among the three earthquakes: the Xingtai and Tangshan earthquakes were both upper-crustal events occurring between 9 and 15 km depth, but the depth of the Hejian earthquake was reported as about 30-72 km, ranging from the lowermost crust to the upper mantle. To investigate the focal depths of earthquakes near the Hejian area, we developed a method to resolve focal depth for local earthquakes occurring beneath sedimentary regions using P and S converted waves. With this method, we obtained well-resolved depths for 44 local events with magnitudes between M1.0 and M3.0 during 2008 to 2016 in the Hejian seismic zone, with a mean depth uncertainty of about 2 km. The depth distribution shows abundant earthquakes at a depth of 20 km, with some events in the lower crust, but an absence of seismicity deeper than 25 km. In particular, we aimed to deduce constraints on the local crustal rheology from the depth-frequency distribution. We therefore compared the depth-frequency distribution with the crustal strength envelope and found a good fit between the depth profile in the Hejian seismic zone and the yield strength envelope of the Baikal Rift Systems. We conclude that the seismogenic thickness is 25 km and that the main deformation mechanism in the North China Basin is brittle fracture. We made two hypotheses: (1) the dominant rheological layering in the North China Basin is similar to that of the Baikal Rift Systems, which can be explained by a quartz rheology at 0-10 km depth and a diabase rheology at 10-35 km

  16. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    NASA Astrophysics Data System (ADS)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria, and it faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes had also been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and in neighbouring countries. No casualties or severe injuries were reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  17. Directly Estimating Earthquake Rupture Area using Second Moments to Reduce the Uncertainty in Stress Drop

    NASA Astrophysics Data System (ADS)

    McGuire, Jeffrey J.; Kaneko, Yoshihiro

    2018-06-01

    The key kinematic earthquake source parameters (rupture velocity, duration, and area) shed light on earthquake dynamics, provide direct constraints on stress drop, and have implications for seismic hazard. However, for moderate and small earthquakes, these parameters are usually poorly constrained owing to limitations of the standard analysis methods. Numerical experiments by Kaneko and Shearer [2014, 2015] demonstrated that standard spectral fitting techniques can lead to roughly one order of magnitude variation in stress-drop estimates that do not reflect the actual rupture properties, even for simple crack models. We utilize these models to explore an alternative approach in which we estimate the rupture area directly. For the suite of models, the area-averaged static stress drop is nearly constant for models with the same underlying friction law, yet corner-frequency-based stress-drop estimates vary by a factor of 5-10 even for noise-free data. Alternatively, we simulated inversions for the rupture area as parameterized by the second moments of the slip distribution. A natural estimate for the rupture area derived from the second moments is A=πLcWc, where Lc and Wc are the characteristic rupture length and width. This definition yields estimates of stress drop that vary by only 10% between the models but are slightly larger than the true area-averaged values. We simulate inversions for the second moments for the various models and find that the area can be estimated well when there are at least 15 available measurements of apparent duration at a variety of take-off angles. The improvement over azimuthally averaged corner-frequency-based approaches results from the second moments accounting for directivity and removing the assumption of a circular rupture area, both of which bias the standard approach. We also develop a new method that determines the minimum and maximum values of rupture area that are consistent with a particular dataset at the 95% confidence
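    The A = πLcWc estimate can be illustrated on a discretized slip model: Lc and Wc are 2√λ for the eigenvalues λ of the slip-weighted spatial second-moment matrix. The grid and the uniform elliptical slip patch below are illustrative choices; for this idealized case the estimate approximately recovers the true patch area:

```python
import numpy as np

# Illustrative fault grid (km) and a uniform elliptical slip patch with
# semi-axes 10 km (along strike) and 4 km (down dip), centred at (15, 7.5)
dx = 0.5
x, y = np.meshgrid(np.arange(0, 30, dx), np.arange(0, 15, dx), indexing="ij")
inside = ((x - 15) / 10) ** 2 + ((y - 7.5) / 4) ** 2 <= 1.0
slip = inside.astype(float)

w = slip / slip.sum()                  # slip-weighted measure
cx, cy = (w * x).sum(), (w * y).sum()  # centroid (first moments)

# Spatial second-moment matrix about the centroid
mxx = (w * (x - cx) ** 2).sum()
myy = (w * (y - cy) ** 2).sum()
mxy = (w * (x - cx) * (y - cy)).sum()
lam = np.linalg.eigvalsh([[mxx, mxy], [mxy, myy]])  # ascending eigenvalues

Wc, Lc = 2 * np.sqrt(lam[0]), 2 * np.sqrt(lam[1])  # characteristic dimensions
A = np.pi * Lc * Wc
print(round(Lc, 1), round(Wc, 1), round(A, 1))
```

For a uniform ellipse the second moment along each axis is (semi-axis)²/4, so Lc and Wc come out near 10 km and 4 km and A near the true area π·10·4 km².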

  18. Complex rupture of the 13 November 2016 Mw 7.8 Kaikoura, New Zealand earthquake: Comparison of high-frequency and low-frequency observations

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Chen, Yunguo; Wang, Qi; Mori, Jim

    2018-05-01

    We apply a back-projection analysis to determine the locations and timing of the sources of short-period (0.5 to 2 s) energy generated by the 13 November 2016 Mw 7.8 Kaikoura, New Zealand earthquake, using data from Australia and Southeast Asia. The sources of strong short-period energy are distributed northeast of the epicenter at distances of 70 to 80 km during the period 70 to 80 s after initiation. The locations of sources of long-period energy derived from global seismic and local GPS data are close to the northeastern edge of the source area and complementary to the areas of short-period energy, which occur in the converging region of the Upper Kowhal, Papatea, and Jordan Thrust faults. The obvious frequency dependence might be attributed to complexities in fault geometry, possible rupture on the subduction interface, or varying focal mechanisms during the earthquake.

  19. Sparse Representation Based Frequency Detection and Uncertainty Reduction in Blade Tip Timing Measurement for Multi-Mode Blade Vibration Monitoring

    PubMed Central

    Pan, Minghao; Yang, Yongmin; Guan, Fengjiao; Hu, Haifeng; Xu, Hailong

    2017-01-01

    The accurate monitoring of blade vibration under operating conditions is essential in turbo-machinery testing. Blade tip timing (BTT) is a promising non-contact technique for the measurement of blade vibrations. However, BTT sampling data are inherently under-sampled and contaminated with several measurement uncertainties. How to recover the frequency spectra of blade vibrations by processing these under-sampled, biased signals is a bottleneck problem. A novel BTT signal-processing method for alleviating measurement uncertainties in the recovery of multi-mode blade vibration frequency spectra is proposed in this paper. The method can be divided into four phases. First, a single measurement vector model is built by exploiting the fact that blade vibration signals are sparse in the frequency spectrum. Secondly, the uniqueness of the nonnegative sparse solution is studied in order to obtain the vibration frequency spectrum. Thirdly, typical sources of BTT measurement uncertainty are quantitatively analyzed. Finally, an improved vibration frequency spectrum recovery method is proposed to guarantee a level of sparsity in the solution when the measurement results are biased. Simulations and experiments are performed to prove the feasibility of the proposed method. The most outstanding advantage is that this method can prevent the recovered multi-mode vibration spectra from being affected by BTT measurement uncertainties without increasing the number of probes. PMID:28758952

  20. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. 
The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  1. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were:
    * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that a potential for another damaging earthquake in the Santa Cruz Mountains may still exist.
    * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults.
    * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake.
    * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it.
    * Geological and geophysical mapping indicates that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  2. Seismic hazard along a crude oil pipeline in the event of an 1811-1812 type New Madrid earthquake. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H.H.M.; Chen, C.H.S.

    1990-04-16

    This report examines the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site for an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
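    The 27 time histories per site follow from the full factorial combination of three typical values for each of the three uncertain parameters, 3 × 3 × 3 = 27. A sketch (the numeric values below are placeholders, not the report's):

```python
from itertools import product

# Three typical values for each uncertain ground-motion model parameter
# (placeholder values; the report's actual values are not reproduced here)
stress_param = [50, 100, 200]   # stress parameter, bar
cutoff_freq = [10, 15, 20]      # cutoff frequency, Hz
duration = [20, 40, 60]         # strong-motion duration, s

# One synthetic time history would be generated per combination per site
scenarios = list(product(stress_param, cutoff_freq, duration))
print(len(scenarios))  # 27
```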

  3. Accounting for genotype uncertainty in the estimation of allele frequencies in autopolyploids.

    PubMed

    Blischak, Paul D; Kubatko, Laura S; Wolfe, Andrea D

    2016-05-01

    Despite the increasing opportunity to collect large-scale data sets for population genomic analyses, the use of high-throughput sequencing to study populations of polyploids has seen little application. This is due in large part to problems associated with determining allele copy number in the genotypes of polyploid individuals (allelic dosage uncertainty-ADU), which complicates the calculation of important quantities such as allele frequencies. Here, we describe a statistical model to estimate biallelic SNP frequencies in a population of autopolyploids using high-throughput sequencing data in the form of read counts. We bridge the gap from data collection (using restriction enzyme based techniques [e.g. GBS, RADseq]) to allele frequency estimation in a unified inferential framework using a hierarchical Bayesian model to sum over genotype uncertainty. Simulated data sets were generated under various conditions for tetraploid, hexaploid and octoploid populations to evaluate the model's performance and to help guide the collection of empirical data. We also provide an implementation of our model in the R package polyfreqs and demonstrate its use with two example analyses that investigate (i) levels of expected and observed heterozygosity and (ii) model adequacy. Our simulations show that the number of individuals sampled from a population has a greater impact on estimation error than sequencing coverage. The example analyses also show that our model and software can be used to make inferences beyond the estimation of allele frequencies for autopolyploids by providing assessments of model adequacy and estimates of heterozygosity. © 2015 John Wiley & Sons Ltd.
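    The core idea of summing over genotype uncertainty can be sketched with a simplified maximum-likelihood analogue of the model: each individual's genotype (reference-allele dosage) is latent, reads are binomial given the genotype, and the genotype is binomial in the allele frequency under Hardy-Weinberg. The paper's actual implementation (polyfreqs) is hierarchical Bayesian and includes error terms; ploidy, coverage, and the error-free read model here are simplifying assumptions:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)

K = 4          # ploidy (tetraploid)
p_true = 0.3   # true reference-allele frequency
n_ind, cov = 100, 10

# Simulate latent genotypes g (dosage 0..K) and reference read counts
g = rng.binomial(K, p_true, size=n_ind)
reads = rng.binomial(cov, g / K)

def log_lik(p):
    """Marginal log-likelihood: sum over each individual's unknown genotype."""
    gs = np.arange(K + 1)
    prior = binom.pmf(gs, K, p)                    # P(g | p), HWE
    like = binom.pmf(reads[:, None], cov, gs / K)  # P(reads | g), no error term
    return np.log((like * prior).sum(axis=1)).sum()

# Grid-search ML estimate of the allele frequency
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_lik(p) for p in grid])]
print(round(p_hat, 2))
```

Replacing the grid search with MCMC over p (and a per-read error rate) gives the flavour of the full hierarchical Bayesian model.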

  4. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauged sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the important increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis exploiting the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites, and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that the use of information on extreme events, either historical flood events at gauged
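    The local-versus-regional trade-off can be illustrated with a minimal Monte Carlo experiment in the idealized, perfectly homogeneous case, where pooling helps most. The distribution, record lengths, and site count are arbitrary choices, and any regional heterogeneity would erode the pooled estimator's advantage, as the abstract emphasizes:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gumbel annual maxima; all sites identically distributed (perfectly
# homogeneous region) -- the most favourable case for regional pooling
mu, beta = 100.0, 30.0
T = 100
q_true = mu - beta * np.log(-np.log(1 - 1 / T))  # true 100-year quantile

def gumbel_quantile_mom(x, T=100):
    """Method-of-moments Gumbel fit, then the T-year quantile."""
    b = np.sqrt(6) * x.std(ddof=1) / np.pi
    m = x.mean() - 0.5772 * b
    return m - b * np.log(-np.log(1 - 1 / T))

err_local, err_regional = [], []
for _ in range(2000):
    local = rng.gumbel(mu, beta, size=20)         # one 20-year local record
    region = rng.gumbel(mu, beta, size=(10, 20))  # 10 pooled homogeneous sites
    err_local.append(gumbel_quantile_mom(local) - q_true)
    err_regional.append(gumbel_quantile_mom(region.ravel()) - q_true)

rmse = lambda e: np.sqrt(np.mean(np.square(e)))
print(round(rmse(err_local), 1), round(rmse(err_regional), 1))
```

With ten times the data, the pooled RMSE drops by roughly √10; introducing inter-site differences in mu or beta would bias the pooled estimate and shrink or reverse that gain.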

  5. Activated Very Low Frequency Earthquakes By the Slow Slip Events in the Ryukyu Subduction Zone

    NASA Astrophysics Data System (ADS)

    Nakamura, M.; Sunagawa, N.

    2014-12-01

    The Ryukyu Trench (RT), where the Philippine Sea plate is subducting, has had no known thrust earthquakes with Mw > 8.0 in the last 300 years. However, the rupture source of the 1771 tsunami has been proposed to be an Mw > 8.0 earthquake in the southern RT. Based on the dating of tsunami boulders, it has been estimated that large tsunamis occur at intervals of 150-400 years in the southern Ryukyu arc (RA) (Araoka et al., 2013), although they have not occurred for several thousand years in the central and northern Ryukyu areas (Goto et al., 2014). To address the discrepancy between the recent low moment release by earthquakes and the occurrence of paleo-tsunamis in the RT, we focus on the long-term activity of very low frequency earthquakes (VLFEs), which are good indicators of stress release on the shallow plate interface. VLFEs have been detected along the RT (Ando et al., 2012), occurring on the plate interface or in the accretionary prism. We used broadband data from the F-net of NIED along the RT and from the IRIS network. We applied two filters to all the raw broadband seismograms: a 0.02-0.05 Hz band-pass filter and a 1 Hz high-pass filter. After identification of the low-frequency events from the band-pass-filtered seismograms, local and teleseismic events were removed. We then picked the arrival time of the maximum amplitude of the surface wave of each VLFE and determined the epicenters. VLFEs occurred on the RA side within 100 km of the trench axis along the RT. The distribution of the 6670 VLFEs from 2002 to 2013 could be divided into several clusters. The principal large clusters were located at 27.1°-29.0°N, 25.5°-26.6°N, and 122.1°-122.4°E (YA). We found that the VLFEs of the YA are modulated by repeating slow slip events (SSEs) which occur beneath the southern RA. The activity of the VLFEs increased to twice its ordinary rate in the 15 days after the onset of the SSEs. Activation of the VLFEs could be generated by a low stress change of 0.02-20 kPa increase in
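    The two-filter detection scheme described above can be sketched with standard IIR filters: the narrow band-pass isolates VLFE energy, while the high-pass reveals the impulsive high-frequency energy that VLFEs lack. Sampling rate, filter order, and the synthetic signals are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20.0  # Hz; an assumed decimated broadband sampling rate

# The two filters named in the abstract
sos_vlfe = butter(4, [0.02, 0.05], btype="bandpass", fs=FS, output="sos")
sos_hf = butter(4, 1.0, btype="highpass", fs=FS, output="sos")

# Synthetic test signals: a 33 s period VLFE-like signal and a 2 Hz
# ordinary-earthquake-like signal
t = np.arange(0, 600, 1 / FS)
vlfe = np.sin(2 * np.pi * 0.03 * t)
quake = np.sin(2 * np.pi * 2.0 * t)

for name, sig in [("vlfe", vlfe), ("quake", quake)]:
    lp = sosfiltfilt(sos_vlfe, sig).std()  # energy in the VLFE band
    hf = sosfiltfilt(sos_hf, sig).std()    # energy above 1 Hz
    print(name, lp > hf)
```

A candidate VLFE is one that appears strongly in the band-passed trace but not in the high-passed one; local and teleseismic events fail that test.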

  6. The key role of eyewitnesses in rapid earthquake impact assessment

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Frédéric; Etivant, Caroline

    2014-05-01

    Uncertainties in rapid earthquake impact models are intrinsically large, even when potential indirect losses (fires, landslides, tsunami…) are excluded. The reason is that they are based on several factors which are themselves difficult to constrain, such as the geographical distribution of shaking intensity, the building type inventory, and vulnerability functions. The difficulties can be illustrated by two boundary cases. For moderate (around M6) earthquakes, the size of the potential damage zone and the epicentral location uncertainty are of comparable dimension, about 10-15 km. When such an earthquake strikes close to an urban area, as in 1999 in Athens (M5.9), earthquake location uncertainties alone can lead to dramatically different impact scenarios. Furthermore, for moderate magnitudes, the overall impact is often controlled by individual accidents, as in Molise, Italy (M5.7) in 2002, in Bingol, Turkey (M6.4) in 2003, or in Christchurch, New Zealand (M6.3), where respectively 23 of 30, 84 of 176, and 115 of 185 of the casualties perished in a single building failure. By contrast, for major earthquakes (M>7), the point source approximation is no longer valid, and impact assessment requires knowing exactly where the seismic rupture took place, whether it was unilateral or bilateral, etc., and this information is not readily available directly after the earthquake's occurrence. In-situ observations of actual impact provided by eyewitnesses can dramatically reduce impact model uncertainties. We will present the overall strategy developed at the EMSC, which comprises crowdsourcing and flashsourcing techniques, the development of citizen-operated seismic networks, and the use of social networks to engage with eyewitnesses within minutes of an earthquake's occurrence. For instance, testimonies are collected through online questionnaires available in 32 languages and automatically processed into maps of effects. Geo-located pictures are collected and then

  7. Evaluation of pollutant loads from stormwater BMPs to receiving water using load frequency curves with uncertainty analysis.

    PubMed

    Park, Daeryong; Roesner, Larry A

    2012-12-15

    This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C* model and incorporates uncertainty analysis via the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C* model. The storage, treatment, overflow and runoff model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume, and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as the representative stormwater BMP and pollutant, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency, as an alternative to load duration curves (LDCs) for evaluating the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it applies a water quality standard exceedance based on frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total maximum daily load (TMDL) compliance that could be expected from a given BMP in a watershed. Copyright © 2012 Elsevier Ltd. All rights reserved.
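    A minimal sketch of the two ingredients named above: the k-C* first-order decay model for BMP effluent concentration, C_out = C* + (C_in − C*)·exp(−kA/Q), and FOSM propagation of the variances of the influent EMC and the areal rate constant through that relation. All numeric values are illustrative, not the study's:

```python
import numpy as np

# k-C* model inputs (illustrative values)
C_star = 12.0     # irreducible background TSS concentration, mg/L
A_over_Q = 0.02   # basin area / flow rate, units inverse to k
C_in, var_Cin = 150.0, 30.0**2  # influent EMC: mean and variance
k, var_k = 30.0, 8.0**2         # areal removal rate constant: mean and variance

# Mean effluent concentration from the k-C* model
e = np.exp(-k * A_over_Q)
C_out = C_star + (C_in - C_star) * e

# FOSM: first-order variance propagation through the partial derivatives
dC_dCin = e
dC_dk = -(C_in - C_star) * A_over_Q * e
var_Cout = dC_dCin**2 * var_Cin + dC_dk**2 * var_k

print(round(C_out, 1), round(np.sqrt(var_Cout), 1))
```

The resulting mean and standard deviation of the effluent EMC are what feed the exceedance-frequency comparison against a water quality standard.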

  8. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.

  9. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes, for example, 'characteristic'? Do they 'know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case, what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in the Earth and environmental sciences. Using frequency data, however, introduces a number of problems in data analysis. The inevitably small number of data points for extreme events, and more generally the non-Gaussian statistical properties, strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a 'characteristic'-looking extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of 'eyeball' fits unconsciously (but wrongly in

  10. Finite Source Inversion for Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Parker, J. M.; Glaser, S. D.

    2017-12-01

    We produce finite source inversion results for laboratory earthquakes (LEQs) in PMMA, confirmed by video recording of the fault contact. The LEQs are generated under highly controlled laboratory conditions and recorded by an array of absolutely calibrated acoustic emission (AE) sensors. Following the method of Hartzell and Heaton (1983), we develop a solution using only the single-component AE sensors common in laboratory experiments. A set of calibration tests using glass capillary sources of varying size resolves the material characteristics and synthetic Green's functions such that uncertainty in source location is reduced to 3σ < 1 mm; typical source radii are 1 mm. Well-isolated events with corner frequencies on the order of 0.1 MHz (Mw -6) are recorded at 20 MHz and initially band-pass filtered from 0.1 to 1.0 MHz; in comparison, large earthquakes with corner frequencies around 0.1 Hz are commonly filtered from 0.1 to 1.0 Hz. We compare the results of the inversion and video recording to the slip distribution predicted by the Cattaneo partial-slip asperity model and numerical modeling. Not all asperities are large enough to resolve individually, so some results must be interpreted as the smoothed effects of clusters of tiny contacts. For large asperities, partial slip is observed originating at the asperity edges and moving inward, as predicted by the theory. Furthermore, expanding shear rupture fronts are observed as they reach resistive patches of asperities and halt or continue, depending on the relative energies of rupture and resistance.

  11. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b values in the frequency-magnitude distribution of the earthquakes in the study region decreased dramatically from background values after the Chi-Chi shock, and then gradually increased. This observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. Receiver Operating Characteristic (ROC) curves (Swets, 1988) demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in the short term after the Chi-Chi shock.
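
The RJ rate combines Gutenberg-Richter magnitude scaling with Omori-Utsu temporal decay, and is compact enough to sketch. The following is a generic illustration with placeholder parameter values (a, b, c, p are not the values fitted in this study), not the authors' MRJ code; the paper's modification would additionally make b a function of time since the mainshock.

```python
def rj_rate(t_days, m, mainshock_m, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones rate (events/day) of aftershocks with magnitude >= m
    at t_days after a mainshock; parameter values are generic placeholders."""
    return 10.0 ** (a + b * (mainshock_m - m)) / (t_days + c) ** p

def expected_count(t1, t2, m, mainshock_m, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= m in [t1, t2] days,
    from the closed-form integral of the rate (valid for p != 1)."""
    k = 10.0 ** (a + b * (mainshock_m - m))
    return k * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
```

In the MRJ variant described above, the fixed b would be replaced by a fitted, time-dependent b(t) before evaluating the rate.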

  12. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6-storey building. It is demonstrated that damage levels corresponding to 5% and 20% reductions in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  13. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.

  14. The Key Role of Eyewitnesses in Rapid Impact Assessment of Global Earthquake

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.; Etivant, C.; Frobert, L.; Godey, S.

    2014-12-01

    Uncertainties in rapid impact assessments of global earthquakes are intrinsically large because they rely on 3 main elements (ground motion prediction models, building stock inventory and related vulnerability) whose values and/or spatial variations are poorly constrained. Furthermore, variations of hypocentral location and magnitude within their respective uncertainty domains can lead to significantly different shaking levels for centers of population and change the scope of the disaster. We present the strategy and methods implemented at the Euro-Med Seismological Centre (EMSC) to rapidly collect in-situ observations on earthquake effects from eyewitnesses in order to reduce the uncertainties of rapid earthquake impact assessment. It comprises crowdsourced information (online questionnaires, pictures) as well as information derived from real-time analysis of web traffic (flashsourcing technique), and more recently the deployment of QCN (Quake Catcher Network) low-cost sensors. We underline the importance of merging the results of different methods to improve the performance and reliability of the collected data. We try to better understand and respond to public demands and expectations after earthquakes through improved information services and diversification of information tools (social networks, smartphone apps, browser add-ons…), which, in turn, drive more eyewitnesses to our services and improve data collection. We will notably present our LastQuake Twitter feed (Quakebot) and smartphone applications (iOS and Android) which only report earthquakes that matter for the public and authorities, i.e. felt and damaging earthquakes identified thanks to citizen-generated information.

  15. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  16. Earthquake source properties of a shallow induced seismic sequence in SE Brazil

    NASA Astrophysics Data System (ADS)

    Agurto-Detzel, Hans; Bianchi, Marcelo; Prieto, Germán. A.; Assumpção, Marcelo

    2017-04-01

    We study source parameters of a cluster of 21 very shallow (<1 km depth) small-magnitude (Mw < 2) earthquakes induced by gravity-driven percolation of water in SE Brazil. Using a multiple empirical Green's functions (meGf) approach, we estimate seismic moments, corner frequencies, and static stress drops of these events by inversion of their spectral ratios. For the studied magnitude range (-0.3 < Mw < 1.9), we found an increase of stress drop with seismic moment. We assess the associated uncertainties by considering different signal time windows and by performing a jackknife resampling of the spectral ratios. We also calculate seismic moments by full waveform inversion to independently validate our moments from spectral analysis. We propose repeated rupture on a fault patch at shallow depth, following continuous inflow of water, as the cause for the observed low absolute stress drop values (<1 MPa) and earthquake size dependency. To our knowledge, no other study on earthquake source properties of shallow events induced by water injection with no added pressure is available in the literature. Our study suggests that source parameter characterization may provide additional information on seismicity induced by hydraulic stimulation.

  17. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma, following the Prague sequence in 2011. A network of four local broadband seismometers deployed within a day of the Mw 4.4 event, along with six Oklahoma NetQuakes stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near-vertical, NE-SW striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 meters thickness on either side of the best-fitting fault surface. We use an equivalency class algorithm to identify clusters of repeating events, defined as event pairs with median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While clusters are confined to tight dimensions, commonly 80 m x 200 m, aftershocks occur in 3 distinct ~2 km x 2 km patches along the fault. Our analysis suggests that with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and Southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
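
The equivalency-class grouping of repeating events can be sketched as a union-find pass over the event pairs whose median correlation exceeds the threshold. This is a generic illustration of that clustering step, not the authors' code; the pair list would come from a separate cross-correlation stage.

```python
def find(parent, i):
    # Path-compressing find for the union-find structure.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_events(n_events, similar_pairs):
    """Group events into repeating-event clusters, given pairs whose median
    cross-correlation exceeded the threshold (0.97 in the study above)."""
    parent = list(range(n_events))
    for i, j in similar_pairs:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[rj] = ri          # merge the two equivalence classes
    clusters = {}
    for i in range(n_events):
        clusters.setdefault(find(parent, i), []).append(i)
    return [c for c in clusters.values() if len(c) > 1]
```

Doublets appear naturally here as clusters of size two; singletons are dropped.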

  18. Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Gilmour, E.; Daub, E. G.

    2017-12-01

    Seismicity in Oklahoma has been sharply increasing as the result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Induced earthquakes have only been occurring in the central US for a short time; therefore, too few induced earthquakes have been observed in this region to know their maximum magnitude. The lack of knowledge regarding the maximum magnitude of induced earthquakes means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear if there are limits to their magnitudes. An estimate of the maximum magnitude of the induced earthquakes is crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, those estimates are observational or statistical, and cannot take into account the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate. The simulations use the known parameters of principal stress orientation and rupture locations. We vary the other unknown parameters of the rupture simulations to obtain a large number of rupture simulation results reflecting different possible sets of parameters, and use these results to train a neural network to complete the rupture simulations. Then, using a Markov chain Monte Carlo method to check different combinations of parameters, the trained neural network is used to create synthetic magnitude-frequency distributions to compare to the real earthquake catalog. This method allows us to find sets of parameters that are
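
One small piece of the workflow described above, generating a synthetic magnitude-frequency sample for a trial maximum magnitude, can be sketched as follows. The sampler is a standard inverse-CDF draw from a Gutenberg-Richter law truncated at m_max, and is only a stand-in for the neural-network-generated distributions in the abstract; all parameter values are illustrative.

```python
import math
import random

def sample_truncated_gr(n, m_min, m_max, b=1.0, seed=1):
    """Draw n magnitudes from a Gutenberg-Richter law truncated at a trial
    maximum magnitude m_max, via inverse-CDF sampling."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    # Normalizing constant of the truncated exponential on [m_min, m_max].
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return [m_min - math.log(1.0 - rng.random() * c) / beta for _ in range(n)]
```

Histograms of such samples, for a grid of candidate m_max values, are what would be compared against the observed catalog.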

  19. Toward standardization of slow earthquake catalog -Development of database website-

    NASA Astrophysics Data System (ADS)

    Kano, M.; Aso, N.; Annoura, S.; Arai, R.; Ito, Y.; Kamaya, N.; Maury, J.; Nakamura, M.; Nishimura, T.; Obana, K.; Sugioka, H.; Takagi, R.; Takahashi, T.; Takeo, A.; Yamashita, Y.; Matsuzawa, T.; Ide, S.; Obara, K.

    2017-12-01

    Slow earthquakes have now been widely discovered in the world based on the recent development of geodetic and seismic observations. Many researchers detect a wide frequency range of slow earthquakes including low frequency tremors, low frequency earthquakes, very low frequency earthquakes and slow slip events by using various methods. Catalogs of the detected slow earthquakes are open to us in different formats by each referring paper or through a website (e.g., Wech 2010; Idehara et al. 2014). However, we need to download catalogs from different sources, to deal with unformatted catalogs and to understand the characteristics of different catalogs, which may be somewhat complex especially for those who are not familiar with slow earthquakes. In order to standardize slow earthquake catalogs and to make such a complicated work easier, Scientific Research on Innovative Areas "Science of Slow Earthquakes" has been developing a slow earthquake catalog website. In the website, we can plot locations of various slow earthquakes via the Google Maps by compiling a variety of slow earthquake catalogs including slow slip events. This enables us to clearly visualize spatial relations among slow earthquakes at a glance and to compare the regional activities of slow earthquakes or the locations of different catalogs. In addition, we can download catalogs in the unified format and refer the information on each catalog on the single website. Such standardization will make it more convenient for users to utilize the previous achievements and to promote research on slow earthquakes, which eventually leads to collaborations with researchers in various fields and further understanding of the mechanisms, environmental conditions, and underlying physics of slow earthquakes. Furthermore, we expect that the website has a leading role in the international standardization of slow earthquake catalogs. We report the overview of the website and the progress of construction. 
Acknowledgment: This

  20. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. 
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one

  1. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters for the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
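
The prior-propagation idea (assume a density for the uncertain parameters, push samples through the model, and summarize the FRF scatter) can be sketched with a purely mechanical SDOF frequency response standing in for the harvester's electromechanical model. The 10% coefficients of variation and the nominal mass/stiffness values below are illustrative assumptions, not the tabulated variabilities from the paper.

```python
import math
import random

def frf_amplitude(freq_hz, m, k, c):
    """|H(w)| of a SDOF oscillator; a stand-in for the harvester FRF."""
    w = 2.0 * math.pi * freq_hz
    return 1.0 / math.sqrt((k - m * w * w) ** 2 + (c * w) ** 2)

def propagate_frf(freq_hz, n=5000, seed=0):
    """Prior Monte Carlo propagation: lognormal scatter (10% c.o.v.,
    illustrative) on mass and stiffness, fixed damping; returns the
    mean and standard deviation of |H| at one frequency."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        m = 0.01 * math.exp(rng.gauss(0.0, 0.1))    # mass, kg
        k = 4000.0 * math.exp(rng.gauss(0.0, 0.1))  # stiffness, N/m
        samples.append(frf_amplitude(freq_hz, m, k, 0.5))
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var)
```

Repeating this over a frequency grid yields the uncertainty band around the FRF; Sobol indices would then apportion that variance among the parameters.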

  2. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
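
The short-term-average/long-term-average trigger is simple to sketch on a tweet-count time series. The window lengths and threshold below are illustrative placeholders, not the USGS tuning.

```python
def sta_lta_detect(series, sta_n=3, lta_n=30, threshold=5.0):
    """Return indices where the short-term average of a tweet-frequency
    series exceeds `threshold` times the trailing long-term average."""
    detections = []
    for i in range(lta_n, len(series)):
        sta = sum(series[i - sta_n:i]) / sta_n     # recent activity
        lta = sum(series[i - lta_n:i]) / lta_n     # background level
        if lta > 0 and sta / lta >= threshold:
            detections.append(i)
    return detections
```

A steady background rate never triggers; a sudden burst of "earthquake" tweets does, a few samples after its onset.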

  3. Uncertainty As a Trigger for a Paradigm Change in Science Communication

    NASA Astrophysics Data System (ADS)

    Schneider, S.

    2014-12-01

    Over the last decade, the need to communicate uncertainty has increased. The climate and environmental sciences have faced massive propaganda campaigns by global industry and astroturf organizations. These organizations exploit the deep societal mistrust of uncertainty to allege unethical and intentional delusion of decision makers and the public by scientists and their consultatory function. Scientists who openly communicate the uncertainty of climate model calculations, earthquake occurrence frequencies, or possible side effects of genetically manipulated seeds have to face massive campaigns against their research, and sometimes against their person and life as well. Hence, new strategies to communicate uncertainty have to address the societal roots of the misunderstanding of the concept of uncertainty itself. Evolutionary biology has shown that the human mind is well suited for practical decision making by its sensory structures. Therefore, many of the irrational concepts about uncertainty are mitigated if data are presented in formats the brain is adapted to understand. Ultimately, the impact of uncertainty on the decision-making process is dominated by preconceptions about terms such as uncertainty, vagueness or probabilities. Parallel to the increasing role of scientific uncertainty in strategic communication, science communicators, for example at the Research and Development Program GEOTECHNOLOGIEN, have developed a number of techniques to master the challenge of putting uncertainty in the focus. By raising awareness of scientific uncertainty as a driving force for scientific development and evolution, the public perspective on uncertainty is changing. While the first steps to implement this process are under way, the value of uncertainty is still underestimated by the public and in politics. Therefore, science communicators need new and innovative ways to talk about scientific uncertainty.

  4. Waveform classification of volcanic low-frequency earthquake swarms and its implication at Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Green, David N.; Neuberg, Jürgen

    2006-05-01

    Low-frequency volcanic earthquakes are indicators of magma transport and activity within shallow conduit systems. At a number of volcanoes, these events exhibit a high degree of waveform similarity providing a criterion for classification. Using cross-correlation techniques to quantify the degree of similarity, we develop a method to sort events into families containing comparable waveforms. Events within a family have been triggered within one small source volume from which the seismic wave has then travelled along an identical path to the receiver. This method was applied to a series of 16 low-frequency earthquake swarms, well correlated with cyclic deformation recorded by tiltmeters, at Soufrière Hills Volcano, Montserrat, in June 1997. Nine waveform groups were identified containing more than 45 events each. The families are repeated across swarms with only small changes in waveform, indicating that the seismic source location is stable with time. The low-frequency seismic swarms begin prior to the point at which inflation starts to decelerate, suggesting that the seismicity indicates or even initiates a depressurisation process. A major dome collapse occurred within the time window considered, removing the top 100 m of the dome. This event caused activity within some families to pause for several cycles before reappearing. This shows that the collapse did not permanently disrupt the source mechanism or the path of the seismic waves.
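
The family-sorting step rests on a normalized cross-correlation measure of waveform similarity, which can be sketched as follows. This is a generic zero-lag illustration, not the authors' implementation (which would also scan over lags); the 0.8 threshold is a placeholder.

```python
import math

def norm_xcorr(a, b):
    """Zero-lag normalized cross-correlation of two equal-length waveforms."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def assign_to_family(event, masters, threshold=0.8):
    """Index of the first master waveform the event matches above
    `threshold`, or None if it should start a new family."""
    for idx, master in enumerate(masters):
        if norm_xcorr(event, master) >= threshold:
            return idx
    return None
```

Because the measure is amplitude-normalized, events of different sizes from the same small source volume still fall into one family.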

  5. How complete is the ISC-GEM Global Earthquake Catalog?

    USGS Publications Warehouse

    Michael, Andrew J.

    2014-01-01

    The International Seismological Centre, in collaboration with the Global Earthquake Model effort, has released a new global earthquake catalog, covering the time period from 1900 through the end of 2009. In order to use this catalog for global earthquake studies, I determined the magnitude of completeness (Mc) as a function of time by dividing the earthquakes shallower than 60 km into 7 time periods based on major changes in catalog processing and data availability and applying 4 objective methods to determine Mc, with uncertainties determined by non-parametric bootstrapping. Deeper events were divided into 2 time periods. Due to differences between the 4 methods, the final Mc was determined subjectively by examining the features that each method focused on in both the cumulative and binned magnitude frequency distributions. The time periods and Mc values for shallow events are: 1900-1917, Mc=7.7; 1918-1939, Mc=7.0; 1940-1954, Mc=6.8; 1955-1963, Mc=6.5; 1964-1975, Mc=6.0; 1976-2003, Mc=5.8; and 2004-2009, Mc=5.7. Using these Mc values for the longest time periods they are valid for (e.g. 1918-2009, 1940-2009,…) the shallow data fits a Gutenberg-Richter distribution with b=1.05 and a=8.3, within 1 standard deviation, with no declustering. The exception is for time periods that include 1900-1917 in which there are only 33 events with M≥ Mc and for those few data b=2.15±0.46. That result calls for further investigations for this time period, ideally with a larger number of earthquakes. For deep events, the results are Mc=7.1 for 1900-1963, although the early data are problematic; and Mc=5.7 for 1964-2009. For that latter time period, b=0.99 and a=7.3.
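
Gutenberg-Richter b-value fits like those above are commonly obtained with the Aki/Utsu maximum-likelihood estimator, sketched below. This is the standard textbook estimator, not necessarily one of the 4 objective methods used in the study; the half-bin correction applies when magnitudes are binned at width dm.

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc, with the
    standard half-bin correction dm/2 for binned magnitudes."""
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

The estimator depends only on the mean magnitude above completeness, which is why a correct Mc is so important: including events below Mc biases the mean low and the b-value high.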

  6. Exploring uncertainties in probabilistic seismic hazard estimates for Quito

    NASA Astrophysics Data System (ADS)

    Beauval, Celine; Yepes, Hugo; Audin, Laurence; Alvarado, Alexandra; Nocquet, Jean-Mathieu

    2016-04-01

    In the present study, probabilistic seismic hazard estimates at 475 years return period for Quito, capital city of Ecuador, show that the crustal host zone is the only source zone that determines the city's hazard levels for such a return period. Therefore, the emphasis is put on identifying the uncertainties characterizing the host zone, i.e. uncertainties in the recurrence of earthquakes expected in the zone and uncertainties on the ground motions that these earthquakes may produce. As the number of local strong-ground motions is still scant, ground-motion prediction equations are imported from other regions. Exploring recurrence models for the host zone based on different observations and assumptions, and including three GMPE candidates (Akkar and Bommer 2010, Zhao et al. 2006, Boore and Atkinson 2008), we obtain a significant variability on the estimated acceleration at 475 years (site coordinates: -78.51 in longitude and -0.2 in latitude, VS30 760 m/s): 1) Considering historical earthquake catalogs, and relying on frequency-magnitude distributions where rates for magnitudes 6-7 are extrapolated from statistics of magnitudes 4.5-6.0 mostly in the 20th century, the PGA varies between 0.28g and 0.55g with a mean value around 0.4g. The results show that both the uncertainties in the GMPE choice and in the seismicity model are responsible for this variability. 2) Considering slip rates inferred from geodetic measurements across the Quito fault system, and assuming that most of the deformation occurs seismically (conservative hypothesis), leads to a much greater range of accelerations, 0.43 to 0.73g for the PGA (with a mean of 0.55g). 3) Considering slip rates inferred from geodetic measurements, and assuming that only 50% of the deformation is released in earthquakes (partially locked fault, model based on 15 years of GPS data), leads to a range of accelerations 0.32g to 0.58g for the PGA, with a mean of 0.42g. 
These accelerations are in agreement
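
The slip-rate-based recurrence models in points 2 and 3 rest on converting a geodetic slip rate into a seismic moment-rate budget. A minimal sketch of that conversion, using the Hanks-Kanamori moment-magnitude relation and hypothetical fault dimensions (the GMPE and hazard-integration steps are not reproduced):

```python
def seismic_moment(mw):
    """Scalar seismic moment in N*m (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

def recurrence_years(slip_rate_mm_yr, fault_area_km2, mw, coupling=1.0, mu=3.0e10):
    """Mean recurrence time (years) of a characteristic Mw event if a fraction
    `coupling` of the geodetic moment-rate budget is released seismically.
    mu is rigidity in Pa; the slip rate and fault area are hypothetical."""
    moment_rate = mu * (fault_area_km2 * 1.0e6) * (slip_rate_mm_yr * 1.0e-3) * coupling
    return seismic_moment(mw) / moment_rate
```

Halving the coupling doubles the recurrence time of the characteristic event, which is the sense in which the 50%-coupling model of point 3 lowers the hazard relative to the fully seismic hypothesis of point 2.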

  7. Earthquake mechanism and seafloor deformation for tsunami generation

    USGS Publications Warehouse

    Geist, Eric L.; Oglesby, David D.; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Tsunamis are generated in the ocean by rapidly displacing the entire water column over a significant area. The potential energy resulting from this disturbance is balanced with the kinetic energy of the waves during propagation. Only a handful of submarine geologic phenomena can generate tsunamis: large-magnitude earthquakes, large landslides, and volcanic processes. Asteroid and subaerial landslide impacts can generate tsunami waves from above the water. Earthquakes are by far the most common generator of tsunamis. Generally, earthquakes greater than magnitude (M) 6.5–7 can generate tsunamis if they occur beneath an ocean and if they result in predominantly vertical displacement. One of the greatest uncertainties in both deterministic and probabilistic hazard assessments of tsunamis is computing seafloor deformation for earthquakes of a given magnitude.
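
Computing the actual seafloor deformation requires a dislocation model (e.g. an Okada-type solution), but the first step, converting magnitude into moment and an implied average slip that sets the scale of the displacement, can be sketched as follows; the fault area and rigidity below are illustrative assumptions.

```python
def moment_from_mw(mw):
    """Scalar seismic moment in N*m (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

def average_slip_m(mw, fault_area_km2, mu=3.0e10):
    """Average coseismic slip (m) implied by M0 = mu * A * u, with rigidity
    mu in Pa. The slip sets the scale of seafloor displacement in simple
    dislocation models; the vertical component depends on fault geometry."""
    return moment_from_mw(mw) / (mu * fault_area_km2 * 1.0e6)
```

For an Mw 9.0 event on a hypothetical 1000 km x 200 km rupture this gives average slip of several meters, of which only the vertically-resolved part displaces the water column, which is why dip and depth dominate the uncertainty discussed above.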

  8. Earthquake Damping Device for Steel Frame

    NASA Astrophysics Data System (ADS)

    Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida

    2018-04-01

    Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquakes occur. Therefore, many design codes have been reviewed and new technologies introduced to resist earthquake energy, especially in buildings, to avoid collapse. The tuned mass damper is one of the earthquake-reduction devices installed on structures to minimise earthquake effects. This study analyses the effectiveness of a tuned mass damper through experimental work and finite element modelling. Comparisons are made between these two models under harmonic excitation. Based on the results, it is shown that installing a tuned mass damper reduces the dynamic response of the frame, but only at certain input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the responses. In conclusion, to design a proper damper, detailed analysis must be carried out to ensure the design is adequate for the location of the structure and its specific ground accelerations.
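
The finding that the damper helps only near certain input frequencies is the expected behavior of a tuned device. Classical Den Hartog tuning, a standard textbook result and not a formula from this paper, shows how the damper is matched to one structural frequency:

```python
import math

def tmd_design(mass_ratio, structure_freq_hz):
    """Den Hartog's classical optimum for a tuned mass damper on an undamped
    SDOF structure under harmonic excitation: returns the damper's tuned
    frequency (Hz) and its optimal damping ratio."""
    f_opt = 1.0 / (1.0 + mass_ratio)          # damper/structure frequency ratio
    zeta_opt = math.sqrt(3.0 * mass_ratio / (8.0 * (1.0 + mass_ratio) ** 3))
    return f_opt * structure_freq_hz, zeta_opt
```

Away from the tuned frequency the damper contributes little, consistent with the experimental observation above that it failed at the highest input frequency applied.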

  9. High-frequency spectral falloff of earthquakes, fractal dimension of complex rupture, b value, and the scaling of strength on faults

    USGS Publications Warehouse

    Frankel, A.

    1991-01-01

The high-frequency falloff ω^-γ of earthquake displacement spectra and the b value of aftershock sequences are attributed to the character of spatially varying strength along fault zones. I assume that the high frequency energy of a main shock is produced by a self-similar distribution of subevents, where the number of subevents with radii greater than R is proportional to R^-D, D being the fractal dimension. In the model, an earthquake is composed of a hierarchical set of smaller earthquakes. The static stress drop is parameterized to be proportional to R^ε, and strength is assumed to be proportional to static stress drop. I find that a distribution of subevents with D = 2 and stress drop independent of seismic moment (ε = 0) produces a main shock with an ω^-2 falloff, if the subevent areas fill the rupture area of the main shock. By equating subevents to "islands" of high stress of a random, self-similar stress field on a fault, I relate D to the scaling of strength on a fault, such that D = 2 - ε. Thus D = 2 corresponds to constant stress drop scaling (ε = 0) and scale-invariant fault strength. A self-similar model of aftershock rupture zones on a fault is used to determine the relationship between the b value, the size distribution of aftershock rupture zones, and the scaling of strength on a fault. -from Author
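The subevent population in this model, with the number of subevents of radius greater than R proportional to R^-D, can be sketched numerically; the sampling scheme below is a generic power-law draw for illustration, not code from the study.

```python
# Sketch: a self-similar subevent population with N(>R) proportional to R^-D
# (here D = 2), recovering D from the cumulative size distribution.
import numpy as np

rng = np.random.default_rng(0)
D = 2.0
R_min, n = 100.0, 20000                 # smallest subevent radius (m), count

# Inverse-CDF sampling of a power law: P(>R) = (R/R_min)^-D
u = rng.uniform(size=n)
R = R_min * u ** (-1.0 / D)

# Recover D from the rank-size (cumulative) distribution by log-log fit
R_sorted = np.sort(R)[::-1]
N_gt = np.arange(1, n + 1)              # rank = number of subevents >= R
slope, _ = np.polyfit(np.log(R_sorted), np.log(N_gt), 1)
print(f"recovered fractal dimension D ~ {-slope:.2f}")
```

With D = 2 the total subevent area converges in a way that allows the subevents to tile the main-shock rupture area, which is the condition the abstract invokes for the ω^-2 falloff.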

  10. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    NASA Astrophysics Data System (ADS)

    Chapman, Martin Colby

    1998-12-01

The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity, V_ea, and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon (NEHRP) site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz.
The regression
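A regression of the kind described, fitting a ground-motion parameter as a function of magnitude, distance, and site class, can be sketched with ordinary least squares; the functional form, coefficients, and synthetic data below are hypothetical stand-ins, not the study's model.

```python
# Sketch: attenuation-style regression ln Y = c0 + c1*M + c2*ln(sqrt(r^2+h^2))
# + c3*S on synthetic data.  Form and coefficients are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 500
M = rng.uniform(5.0, 7.5, n)            # magnitude
r = rng.uniform(5.0, 200.0, n)          # distance (km)
S = rng.integers(0, 2, n).astype(float) # crude binary site-class flag
h = 8.0                                 # fictitious depth term (km)

c_true = np.array([1.2, 1.1, -1.3, 0.4])
X = np.column_stack([np.ones(n), M, np.log(np.hypot(r, h)), S])
lnY = X @ c_true + rng.normal(0.0, 0.5, n)   # synthetic observations

c_hat, *_ = np.linalg.lstsq(X, lnY, rcond=None)
sigma = float(np.std(lnY - X @ c_hat))       # regression standard deviation
print(np.round(c_hat, 2), round(sigma, 2))
```

Comparing the residual standard deviation sigma for two parameters fit with the same design matrix is exactly the comparison the abstract makes between the input-energy velocity and PSV.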

  11. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen

Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance.
The
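The core MUSIC idea, scanning a steering vector against the noise subspace of the data covariance, can be sketched for a toy narrowband array; this is a generic illustration of the high-resolution principle, not the multitaper seismic adaptation described in the record.

```python
# Sketch: MUSIC pseudospectrum for one plane wave on a uniform linear array.
import numpy as np

rng = np.random.default_rng(2)
n_sens, n_snap = 12, 200
d_lam = 0.5                              # sensor spacing in wavelengths
theta_true = 35.0                        # arrival azimuth (degrees)

def steering(theta_deg):
    phase = 2j * np.pi * d_lam * np.arange(n_sens) * np.sin(np.radians(theta_deg))
    return np.exp(phase)

# Snapshots: one coherent source plus white noise
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
noise = 0.1 * (rng.normal(size=(n_sens, n_snap))
               + 1j * rng.normal(size=(n_sens, n_snap)))
Y = np.outer(steering(theta_true), s) + noise

Rxx = Y @ Y.conj().T / n_snap            # sample covariance matrix
w, V = np.linalg.eigh(Rxx)               # ascending eigenvalues
En = V[:, :-1]                           # noise subspace (one source assumed)

thetas = np.arange(-90.0, 90.0, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in thetas]
peak = thetas[int(np.argmax(P))]
print(f"MUSIC peak at {peak:.1f} deg")
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace, which is what gives MUSIC its resolution advantage over plain beamforming.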

  12. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and about one-half of all policies in highly earthquake prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified but would instead be calculated directly on the basis of indexed ground motion levels and damage.
The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  13. Physics of Earthquake Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  14. VLF/LF Radio Sounding of Ionospheric Perturbations Associated with Earthquakes

    PubMed Central

    Hayakawa, Masashi

    2007-01-01

It has recently been recognized that the ionosphere is very sensitive to seismic effects, and the detection of ionospheric perturbations associated with earthquakes seems to be very promising for short-term earthquake prediction. We have proposed a possible use of VLF/LF (very low frequency (3-30 kHz) / low frequency (30-300 kHz)) radio sounding of the seismo-ionospheric perturbations. A brief history of the use of subionospheric VLF/LF propagation for short-term earthquake prediction is given, followed by a significant finding of ionospheric perturbation for the Kobe earthquake in 1995. After showing previous VLF/LF results, we present the latest VLF/LF findings: one is the statistical correlation of the ionospheric perturbation with earthquakes, and the second is a case study for the Sumatra earthquake in December 2004, indicating the spatial scale and dynamics of ionospheric perturbation for this earthquake.

  15. Characteristics of strong ground motion generation areas by fully dynamic earthquake cycles

    NASA Astrophysics Data System (ADS)

    Galvez, P.; Somerville, P.; Ampuero, J. P.; Petukhin, A.; Yindi, L.

    2016-12-01

During recent subduction zone earthquakes (2010 Mw 8.8 Maule and 2011 Mw 9.0 Tohoku), high frequency ground motion radiation has been detected in deep regions of seismogenic zones. By semblance analysis of wave packets, Kurahashi & Irikura (2013) found strong ground motion generation areas (SMGAs) located in the down-dip region of the 2011 Tohoku rupture. To reproduce the rupture sequence of SMGAs and replicate their rupture time and ground motions, we extended previous work on dynamic rupture simulations with slip reactivation (Galvez et al., 2016). We adjusted stresses on the southernmost SMGAs of the Kurahashi & Irikura (2013) model to reproduce the observed peak ground velocity recorded at seismic stations along Japan for periods up to 5 seconds. To generate higher frequency ground motions we input the rupture time, final slip and slip velocity of the dynamic model into the stochastic ground motion generator of Graves & Pitarka (2010). Our results are in agreement with the ground motions recorded at the KiK-net and K-NET stations. While we reproduced the recorded ground motions of the 2011 Tohoku event, it is unknown whether the characteristics and location of SMGAs will persist in future large earthquakes in this region. Although the SMGAs have large peak slip velocities, the areas of largest final slip are located elsewhere. To elucidate whether this anti-correlation persists in time, we conducted earthquake cycle simulations and analysed the spatial correlation of peak slip velocities, stress drops and final slip of main events. We also investigated whether or not the SMGAs migrate to other regions of the seismic zone. To perform this study, we coupled the quasi-dynamic boundary element solver QDYN (Luo & Ampuero, 2015) and the dynamic spectral element solver SPECFEM3D (Galvez et al., 2014; 2016). The workflow alternates between inter-seismic periods solved with QDYN and coseismic periods solved with SPECFEM3D, with an automated switch based on slip rate.
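The alternating quasi-dynamic/dynamic workflow can be sketched schematically; the solver stand-ins and threshold below are placeholders for illustration, not the actual QDYN or SPECFEM3D interfaces.

```python
# Sketch: alternating earthquake-cycle workflow that hands off from a slow
# quasi-static solver to a dynamic solver when slip rate crosses a threshold.
V_DYN = 1e-3   # slip-rate threshold (m/s) for switching solvers (assumed)

def quasi_static_step(state):
    # stand-in for a QDYN-like interseismic step: loading raises slip rate
    state["slip_rate"] *= 1.5
    return state

def dynamic_event(state):
    # stand-in for a SPECFEM3D-like coseismic phase: relaxes the fault
    state["events"] += 1
    state["slip_rate"] = 1e-9
    return state

state = {"slip_rate": 1e-9, "events": 0}
for _ in range(100):                       # automated switch on slip rate
    if state["slip_rate"] >= V_DYN:
        state = dynamic_event(state)
    else:
        state = quasi_static_step(state)
print(f"simulated {state['events']} seismic cycles")
```

The essential design choice is that the switch is driven by a fault-state variable (slip rate) rather than by a fixed schedule, so the cycle length emerges from the physics.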

  16. Earthquake and ambient vibration monitoring of the steel-frame UCLA factor building

    USGS Publications Warehouse

    Kohler, M.D.; Davis, P.M.; Safak, E.

    2005-01-01

Dynamic property measurements of the moment-resisting steel-frame University of California, Los Angeles, Factor building are being made to assess how forces are distributed over the building. Fourier amplitude spectra have been calculated from several intervals of ambient vibrations, a 24-hour period of strong winds, and from the 28 March 2003 Encino, California (ML = 2.9), the 3 September 2002 Yorba Linda, California (ML = 4.7), and the 3 November 2002 Central Alaska (Mw = 7.9) earthquakes. Measurements made from the ambient vibration records show that the first-mode frequency of horizontal vibration is between 0.55 and 0.6 Hz. The second horizontal mode has a frequency between 1.6 and 1.9 Hz. In contrast, the first-mode frequencies measured from earthquake data are about 0.05 to 0.1 Hz lower than those corresponding to ambient vibration recordings, indicating softening of the soil-structure system as amplitudes become larger. The frequencies revert to pre-earthquake levels within five minutes of the Yorba Linda earthquake. Shaking due to strong winds that occurred during the Encino earthquake dominates the frequency decrease, which correlates in time with the duration of the strong winds. The first shear wave recorded from the Encino and Yorba Linda earthquakes takes about 0.4 sec to travel up the 17-story building. © 2005, Earthquake Engineering Research Institute.
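The quoted shear-wave travel time and the measured first-mode band are mutually consistent under a simple fixed-base shear-beam idealization, for which the fundamental period is four times the travel time up the structure. A quick check (the idealization is ours, not the authors'):

```python
# Sketch: uniform shear beam fixed at its base has fundamental period
# T1 = 4 * tau, where tau is the wave travel time up the structure.
tau = 0.4                    # shear-wave travel time up the building (s)
T1 = 4.0 * tau               # fundamental period (s)
f1 = 1.0 / T1                # first-mode frequency (Hz)
print(f"shear-beam estimate: f1 = {f1:.3f} Hz")   # → 0.625 Hz
```

The 0.625 Hz estimate sits just above the measured 0.55-0.6 Hz band, as expected for a real frame with some base flexibility and soil-structure interaction.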

  17. Rupture, waves and earthquakes.

    PubMed

    Uenishi, Koji

    2017-01-01

Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic process around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not yet been fully understood. Instead, much former investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.

  18. Rupture, waves and earthquakes

    PubMed Central

    UENISHI, Koji

    2017-01-01

Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic process around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not yet been fully understood. Instead, much former investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but “extraordinary” phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) wave interaction that connects rupture (1) and failures (2), would therefore be indispensable. PMID:28077808

  19. Managing Uncertainty in Water Infrastructure Design Using Info-gap Robustness

    NASA Astrophysics Data System (ADS)

    Irias, X.; Cicala, D.

    2013-12-01

    Info-gap theory, a tool for managing deep uncertainty, can be of tremendous value for design of water systems in areas of high seismic risk. Maintaining reliable water service in those areas is subject to significant uncertainties including uncertainty of seismic loading, unknown seismic performance of infrastructure, uncertain costs of innovative seismic-resistant construction, unknown costs to repair seismic damage, unknown societal impacts from downtime, and more. Practically every major earthquake that strikes a population center reveals additional knowledge gaps. In situations of such deep uncertainty, info-gap can offer advantages over traditional approaches, whether deterministic approaches that use empirical safety factors to address the uncertainties involved, or probabilistic methods that attempt to characterize various stochastic properties and target a compromise between cost and reliability. The reason is that in situations of deep uncertainty, it may not be clear what safety factor would be reasonable, or even if any safety factor is sufficient to address the uncertainties, and we may lack data to characterize the situation probabilistically. Info-gap is a tool that recognizes up front that our best projection of the future may be wrong. Thus, rather than seeking a solution that is optimal for that projection, info-gap seeks a solution that works reasonably well for all plausible conditions. In other words, info-gap seeks solutions that are robust in the face of uncertainty. Info-gap has been used successfully across a wide range of disciplines including climate change science, project management, and structural design. EBMUD is currently using info-gap to help it gain insight into possible solutions for providing reliable water service to an island community within its service area. The island, containing about 75,000 customers, is particularly vulnerable to water supply disruption from earthquakes, since it has negligible water storage and is
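The info-gap robustness question posed here, how wrong the nominal model can be before a design fails its requirement, can be sketched with a toy demand-capacity example; the uncertainty model and all numbers below are illustrative assumptions, not EBMUD values.

```python
# Sketch: info-gap robustness.  Uncertainty set U(alpha) = demands within a
# fraction alpha of the nominal; requirement: capacity >= demand for all of U.
d_nom = 100.0                           # nominal seismic demand (arbitrary units)

def robustness(capacity, d_nominal):
    """Largest horizon alpha such that capacity >= (1 + alpha) * d_nominal."""
    return max(capacity / d_nominal - 1.0, 0.0)

designs = {"optimized-for-nominal": 105.0, "robust": 140.0}  # hypothetical
for name, cap in designs.items():
    print(f"{name}: robustness = {robustness(cap, d_nom):.2f}")
```

The design tuned to the nominal projection tolerates only a 5% error before failing its requirement, while the robust design tolerates 40%; ranking designs by this tolerance, rather than by performance at the nominal, is the essence of the info-gap approach described above.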

  20. Bayesian exploration of recent Chilean earthquakes

    NASA Astrophysics Data System (ADS)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah

    2016-04-01

    The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground to develop novel modeling techniques combining different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain better understanding of the distribution of co-seismic slip for those two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing thereby allowing us to maximize spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench with significant slip apparently reaching the trench or at least very close to the trench. At the inherent resolution of our models, we also present the relationship of co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.

  1. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on rapid estimates of P-wave magnitude, which generally contain large uncertainties and suffer from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test for the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that we would theoretically have been able to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful for reducing false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  2. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    USGS Publications Warehouse

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
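The earthquake potential quoted for the ~50-km-long locked patch can be checked with standard moment-magnitude arithmetic; the 10 km locking depth and 1 m of slip deficit below are illustrative assumptions, not values from the study.

```python
# Sketch: moment magnitude of a locked patch, Mw = (2/3)*(log10(M0) - 9.05),
# with M0 = mu * area * slip.  Slip and width are assumed for illustration.
import math

mu = 3.0e10                  # crustal rigidity (Pa)
L, W = 50e3, 10e3            # patch length and down-dip width (m)
slip = 1.0                   # assumed coseismic slip (m)

M0 = mu * L * W * slip       # seismic moment (N*m)
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.05)
print(f"Mw ~ {Mw:.1f}")
```

With these round numbers the patch yields Mw close to 6.8, consistent with the comparison to the 1868 Hayward earthquake.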

  3. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  4. Frequency Spectrum Method-Based Stress Analysis for Oil Pipelines in Earthquake Disaster Areas

    PubMed Central

    Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao

    2015-01-01

    When a long distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine if the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After performing the numerical analysis, the primary seismic action axial, longitudinal and horizontal displacement directions and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. The designer is able to utilize this stress analysis method to perform an ultimate design for an oil pipeline in earthquake disaster areas; therefore, improving the safe operation of the pipeline. PMID:25692790

  5. Frequency spectrum method-based stress analysis for oil pipelines in earthquake disaster areas.

    PubMed

    Wu, Xiaonan; Lu, Hongfang; Huang, Kun; Wu, Shijuan; Qiao, Weibiao

    2015-01-01

    When a long distance oil pipeline crosses an earthquake disaster area, inertial force and strong ground motion can cause the pipeline stress to exceed the failure limit, resulting in bending and deformation failure. To date, researchers have performed limited safety analyses of oil pipelines in earthquake disaster areas that include stress analysis. Therefore, using the spectrum method and theory of one-dimensional beam units, CAESAR II is used to perform a dynamic earthquake analysis for an oil pipeline in the XX earthquake disaster area. This software is used to determine if the displacement and stress of the pipeline meet the standards when subjected to a strong earthquake. After performing the numerical analysis, the primary seismic action axial, longitudinal and horizontal displacement directions and the critical section of the pipeline can be located. Feasible project enhancement suggestions based on the analysis results are proposed. The designer is able to utilize this stress analysis method to perform an ultimate design for an oil pipeline in earthquake disaster areas; therefore, improving the safe operation of the pipeline.

  6. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  7. Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA

    NASA Astrophysics Data System (ADS)

    Lorito, S.

    2013-05-01

The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and the 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability of assessing earthquake and tsunami hazard and risk. However, increasingly high-quality geophysical observational networks have allowed the retrieval of more accurate models of the rupture process of mega-thrust earthquakes than ever before, thus paving the way for future improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA) methodology, in particular, is less mature than its seismic counterpart, PSHA. Recent worldwide research efforts of the tsunami science community have started to fill this gap and to define some best practices that are being progressively employed in PTHA for different regions and coasts at threat. In the first part of my talk, I will briefly review some rupture models of recent mega-thrust earthquakes, and highlight some of their surprising features that likely result in bigger error bars associated with PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture process features, posed first-order open questions which prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I will dig a bit more into a specific ongoing effort to improve PTHA methods, in particular as regards the determination of epistemic and aleatory uncertainties, and the computational feasibility of PTHA when considering the full assumed source variability. Usually, only logic trees are made explicit in PTHA studies, accounting for different possible assumptions on the source zone properties and behavior.
The selection of the earthquakes to be actually modelled is then in general made on a qualitative basis or remains implicit

  8. Spatial variations in the frequency-magnitude distribution of earthquakes at Mount Pinatubo volcano

    USGS Publications Warehouse

    Sanchez, J.J.; McNutt, S.R.; Power, J.A.; Wyss, M.

    2004-01-01

    The frequency-magnitude distribution of earthquakes measured by the b-value is mapped in two and three dimensions at Mount Pinatubo, Philippines, to a depth of 14 km below the summit. We analyzed 1406 well-located earthquakes with magnitudes MD ≥ 0.73, recorded from late June through August 1991, using the maximum likelihood method. We found that b-values are higher than normal (b = 1.0) and range between b = 1.0 and b = 1.8. The computed b-values are lower in the areas adjacent to and west-southwest of the vent, whereas two prominent regions of anomalously high b-values (b ≥ 1.7) are resolved, one located 2 km northeast of the vent between 0 and 4 km depth and a second located 5 km southeast of the vent below 8 km depth. The statistical differences between selected regions of low and high b-values are established at the 99% confidence level. The high b-value anomalies are spatially well correlated with low-velocity anomalies derived from earlier P-wave travel-time tomography studies. Our dataset was not suitable for analyzing changes in b-values as a function of time. We infer that the high b-value anomalies around Mount Pinatubo are regions of increased crack density and/or high pore pressure related to the presence of nearby magma bodies.
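    The maximum likelihood method referenced above is commonly the Aki-Utsu estimator, b = log10(e) / (mean(M) - (Mc - dM/2)). A minimal sketch under that assumption (the catalog below is a hypothetical toy example, not the Pinatubo data):

```python
import math

def b_value_mle(magnitudes, m_min, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value estimate.

    magnitudes: catalog magnitudes; only events >= m_min (the
    completeness magnitude) are used.
    dm: magnitude bin width, corrects for catalog discretization.
    """
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Hypothetical toy catalog with completeness m_min = 1.0
cat = [1.0, 1.2, 1.5, 1.3, 1.8, 1.1, 2.0, 1.4, 1.6, 1.1]
print(round(b_value_mle(cat, m_min=1.0), 2))
```

    Mapping b in space, as in the study, amounts to applying such an estimator to the events nearest each grid node and testing the differences between regions for significance.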

  9. Anomalous behavior of the ionosphere before strong earthquakes

    NASA Astrophysics Data System (ADS)

    Peddi Naidu, P.; Madhavi Latha, T.; Madhusudhana Rao, D. N.; Indira Devi, M.

    2017-12-01

    In recent years, seismo-ionospheric coupling has been studied using various ionospheric parameters such as Total Electron Content, critical frequencies, electron density, and the phase and amplitude of Very Low Frequency waves. The present study examines the behavior of the ionosphere in the pre-earthquake period of 3-4 days at various stations using the critical frequencies of the Es and F2 layers. Relative phase measurements of 16 kHz VLF wave transmissions from Rugby (UK), received at Visakhapatnam (India), are utilized to study the D-region during seismically active periods. The results show that f0Es increases a few hours before the time of occurrence of the earthquake, and daytime values of f0F2 are found to be high during the sunlit hours in the pre-earthquake period of 2-3 days. Anomalous VLF phase fluctuations are observed during the sunset hours before the earthquake event. The results are discussed in the light of the probable mechanisms proposed by previous investigators.

  10. Comparing methods for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin

    2017-04-01

    There are many methods available for locating small-magnitude point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters, which can be separated into two main branches: (1) parameters related to observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain fault geometry and seismotectonic processes and, ultimately, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).

  11. Intraplate triggered earthquakes: Observations and interpretation

    USGS Publications Warehouse

    Hough, S.E.; Seeber, L.; Armbruster, J.G.

    2003-01-01

    We present evidence that at least two of the three 1811-1812 New Madrid, central United States, mainshocks and the 1886 Charleston, South Carolina, earthquake triggered earthquakes at regional distances. In addition to previously published evidence for triggered earthquakes in the northern Kentucky/southern Ohio region in 1812, we present evidence suggesting that triggered events might have occurred in the Wabash Valley, to the south of the New Madrid Seismic Zone, and near Charleston, South Carolina. We also discuss evidence that earthquakes might have been triggered in northern Kentucky within seconds of the passage of surface waves from the 23 January 1812 New Madrid mainshock. After the 1886 Charleston earthquake, accounts suggest that triggered events occurred near Moodus, Connecticut, and in southern Indiana. Notwithstanding the uncertainty associated with analysis of historical accounts, there is evidence that at least three out of the four known Mw 7 earthquakes in the central and eastern United States seem to have triggered earthquakes at distances beyond the typically assumed aftershock zone of 1-2 mainshock fault lengths. We explore the possibility that remotely triggered earthquakes might be common in low-strain-rate regions. We suggest that in a low-strain-rate environment, permanent, nonelastic deformation might play a more important role in stress accumulation than it does in interplate crust. Using a simple model incorporating elastic and anelastic strain release, we show that, for realistic parameter values, faults in intraplate crust remain close to their failure stress for a longer part of the earthquake cycle than do faults in high-strain-rate regions. Our results further suggest that remotely triggered earthquakes occur preferentially in regions of recent and/or future seismic activity, which suggests that faults are at a critical stress state in only some areas. 
Remotely triggered earthquakes may thus serve as beacons that identify regions of

  12. Using Low-Frequency Earthquakes to Investigate Slow Slip Processes and Plate Interface Structure Beneath the Olympic Peninsula, WA

    NASA Astrophysics Data System (ADS)

    Chestler, Shelley

    This dissertation seeks to further understand the LFE source process, the role LFEs play in generating slow slip, and the utility of using LFEs to examine plate interface structure. The work involves the creation and investigation of a 2-year-long catalog of low-frequency earthquakes beneath the Olympic Peninsula, Washington. In the first chapter, we calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, WA. LFE moments range from 1.4x10^10 to 1.9x10^12 N-m (MW = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b-value near 1 (the number of events increases by a factor of 10 for each unit increase in MW), we find that the b-value is ~6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0x10^11 N-m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, sub-patch diameters, stress drops, and slip rates for LFEs during ETS events. We allow for LFEs to rupture smaller sub-patches within the LFE family patch. Models with 1-10 sub-patches produce slips of 0.1-1 mm, sub-patch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one sub-patch is often assumed, we believe 3-10 sub-patches are more likely.
In the second chapter, using high-resolution relative low-frequency
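    The exponential moment-frequency fit described in the first chapter can be illustrated with simulated data. The sketch below is synthetic (not the Olympic Peninsula catalog): it draws moments from an exponential distribution with the characteristic moment set to the abstract's 2.0x10^11 N-m and recovers that value as the sample mean, which is the maximum-likelihood estimator for an exponential:

```python
import math
import random

random.seed(0)

# Assumed characteristic (mean) moment, value taken from the abstract
M_char = 2.0e11
moments = [random.expovariate(1.0 / M_char) for _ in range(20000)]

# For an exponential distribution, the MLE of the mean is the sample mean
estimate = sum(moments) / len(moments)
print(f"recovered characteristic moment: {estimate:.2e} N-m")

# Diagnostic: for an exponential, log N(>M) falls off linearly in M
# (a power law would instead be linear in log M)
for M in (1e11, 4e11, 8e11):
    frac = sum(1 for m in moments if m > M) / len(moments)
    print(f"P(M0 > {M:.0e}) = {frac:.3f}  (model: {math.exp(-M / M_char):.3f})")
```

    Comparing the empirical survival fraction against exp(-M/M_char) is one simple way to distinguish the scale-limited (exponential) behavior reported here from the power-law scaling of regular earthquakes.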

  13. Multiple geophysical observations indicate possible splay fault activation during the 2006 Java Tsunami earthquake

    NASA Astrophysics Data System (ADS)

    Fan, W.; Bassett, D.; Denolle, M.; Shearer, P. M.; Ji, C.; Jiang, J.

    2017-12-01

    The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted 65 s with a rupture speed of 1.2 km/s, while the second stage lasted from 65 to 150 s with a rupture speed of 2.7 km/s. In addition, P-wave high-frequency radiated energy and fall-off rates indicate a rupture transition at 60 s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also collocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.

  14. High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning

    PubMed Central

    Sun, Wen; Patil, Sachin; Alterovitz, Ron

    2015-01-01

    As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645

  15. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (the ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
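    A double-corner-frequency spectrum with an intermediate f^-1 falloff can be realized with a simple parameterization (one illustrative functional form, not necessarily the one fitted in the study; the M0, f1, f2 values below are hypothetical):

```python
import math

def double_corner_spectrum(f, M0, f1, f2):
    """Illustrative double-corner source spectrum: flat below f1,
    ~f^-1 between f1 and f2, ~f^-2 above f2."""
    return M0 / (math.sqrt(1 + (f / f1) ** 2) * math.sqrt(1 + (f / f2) ** 2))

# Hypothetical parameter values for a moderate subduction event
M0, f1, f2 = 1e18, 0.05, 0.5

def loglog_slope(fa, fb):
    """Spectral falloff slope between two frequencies on log-log axes."""
    ua = double_corner_spectrum(fa, M0, f1, f2)
    ub = double_corner_spectrum(fb, M0, f1, f2)
    return (math.log10(ub) - math.log10(ua)) / (math.log10(fb) - math.log10(fa))

print(round(loglog_slope(0.1, 0.2), 2))   # intermediate band: near -1
print(round(loglog_slope(5.0, 10.0), 2))  # high-frequency band: near -2
```

    The two corners divide the spectrum into three regimes, which is what distinguishes this model from the conventional single-corner spectrum mentioned in the abstract.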

  16. Synthesis of High-Frequency Ground Motion Using Information Extracted from Low-Frequency Ground Motion

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Fujiwara, H.

    2012-12-01

    Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that combine a deterministic approach in the lower frequency band with a stochastic approach in the higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are, respectively, numerical simulations, such as finite-difference and finite-element methods based on a three-dimensional velocity structure model, and the stochastic Green's function method. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands and attempt to synthesize HF ground motion using information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and an M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake.
The method can be applied to broadband ground motion simulation for a scenario earthquake by combining numerically computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion.
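    The envelope-ratio idea can be sketched with synthetic records. Everything below is an illustrative assumption rather than the study's actual processing: two toy band-passed traces share a decay shape, the high band is scaled by a site factor of 0.5, and the method recovers that factor as the median ratio of moving-RMS envelopes, then uses it to predict the high-band envelope of a new event from its low-band envelope:

```python
import math
import random

random.seed(0)

def rms_envelope(x, win):
    """Moving RMS envelope over a boxcar window of `win` samples."""
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - win // 2): i + win // 2]
        out.append(math.sqrt(sum(s * s for s in seg) / len(seg)))
    return out

# Toy "recordings" at one site in two adjacent frequency bands:
# same coda-like decay shape, high band scaled by an assumed 0.5
n, win = 4000, 200
shape = [math.exp(-i / 800.0) for i in range(n)]
lf = [random.gauss(0, 1) * s for s in shape]          # low band
hf = [0.5 * random.gauss(0, 1) * s for s in shape]    # high band

env_lf, env_hf = rms_envelope(lf, win), rms_envelope(hf, win)
ratios = sorted(h / l for h, l in zip(env_hf, env_lf))
ratio = ratios[len(ratios) // 2]                      # median ratio

# Synthesize the high-band envelope of a new event from its low band:
env_hf_pred = [ratio * e for e in env_lf]
print(f"estimated envelope ratio: {ratio:.2f}")
```

    In the actual method the ratio is an empirical time-dependent shape per site and band pair rather than a single scalar, but the chaining from LF envelope to HF envelope works the same way.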

  17. On Strong Positive Frequency Dependencies of Quality Factors in Local-Earthquake Seismic Studies

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.; Jhajhria, Atul; Deng, Wubing

    2018-03-01

    Many observations of seismic waves from local earthquakes are interpreted in terms of the frequency-dependent quality factor Q(f) = Q0 f^η, where η is often close to or exceeds one. However, such steep positive frequency dependencies of Q require careful analysis with regard to their physical consistency. In particular, the case η = 1 corresponds to frequency-independent (elastic) amplitude decay with time and consequently requires no Q-type attenuation mechanisms. For η > 1, several problems with the physical meaning of such Q-factors occur. First, contrary to the key premise of seismic attenuation, high-frequency parts of the wavefield are enhanced with increasing propagation times relative to the low-frequency ones. Second, such attenuation cannot be implemented by mechanical models of wave-propagating media. Third, with η > 1, the velocity dispersion associated with such Q(f) occurs over an unrealistically short frequency range and has an unexpected oscillatory shape. Cases η = 1 and η > 1 are usually attributed to scattering; however, this scattering must exhibit fortuitous tuning into the observation frequency band, which appears unlikely. The reason for the above problems is that the inferred Q values are affected by the conventional single-station measurement procedure. Both parameters Q0 and η are apparent, i.e., dependent on the selected parameterization and inversion method, and they should not be directly attributed to the subsurface. For η ≈ 1, parameter Q0 actually describes the frequency-independent amplitude decay in excess of some assumed geometric spreading t^-α, where α is usually taken equal to one. The case η > 1 is not allowed physically and could serve as an indicator of problematic interpretations. Although the case 0 < η < 1 is possible, its parameters Q0 and η may also be biased by the measurement procedure. To avoid such difficulties of Q-based approaches, we recommend measuring and interpreting the amplitude-decay rates
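    The central claims are easy to check numerically with the standard attenuation factor exp(-πft/Q(f)): for η = 1 it reduces to exp(-πt/Q0), the same at every frequency, and for η > 1 it grows with frequency. The Q0, travel time, and frequency values below are hypothetical:

```python
import math

def attenuation_factor(f, t, Q0, eta):
    """Amplitude factor exp(-pi * f * t / Q(f)) with Q(f) = Q0 * f**eta."""
    return math.exp(-math.pi * f * t / (Q0 * f ** eta))

# eta = 1: the factor is exp(-pi * t / Q0) at every frequency,
# i.e., a frequency-independent (elastic) decay with time
for f in (1.0, 5.0, 25.0):
    print(f"eta=1.0, f={f:>4}: {attenuation_factor(f, t=10.0, Q0=200.0, eta=1.0):.4f}")

# eta > 1: the factor *increases* with frequency, so high frequencies
# are attenuated less than low ones -- the inconsistency noted above
print(attenuation_factor(1.0, 10.0, 200.0, 1.5) <
      attenuation_factor(25.0, 10.0, 200.0, 1.5))   # True
```
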

  18. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used instead of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating subduction and background (crustal) earthquakes separately allows for optimal use of available information and avoids significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.

  19. A Model for Low-Frequency Earthquake Slip

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-12-01

    Using high-resolution relative low-frequency earthquake (LFE) locations, we calculate the patch areas (AP) of LFE families. During episodic tremor and slip (ETS) events, we define AT as the area that slips during LFEs and ST as the total summed LFE slip. Using observed and calculated values for AP, AT, and ST, we evaluate two end-member models for LFE slip within an LFE family patch. In the ductile matrix model, LFEs produce 100% of the observed ETS slip (SETS) in distinct subpatches (i.e., AT ≪ AP). In the connected patch model, AT = AP, but ST ≪ SETS. LFEs cluster into 45 LFE families. Spatial gaps (~10 to 20 km) between LFE family clusters and smaller gaps within LFE family clusters serve as evidence that LFE slip is heterogeneous on multiple spatial scales. We find that LFE slip accounts for only ~0.2% of the slip within the slow slip zone. There are depth-dependent trends in the characteristic (mean) moment Mc and in the number of LFEs NT, both during ETS events only (Mc-ets, NT-ets) and over the entire ETS cycle (Mc-all, NT-all). During ETS, Mc decreases with downdip distance but NT does not change. Over the entire ETS cycle, Mc decreases with downdip distance, but NT increases. These observations indicate that deeper LFE slip occurs through a larger number (800-1,200) of small LFEs, while updip LFE slip occurs primarily during ETS events through a smaller number (200-600) of larger LFEs. This could indicate that the plate interface is stronger and has a higher stress threshold updip.

  20. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
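    A burst detector of the kind described, which flags when keyword-tweet frequency rises far above the background rate, can be sketched with a sliding window. The window length, threshold, and timestamps below are illustrative assumptions, not the USGS system's actual parameters:

```python
from collections import deque

def tweet_rate_detector(timestamps, window_s=60.0, threshold=20):
    """Return the time at which `threshold` keyword tweets have arrived
    within the trailing `window_s` seconds, or None if never."""
    window = deque()
    for t in timestamps:           # timestamps assumed sorted, in seconds
        window.append(t)
        while window and window[0] < t - window_s:
            window.popleft()       # drop tweets older than the window
        if len(window) >= threshold:
            return t               # detection time
    return None

# Sparse background tweets, then a burst of ~150/minute starting at t=1000 s
times = [i * 3600.0 for i in range(3)] + [1000.0 + i * 0.4 for i in range(100)]
times.sort()
print(tweet_rate_detector(times))
```

    With a background below 1 tweet per hour and a burst near 150 per minute, almost any reasonable threshold separates the two regimes, which is why the detection latency can be under a minute.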

  1. Two grave issues concerning the expected Tokai Earthquake

    NASA Astrophysics Data System (ADS)

    Mogi, K.

    2004-08-01

    The possibility of a great shallow earthquake (M 8) in the Tokai region, central Honshu, in the near future was pointed out by Mogi in 1969 and by the Coordinating Committee for Earthquake Prediction (CCEP), Japan (1970). In 1978, the government enacted the Large-Scale Earthquake Countermeasures Law and began to set up intensified observations in this region for short-term prediction of the expected Tokai earthquake. In this paper, two serious issues are pointed out, which may contribute to catastrophic effects in connection with the Tokai earthquake: 1. The danger of black-and-white predictions: According to the scenario based on the Large-Scale Earthquake Countermeasures Law, if abnormal crustal changes are observed, the Earthquake Assessment Committee (EAC) will determine whether or not there is an imminent danger. The findings are reported to the Prime Minister, who decides whether to issue an official warning statement. Administrative policy clearly stipulates the measures to be taken in response to such a warning, and because the law presupposes the ability to predict a large earthquake accurately, the prescribed countermeasures are drastic. The Tokai region is a densely populated region with high social and economic activity, and it is traversed by several vital transportation arteries. When a warning statement is issued, all transportation is to be halted. The Tokyo capital region would be cut off from the Nagoya and Osaka regions, and there would be a great impact on all of Japan. I (the former chairman of EAC) maintained that in view of the variety and complexity of precursory phenomena, it was inadvisable to attempt a black-and-white judgment as the basis for a "warning statement". I urged that the government adopt a "soft warning" system that acknowledges the uncertainty factor and that countermeasures be designed with that uncertainty in mind. 2. The danger of nuclear power plants in the focal region: Although the possibility of the

  2. Urban Landslides Induced by the 2004 Niigata-Chuetsu Earthquake

    NASA Astrophysics Data System (ADS)

    Kamai, T.; Trandafir, A. C.; Sidle, R. C.

    2005-05-01

    in the case of embankment thicknesses <8 m. The natural frequency at the shoulder of embankment slopes is inversely proportional to fill thickness and is less than the predominant frequency of the earthquake (3 Hz). Shallow embankments were unstable compared to deeper embankments because their natural frequency was close to the predominant frequency of the earthquake. Thus, we expect widespread embankment failures in megacities during the next giant earthquake, whose low-predominant-frequency shaking will originate offshore in the Pacific Ocean.

  3. Global earthquake fatalities and population

    USGS Publications Warehouse

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
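    The population-proportional Poisson scaling can be illustrated with a back-of-the-envelope calculation. The time-averaged population values below are rough assumptions of my own, not the paper's inputs (the paper integrates an actual population curve), so the resulting count differs somewhat from its 8.7±3.3 prediction:

```python
import math

n_20th = 4              # observed >100,000-fatality earthquakes, 1900-1999
pop_avg_20th = 3.3e9    # assumed time-averaged world population, 20th century
pop_avg_21st = 9.0e9    # assumed average, 21st century (peaking near 10.1e9)

# Rate proportional to population: scale the 20th-century count by the
# ratio of time-averaged populations
rate_per_capita = n_20th / pop_avg_20th
n_21st = rate_per_capita * pop_avg_21st
print(f"expected catastrophic earthquakes in 21st century: {n_21st:.1f}")

# Poisson probability of at least 8 such earthquakes this century
lam = n_21st
p_ge_8 = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(8))
print(f"P(>= 8 such earthquakes) = {p_ge_8:.2f}")
```
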

  4. Maximum Magnitude and Probabilities of Induced Earthquakes in California Geothermal Fields: Applications for a Science-Based Decision Framework

    NASA Astrophysics Data System (ADS)

    Weiser, Deborah Anne

    Induced seismicity is occurring at increasing rates around the country. Brodsky and Lajoie (2013) and others have recognized anthropogenic quakes at a few geothermal fields in California. I use three techniques to assess if there are induced earthquakes in California geothermal fields; there are three sites with clear induced seismicity: Brawley, The Geysers, and Salton Sea. Moderate to strong evidence is found at Casa Diablo, Coso, East Mesa, and Susanville. Little to no evidence is found for Heber and Wendel. I develop a set of tools to reduce or cope with the risk imposed by these earthquakes, and also to address uncertainties through simulations. I test if an earthquake catalog may be bounded by an upper magnitude limit. I address whether the earthquake record during pumping time is consistent with the past earthquake record, or if injection can explain all or some of the earthquakes. I also present ways to assess the probability of future earthquake occurrence based on past records. I summarize current legislation for eight states where induced earthquakes are of concern. Unlike tectonic earthquakes, the hazard from induced earthquakes has the potential to be modified. I discuss direct and indirect mitigation practices. I present a framework with scientific and communication techniques for assessing uncertainty, ultimately allowing more informed decisions to be made.

  5. SITE AMPLIFICATION OF EARTHQUAKE GROUND MOTION.

    USGS Publications Warehouse

    Hays, Walter W.

    1986-01-01

    When analyzing the patterns of damage in an earthquake, physical parameters of the total earthquake-site-structure system are correlated with the damage. Soil-structure interaction, the cause of damage in many earthquakes, involves the frequency-dependent response of both the soil-rock column and the structure. The response of the soil-rock column (called site amplification) is controversial because soil has strain-dependent properties that affect the way the soil column filters the input body and surface seismic waves, modifying the amplitude and phase spectra and the duration of the surface ground motion.

  6. Why earthquakes correlate weakly with the solid Earth tides: Effects of periodic stress on the rate and probability of earthquake occurrence

    USGS Publications Warehouse

    Beeler, N.M.; Lockner, D.A.

    2003-01-01

    We provide an explanation why earthquake occurrence does not correlate well with the daily solid Earth tides. The explanation is derived from analysis of laboratory experiments in which faults are loaded to quasiperiodic failure by the combined action of a constant stressing rate, intended to simulate tectonic loading, and a small sinusoidal stress, analogous to the Earth tides. Event populations whose failure times correlate with the oscillating stress show two modes of response; the response mode depends on the stressing frequency. Correlation that is consistent with stress threshold failure models, e.g., Coulomb failure, results when the period of stress oscillation exceeds a characteristic time tn; the degree of correlation between failure time and the phase of the driving stress depends on the amplitude and frequency of the stress oscillation and on the stressing rate. When the period of the oscillating stress is less than tn, the correlation is not consistent with threshold failure models, and much higher stress amplitudes are required to induce detectable correlation with the oscillating stress. The physical interpretation of tn is the duration of failure nucleation. Behavior at the higher frequencies is consistent with a second-order dependence of the fault strength on sliding rate which determines the duration of nucleation and damps the response to stress change at frequencies greater than 1/tn. Simple extrapolation of these results to the Earth suggests a very weak correlation of earthquakes with the daily Earth tides, one that would require >13,000 earthquakes to detect. On the basis of our experiments and analysis, the absence of definitive daily triggering of earthquakes by the Earth tides requires that for earthquakes, tn exceeds the daily tidal period. The experiments suggest that the minimum typical duration of earthquake nucleation on the San Andreas fault system is ~1 year.

  7. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of

  8. A new reference global instrumental earthquake catalogue (1900-2009)

    NASA Astrophysics Data System (ADS)

    Di Giacomo, D.; Engdahl, B.; Bondar, I.; Storchak, D. A.; Villasenor, A.; Bormann, P.; Lee, W.; Dando, B.; Harris, J.

    2011-12-01

For seismic hazard studies on a global and/or regional scale, accurate knowledge of the spatial distribution of seismicity, the magnitude-frequency relation and the maximum magnitudes is of fundamental importance. However, such information is normally not homogeneous (or not available) for the various seismically active regions of the Earth. To achieve the GEM objectives (www.globalquakemodel.org) of calculating and communicating earthquake risk worldwide, an improved reference global instrumental catalogue for large earthquakes spanning the entire 100+ year period of instrumental seismology is an absolute necessity. To accomplish this task, we apply the most up-to-date techniques and standard observatory practices for computing the earthquake location and magnitude. In particular, the re-location procedure benefits both from the depth determination according to Engdahl and Villaseñor (2002), and from the advanced technique recently implemented at the ISC (Bondár and Storchak, 2011) to account for correlated error structure. With regard to magnitude, starting from the re-located hypocenters, the classical surface and body-wave magnitudes are determined following the new IASPEI standards and by using amplitude-period data of phases collected from historical station bulletins (up to 1970), which were not available in digital format before the beginning of this work. Finally, the catalogue will provide moment magnitude values (including uncertainty) for each seismic event via seismic moment, via surface wave magnitude or via other magnitude types using empirical relationships. References Engdahl, E.R., and A. Villaseñor (2002). Global seismicity: 1900-1999. In: International Handbook of Earthquake and Engineering Seismology, eds. W.H.K. Lee, H. Kanamori, P.C. Jennings, and C. Kisslinger, Part A, 665-690, Academic Press, San Diego. Bondár, I., and D. Storchak (2011). Improved location procedures at the International Seismological Centre, Geophys. J. Int., doi:10.1111/j

  9. Variations of the critical foE-frequency of the ionosphere connected with earthquakes. Evaluation of observations of the vertical sounding station "Tokyo"

    NASA Astrophysics Data System (ADS)

    Liperovskaya, Elena V.; Meister, Claudia-Veronika; Hoffmann, Dieter H. H.; Silina, Alexandra S.

    2016-04-01

In the present work the critical frequencies foE and foF2 of the ionosphere are considered as possible earthquake precursors. The statistical analysis of the critical frequencies is carried out based on the hourly data of the vertical sounding station (VSS) "Kokubunji" ("Tokyo") (ϕ = 35.7o N, λ = 139.5o E, 1957-1988). Disturbances are considered on the background of seasonal and geomagnetic variations as well as the 11-year and 27-day solar variations. Special normalized parameters E and F are introduced, which represent the almost season-independent parts of foE and foF2. Days with high solar (Wolf number > 100) and geomagnetic (ΣKp > 25) activity are excluded from the analysis. For the full hourly dataset analysed, no correlation between the normalized parameters E and F is found. One day before a seismic shock, however, a positive correlation is observed. The superimposed epochs method is used to determine the temporal behaviour of E and F. It is found that E and F decrease one day before the earthquakes provided that the seismic shocks occur at distances 600 < R < 1000 km from the VSS, and that the focus of earthquakes with magnitudes M > 5.5 is situated at depths of less than 60 km. The reliability of the effect is larger than 98 %. Possible physical mechanisms of the phenomenon are discussed.
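The superimposed epochs method used above aligns the parameter series on the earthquake days and averages, so that a consistent precursory change survives while uncorrelated variability cancels. A minimal sketch on synthetic daily data (all values invented):

```python
import numpy as np

def superposed_epoch(series, event_indices, window=5):
    """Average a daily time series in aligned windows around event days."""
    stacks = [series[i - window:i + window + 1]
              for i in event_indices
              if i - window >= 0 and i + window < len(series)]
    return np.mean(stacks, axis=0)  # one value per epoch day, -window..+window

# Synthetic check: a dip on the day before each "event" survives the stacking.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 1000)        # stand-in for a normalized foE series
events = np.arange(50, 1000, 50)
x[events - 1] -= 1.0                  # precursory decrease on day -1
stack = superposed_epoch(x, events)
print(stack[4] < stack[5] - 0.5)      # epoch day -1 well below day 0 → True
```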

  10. Bibliographical search for reliable seismic moments of large earthquakes during 1900-1979 to compute MW in the ISC-GEM Global Instrumental Reference Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Lee, William H. K.; Engdahl, E. Robert

    2015-02-01

    Moment magnitude (MW) determinations from the online GCMT Catalogue of seismic moment tensor solutions (GCMT Catalog, 2011) have provided the bulk of MW values in the ISC-GEM Global Instrumental Reference Earthquake Catalogue (1900-2009) for almost all moderate-to-large earthquakes occurring after 1975. This paper describes an effort to determine MW of large earthquakes that occurred prior to the start of the digital seismograph era, based on credible assessments of thousands of seismic moment (M0) values published in the scientific literature by hundreds of individual authors. MW computed from the published M0 values (for a time period more than twice that of the digital era) are preferable to proxy MW values, especially for earthquakes with MW greater than about 8.5, for which MS is known to be underestimated or "saturated". After examining 1,123 papers, we compile a database of seismic moments and related information for 1,003 earthquakes with published M0 values, of which 967 were included in the ISC-GEM Catalogue. The remaining 36 earthquakes were not included in the Catalogue due to difficulties in their relocation because of inadequate arrival time information. However, 5 of these earthquakes with bibliographic M0 (and thus MW) are included in the Catalogue's Appendix. A search for reliable seismic moments was not successful for earthquakes prior to 1904. For each of the 967 earthquakes a "preferred" seismic moment value (if there is more than one) was selected and its uncertainty was estimated according to the data and method used. We used the IASPEI formula (IASPEI, 2005) to compute direct moment magnitudes (MW[M0]) based on the seismic moments (M0), and assigned their errors based on the uncertainties of M0. From 1900 to 1979, there are 129 great or near great earthquakes (MW ⩾ 7.75) - the bibliographic search provided direct MW values for 86 of these events (or 67%), the GCMT Catalog provided direct MW values for 8 events (or 6%), and the remaining 35
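The IASPEI (2005) moment-to-magnitude conversion used above for the direct MW[M0] values is a one-line formula, Mw = (2/3)(log10 M0 - 9.1) with M0 in N·m. A minimal implementation; the example moment value is a commonly quoted figure, not taken from this catalogue:

```python
import math

def moment_magnitude(m0):
    """IASPEI (2005) standard: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A seismic moment of 2e23 N*m, a value often quoted for the 1960 Chile
# earthquake, maps to the familiar Mw 9.5.
print(round(moment_magnitude(2.0e23), 1))  # → 9.5
```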

  11. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. Because previous works generally differ in their choice of ionospheric baseline, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here: the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as thermospheric coupling.
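A running 30-day median baseline of the kind applied above can be read as the median of the preceding 30 days at the same hour; implementations differ (trailing versus centred windows), so the following is one plausible sketch rather than the authors' exact procedure:

```python
import numpy as np

def running_median_baseline(fof2, ndays=30):
    """Trailing median over the previous `ndays` days, hour by hour.

    fof2: array of shape (days, 24) of hourly foF2 values (MHz).
    Rows before `ndays` have no baseline and are returned as NaN.
    """
    days = fof2.shape[0]
    base = np.full(fof2.shape, np.nan)
    for d in range(ndays, days):
        base[d] = np.median(fof2[d - ndays:d], axis=0)
    return base
```

Deviations `fof2 - base` (or their ratio) are then what gets flagged as "perturbations", which is why the choice of baseline matters so much to the classification.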

  12. Stress drops of induced and tectonic earthquakes in the central United States are indistinguishable.

    PubMed

    Huang, Yihe; Ellsworth, William L; Beroza, Gregory C

    2017-08-01

    Induced earthquakes currently pose a significant hazard in the central United States, but there is considerable uncertainty about the severity of their ground motions. We measure stress drops of 39 moderate-magnitude induced and tectonic earthquakes in the central United States and eastern North America. Induced earthquakes, more than half of which are shallower than 5 km, show a comparable median stress drop to tectonic earthquakes in the central United States that are dominantly strike-slip but a lower median stress drop than that of tectonic earthquakes in the eastern North America that are dominantly reverse-faulting. This suggests that ground motion prediction equations developed for tectonic earthquakes can be applied to induced earthquakes if the effects of depth and faulting style are properly considered. Our observation leads to the notion that, similar to tectonic earthquakes, induced earthquakes are driven by tectonic stresses.

  13. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion relies mostly on the azimuthal coverage of stations, data quality and prior knowledge of Earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records and uncertain Earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and exemplified that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has been steadily building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. The linearized inversion scheme adapts efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid

  14. Pre-Earthquake Unipolar Electromagnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Freund, F.

    2013-12-01

Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California, a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question of how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks in the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. Freund (2010), Estimating the seismotelluric current

  15. Warning and prevention based on estimates with large uncertainties: the case of low-frequency and large-impact events like tsunamis

    NASA Astrophysics Data System (ADS)

    Tinti, Stefano; Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo

    2013-04-01

Geoscientists often deal with hazardous processes like earthquakes, volcanic eruptions, tsunamis and hurricanes, and their research is aimed not only at a better understanding of the physical processes, but also at assessing the space and temporal evolution of a given individual event (i.e. providing short-term prediction) and the expected evolution of a group of events (i.e. providing statistical estimates referred to a given return period and a given geographical area). One of the main issues of any scientific method is how to cope with measurement errors, a topic which, in the case of forecasts of ongoing or future events, translates into how to deal with forecast uncertainties. In general, the more data are available and processed to make a prediction, the more accurate the prediction is expected to be if the scientific approach is sound, and the smaller the associated uncertainties are. However, there are several important cases where assessment has to be made with insufficient data or insufficient time for processing, which leads to large uncertainties. Two examples can be taken from tsunami science, since tsunamis are rare events that may have destructive power and very large impact. One example is the case of warning for a tsunami generated by a near-coast earthquake, an issue at the focus of the European funded project NearToWarn. Warning has to be launched before the tsunami hits the coast, that is, within a few minutes of its generation. This may imply that the data collected in such a short time are not yet enough for an accurate evaluation, also because the implemented monitoring system (if any) could be inadequate (e.g. because a dense instrumental network could be judged too expensive for rare events). The second case is long-term prevention of tsunami strikes. Tsunami infrequency may imply that the historical record for a given piece of coast is too short to capture a statistical

  16. Typhoon-driven landsliding induces earthquakes: example of the 2009 Morakot typhoon

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Jeandet, Louise; Cubas, Nadaya; Marc, Odin; Meunier, Patrick; Hovius, Niels; Simoes, Martine; Cattin, Rodolphe; Shyu, J. Bruce H.; Liang, Wen-Tzong; Theunissen, Thomas; Chiang, Shou-Hao

    2017-04-01

Extreme rainfall events can trigger numerous landslides in mountainous areas and a prolonged increase of river sediment load. The resulting mass transfer at the Earth surface in turn induces stress changes at depth, which could be sufficient to trigger shallow earthquakes. The 2009 Morakot typhoon represents a good case study, as it delivered 3 m of precipitation in 3 days and caused some of the most intense erosion ever recorded. Analysis of seismicity time series before and after the Morakot typhoon reveals a systematic increase of shallow (i.e. 0-15 km depth) earthquake frequency in the vicinity of the areas displaying a high spatial density of landslides. This step-like increase in frequency lasts for at least 2-3 years and does not follow an Omori-type aftershock sequence. Rather, it is associated with a step change of the Gutenberg-Richter b-value of the earthquake catalog. Both changes occurred in mountainous areas of southwest Taiwan, where typhoon Morakot caused extensive landsliding. These spatial and temporal correlations strongly suggest a causal relationship between the Morakot-triggered landslides and the increase of earthquake frequency and the associated b-value change. We propose that the progressive removal of landslide materials from the steep mountain landscape by river sediment transport acts as an approximately constant increase of the stress rate with respect to pre-typhoon conditions, and that this in turn causes a step-wise increase in earthquake frequency. To test this hypothesis, we investigate the response of a rate-and-state fault to stress changes using a 2-D continuum elasto-dynamic model. Consistent with the results of Ader et al. (2013), our preliminary results show a step-like increase of earthquake frequency in response to a step-like decrease of the fault normal stress. We also investigate the sensitivity of the amplitude and time-scale of the earthquake frequency increase to the amplitude of the normal stress change and to
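A step change in the Gutenberg-Richter b-value, as described above, is conventionally estimated with the Aki (1965) maximum-likelihood formula. A minimal sketch; Utsu's bin-width correction is included as an assumption about how the catalog is binned:

```python
import math
import random

def b_value(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's bin-width correction.

    magnitudes: catalog magnitudes; only those >= completeness level mc count.
    dm: magnitude bin width of the catalog (use 0 for continuous magnitudes).
    """
    mags = [m for m in magnitudes if m >= mc]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with b = 1: magnitudes above the
# completeness level mc are exponential with mean mc + 1/(b*ln 10).
random.seed(0)
mags = [3.0 + random.expovariate(math.log(10)) for _ in range(20000)]
print(abs(b_value(mags, 3.0, dm=0.0) - 1.0) < 0.05)  # → True
```

Comparing the estimate on pre- and post-typhoon subcatalogs is one way to quantify the step change the abstract reports.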

  17. The epistemic and aleatory uncertainties of the ETAS-type models: an application to the Central Italy seismicity.

    PubMed

    Lombardi, A M

    2017-09-18

Stochastic models provide quantitative evaluations of the occurrence of earthquakes. A basic component of this type of model is the treatment of uncertainty in defining the main features of an intrinsically random process. Even if, at a very basic level, any attempt to distinguish between types of uncertainty is questionable, a usual way to deal with this topic is to separate epistemic uncertainty, due to lack of knowledge, from aleatory variability, due to randomness. In the present study this problem is addressed in the narrow context of short-term modeling of earthquakes and, specifically, of ETAS modeling. By means of an application of a specific version of the ETAS model to the seismicity of Central Italy, recently struck by a sequence with a main event of Mw 6.5, the aleatory and epistemic (parametric) uncertainties are separated and quantified. The main result of the paper is that the parametric uncertainty of the ETAS-type model adopted here is much lower than the aleatory variability in the process. This result points out two main aspects: an analyst has a good chance of calibrating an ETAS-type model, but may still describe and forecast earthquake occurrences with limited precision and accuracy.
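The conditional intensity of the ETAS model discussed above combines a constant background (aleatory) rate with magnitude-scaled Omori-Utsu aftershock decay. A minimal sketch; the parameter values are purely illustrative, not the fitted Central Italy values:

```python
import math

def etas_intensity(t, history, mu=0.2, k=0.02, alpha=1.2, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity lambda(t), in events per day.

    history: list of (t_i, M_i) for past events with t_i < t.
    mu is the background rate; each past event adds an Omori-Utsu term
    k * exp(alpha * (M_i - m0)) * (t - t_i + c)**(-p).
    """
    rate = mu
    for ti, mi in history:
        rate += k * math.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate

# The triggered rate just after an Mw 6.5 mainshock dwarfs the rate ten days on.
events = [(0.0, 6.5)]
print(etas_intensity(0.1, events) > 10 * etas_intensity(10.0, events))  # → True
```

The epistemic (parametric) side of the paper concerns how well `mu`, `k`, `alpha`, `c` and `p` can be constrained from a catalog; the aleatory side is the Poissonian randomness that remains even with those parameters fixed.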

  18. Constraining the source location of the 30 May 2015 (Mw 7.9) Bonin deep-focus earthquake using seismogram envelopes of high-frequency P waveforms: Occurrence of deep-focus earthquake at the bottom of a subducting slab

    NASA Astrophysics Data System (ADS)

    Takemura, Shunsuke; Maeda, Takuto; Furumura, Takashi; Obara, Kazushige

    2016-05-01

In this study, the source location of the 30 May 2015 (Mw 7.9) deep-focus Bonin earthquake was constrained using P wave seismograms recorded across Japan. We focus on the propagation characteristics of high-frequency P waves. Deep-focus intraslab earthquakes typically show spindle-shaped seismogram envelopes with peak delays of several seconds and subsequent long-duration coda waves; however, both the main shock and aftershock of the 2015 Bonin event exhibited pulse-like P wave propagation with high apparent velocities (~12.2 km/s). Such P wave propagation features were reproduced by finite-difference method simulations of seismic wave propagation in the case of a slab-bottom source. The pulse-like P wave seismogram envelopes observed for the 2015 Bonin earthquake show that its source was located at the bottom of the Pacific slab at a depth of ~680 km, rather than within its middle or upper regions.

  19. Insights into the origins of drumbeat earthquakes, periodic low frequency seismicity, and plug degradation from multi-instrument monitoring at Tungurahua volcano, Ecuador, April 2015

    NASA Astrophysics Data System (ADS)

    Bell, Andrew; Hernandez, Stephen; Gaunt, Elizabeth; Mothes, Patricia; Hidalgo, Silvana; Ruiz, Mario

    2016-04-01

    Highly-periodic repeating 'drumbeat' earthquakes have been reported from several andesitic and dacitic volcanoes. Physical models for the origin of drumbeat earthquakes incorporate, to different extents, the incremental upward movement of viscous magma. However, the roles played by stick-slip friction, brittle failure, and fluid flow, and the relations between drumbeat earthquakes and other low-frequency seismic signals, remain controversial. Here we report the results of analysis of three weeks of geophysical data recorded during an unrest episode at Tungurahua, an andesitic stratovolcano in Ecuador, during April 2015, by the monitoring network of the Instituto Geofisico of Ecuador. Combined seismic, geodetic, infrasound, and gas monitoring has provided new insights into the origins of periodic low-frequency seismic signals, conduit processes, and the nature of current unrest. Over the three-week period, the relative seismic amplitude (RSAM) correlated closely with short-term deformation rates and gas fluxes. However, the characteristics of the seismic signals, as recorded at a short-period station closest to the summit crater, changed considerably with time. Initially high RSAM and gas fluxes, with modest ash emissions, were associated with continuous and 'pulsed' tremor signals (amplitude modulated, with 30-100 second periods). As activity levels decreased over several days, tremor episodes became increasingly intermittent, and short-lived bursts of low-frequency earthquakes with quasiperiodic inter-event times were observed. Following one day of quiescence, the onset of pronounced low frequency drumbeat earthquakes signalled the resumption of elevated unrest, initially with mean inter-event times of 32 seconds, and later increasing to 74 seconds and longer, with periodicity progressively breaking down over several days. A reduction in RSAM was then followed by one week of persistent, quasiperiodic, longer-duration emergent low-frequency pulses, including

  20. Dynamic triggering potential of large earthquakes recorded by the EarthScope U.S. Transportable Array using a frequency domain detection method

    NASA Astrophysics Data System (ADS)

    Linville, L. M.; Pankow, K. L.; Kilb, D. L.; Velasco, A. A.; Hayward, C.

    2013-12-01

Because of the abundance of data from the EarthScope U.S. Transportable Array (TA), data paucity and station sampling bias in the US are no longer significant obstacles to understanding some of the physical parameters driving dynamic triggering. Initial efforts to determine locations of dynamic triggering in the US following large earthquakes (M ≥ 8.0) during the TA deployment relied on a time-domain detection algorithm which used an optimized short-term average to long-term average (STA/LTA) filter and resulted in an unmanageably large number of false positive detections. Specific site sensitivities and characteristic noise, when coupled with changes in detection rates, often resulted in misleading output. To navigate this problem, we develop a frequency-domain detection algorithm that first pre-whitens each seismogram and then computes a broadband frequency stack of the data using a three-hour time window beginning at the origin time of the mainshock. This method is successful because of the broadband nature of earthquake signals compared with the more band-limited high-frequency picks that clutter results from time-domain picking algorithms. Preferential band filtering of the frequency stack for individual events can further increase the accuracy and drive the detection threshold to below magnitude one, but at a general cost to detection levels across large-scale data sets. Of the 15 mainshocks studied, 12 show evidence of discrete spatial clusters of local earthquake activity occurring within the array during the mainshock coda. Most of this activity is in the Western US, with notable sequences in Northwest Wyoming, Western Texas, Southern New Mexico and Western Montana. Repeat stations (associated with 2 or more mainshocks) are generally rare, but when they do occur it is exclusively in California and Nevada. Notably, two of the most prolific regions of seismicity following a single mainshock occur following the 2009 magnitude 8.1 Samoa (Sep 29, 2009, 17:48:10) event, in areas with few
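The time-domain STA/LTA filter referred to above compares a short-window average of signal energy with a long-window average and triggers when the ratio spikes. A cumulative-sum sketch; the window lengths and synthetic trace are illustrative, not the authors' optimized settings:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic STA/LTA ratio on a 1-D seismogram (squared-amplitude energy).

    nsta, nlta: short- and long-window lengths in samples (nsta < nlta).
    Returns one ratio per sample from index nlta-1 onward, with both
    windows ending at the same sample.
    """
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # every short-window average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # every long-window average
    return sta[nlta - nsta:] / np.maximum(lta, 1e-12)

# A transient burst in the middle of noise should push the ratio up.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 2000)
x[1200:1260] += 10 * rng.normal(0.0, 1.0, 60)   # synthetic "event"
r = sta_lta(x, nsta=20, nlta=200)
print(r.max() > 5)  # → True
```

The false-positive problem the abstract describes follows directly from this construction: any site-specific noise burst raises the short-term energy just as a real event does.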

  1. Playing against nature: improving earthquake hazard mitigation

    NASA Astrophysics Data System (ADS)

    Stein, S. A.; Stein, J.

    2012-12-01

The great 2011 Tohoku earthquake dramatically demonstrated the need to improve earthquake and tsunami hazard assessment and mitigation policies. The earthquake was much larger than predicted by hazard models, and the resulting tsunami overtopped coastal defenses, causing more than 15,000 deaths and $210 billion damage. Hence if and how such defenses should be rebuilt is a challenging question, because the defenses fared poorly and building ones to withstand tsunamis as large as March's is too expensive. A similar issue arises along the Nankai Trough to the south, where new estimates warning of tsunamis 2-5 times higher than in previous models raise the question of what to do, given that the timescale on which such events may occur is unknown. Thus in the words of economist H. Hori, "What should we do in face of uncertainty? Some say we should spend our resources on present problems instead of wasting them on things whose results are uncertain. Others say we should prepare for future unknown disasters precisely because they are uncertain". Thus society needs strategies to mitigate earthquake and tsunami hazards that make economic and societal sense, given that our ability to assess these hazards is poor, as illustrated by highly destructive earthquakes that often occur in areas predicted by hazard maps to be relatively safe. Conceptually, we are playing a game against nature "of which we still don't know all the rules" (Lomnitz, 1989). Nature chooses tsunami heights or ground shaking, and society selects the strategy to minimize the total costs of damage plus mitigation costs. As in any game of chance, we maximize our expectation value by selecting the best strategy, given our limited ability to estimate the occurrence and effects of future events. We thus outline a framework to find the optimal level of mitigation by balancing its cost against the expected damages, recognizing the uncertainties in the hazard estimates. This framework illustrates the role of the

  3. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, the NDSHA, defines the hazard as the envelope ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In this way, the standard NDSHA maps permit to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's function for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of seismic sources used in the simulation, which is based on information from earthquake catalogue, seismogenic zones and seismogenic nodes. 
Since the largest part of the existing Italian catalogues is based on macroseismic intensities, a rough estimate of the error in peak values of ground motion can
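The envelope operation that produces a standard NDSHA map can be sketched in a few lines; the sites, scenarios, and peak values below are invented for illustration.

```python
import numpy as np

# Hypothetical peak ground velocities (cm/s) at four sites for three
# scenario earthquakes, as if extracted from synthetic seismograms.
pgv = np.array([
    [12.0, 3.5, 0.8],   # site A
    [ 4.1, 9.7, 1.2],   # site B
    [ 0.9, 2.2, 6.4],   # site C
    [ 1.5, 1.1, 2.0],   # site D
])

# NDSHA-style envelope: each site keeps the maximum over all scenarios,
# and the controlling scenario can be recorded alongside it.
hazard_map = pgv.max(axis=1)
controlling = pgv.argmax(axis=1)
```

The map value at each site is thus tied to the largest credible source affecting it, which is the straightforward accounting the abstract describes.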

  4. A review on remotely sensed land surface temperature anomaly as an earthquake precursor

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anshuman; Singh, Shaktiman; Sam, Lydia; Joshi, P. K.; Bhardwaj, Akanksha; Martín-Torres, F. Javier; Kumar, Rajesh

    2017-12-01

    The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes one of the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss the studies reporting the observed anomalies in the satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and the amplitudes of the anomalies vary widely, thus making it difficult to consider them as universal markers to issue earthquake warnings. Based on the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them as definite precursors.

  5. Temporal Activity Modulation of Deep Very Low Frequency Earthquakes in Shikoku, Southwest Japan

    NASA Astrophysics Data System (ADS)

    Baba, Satoru; Takeo, Akiko; Obara, Kazushige; Kato, Aitaro; Maeda, Takuto; Matsuzawa, Takanori

    2018-01-01

We investigated long-term changes in the activity of deep very low frequency earthquakes (VLFEs) in western Shikoku, in the southwestern part of the Nankai subduction zone in Japan, over 13 years using the matched-filter technique. VLFE activity is expected to be a proxy for interplate slip. In the Bungo channel, where long-term slow slip events (SSEs) occurred frequently, the cumulative number of detected VLFEs increased rapidly in 2010 and 2014, modulated by the long-term SSEs. In the neighboring inland region near the Bungo channel, the cumulative number increased steeply every 6 months; this stepwise change was accompanied by episodic tremor and slip. Deep VLFE activity in western Shikoku has been low since the latter half of 2014. This decade-scale quiescence may be attributed to a change in interplate coupling strength in the Nankai subduction zone.
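The matched-filter idea behind such detections can be shown on a toy trace: correlate a known template against continuous data and declare detections above a MAD-based threshold (8×MAD is a common choice; the template, noise level, and onsets below are all invented).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous record: noise plus two buried copies of a template.
template = np.sin(2 * np.pi * 2.0 * np.arange(0.0, 1.0, 0.01))   # 100 samples
data = 0.3 * rng.standard_normal(2000)
for onset in (400, 1500):
    data[onset:onset + template.size] += template

def matched_filter(data, template):
    """Normalized cross-correlation of the template with sliding windows."""
    t = template - template.mean()
    t /= np.linalg.norm(t)
    n = template.size
    cc = np.empty(data.size - n + 1)
    for i in range(cc.size):
        w = data[i:i + n] - data[i:i + n].mean()
        cc[i] = np.dot(t, w) / np.linalg.norm(w)
    return cc

cc = matched_filter(data, template)
mad = np.median(np.abs(cc - np.median(cc)))
detections = np.flatnonzero(cc > 8 * mad)   # 8 x MAD threshold
```

The correlation peaks sharply at the buried onsets, which is what makes the technique effective for small, repeating events like VLFEs.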

  6. Long-term changes in regular and low-frequency earthquake inter-event times near Parkfield, CA

    NASA Astrophysics Data System (ADS)

    Wu, C.; Shelly, D. R.; Johnson, P. A.; Gomberg, J. S.; Peng, Z.

    2012-12-01

The temporal evolution of earthquake inter-event time may provide important clues for the timing of future events and the underlying physical mechanisms of earthquake nucleation. In this study, we examine inter-event times from 12-yr catalogs of ~50,000 earthquakes and ~730,000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault. We focus on the long-term evolution of inter-event times after the 2003 Mw6.5 San Simeon and 2004 Mw6.0 Parkfield earthquakes. We find that inter-event times decrease by ~4 orders of magnitude after the Parkfield and San Simeon earthquakes and are followed by a long-term recovery with time scales of ~3 years and more than 8 years for earthquakes along and to the southwest of the San Andreas fault, respectively. The differing long-term recovery of the earthquake inter-event times is likely a manifestation of different aftershock recovery time scales that reflect the different tectonic loading rates in the two regions. We also observe a possible decrease of LFE inter-event times in some LFE families, followed by a recovery with time scales of ~4 months to several years. The drop in the recurrence time of LFEs after the Parkfield earthquake is likely caused by a combination of the dynamic and positive static stress induced by the Parkfield earthquake, and the long-term recovery in LFE recurrence time could be due to post-seismic relaxation or gradual recovery of the fault-zone material properties. Our ongoing work includes better constraining and understanding the physical mechanisms responsible for the observed long-term recovery in earthquake and LFE inter-event times.
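The headline observation, a collapse of inter-event times by orders of magnitude at a mainshock, is easy to reproduce on a synthetic catalog (rates, dates, and the heavy-tailed aftershock delays below are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalog: Poissonian background seismicity plus a burst of
# aftershocks right after a mainshock at t = 1000 days.
background = np.cumsum(rng.exponential(scale=10.0, size=100))   # ~1 event / 10 d
aftershocks = 1000.0 + rng.pareto(1.0, size=400) * 0.01         # heavy-tailed delays
times = np.sort(np.concatenate([background, aftershocks]))

dt = np.diff(times)                                   # inter-event times
before = np.median(dt[times[1:] < 1000.0])
after = np.median(dt[(times[1:] >= 1000.0) & (times[1:] < 1010.0)])

# Median inter-event time collapses by orders of magnitude at the mainshock;
# the slow recovery the authors track is not modeled in this sketch.
drop = np.log10(before / after)
```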

  7. Understanding dynamic friction through spontaneously evolving laboratory earthquakes

    PubMed Central

    Rubino, V.; Rosakis, A. J.; Lapusta, N.

    2017-01-01

Friction plays a key role in how ruptures unzip faults in the Earth’s crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in earthquake science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-earthquakes in the laboratory, enabled by our ultrahigh-speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has a complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with rate-and-state friction formulations supplemented with flash heating, but not with widely used slip-weakening friction laws. This study develops a new approach for measuring the local evolution of dynamic friction and has important implications for understanding earthquake hazard, since the laws governing the frictional resistance of faults are vital ingredients in physically based predictive models of the earthquake source. PMID:28660876
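The contrast the authors draw, strengthening at low slip rates followed by strong weakening at high rates, can be illustrated with a steady-state sketch of rate-and-state friction plus a crude flash-heating cutoff (all parameter values below are invented, not those of the experiments):

```python
import numpy as np

# Steady-state rate-and-state friction with a simple flash-heating cutoff.
mu0, a, b, V0 = 0.6, 0.02, 0.01, 1e-6   # a > b: velocity strengthening
Vw, fw = 0.1, 0.2                        # weakening velocity (m/s), weakened friction

def mu_ss(V):
    mu_rs = mu0 + (a - b) * np.log(V / V0)
    # Above Vw, friction decays toward fw roughly as 1/V (flash heating).
    return np.where(V < Vw, mu_rs, fw + (mu_rs - fw) * Vw / V)

V = np.logspace(-6, 1, 8)   # strengthening at low V, weakening at high V
```

A pure slip-weakening law, by contrast, would have no velocity dependence at all, which is the distinction the measurements resolve.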

  8. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to the teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green’s functions, which leads to a potential quantification
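MUSIC's use of the noise subspace can be demonstrated on a toy uniform linear array with two sources only 5° apart, closer than the beamforming resolution; the array geometry, SNR, and angles below are invented (the real application back-projects Pn waveforms):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)

M, d = 12, 0.5                          # sensors, spacing in wavelengths
angles = np.array([20.0, 25.0])         # true arrival angles (degrees)

def steering(theta_deg):
    k = 2.0 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

A = np.column_stack([steering(t) for t in angles])
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
noise = 0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200)))
X = A @ S + noise                       # 200 noisy snapshots

R = X @ X.conj().T / 200                # sample covariance
w, U = np.linalg.eigh(R)                # eigenvalues in ascending order
En = U[:, :-2]                          # noise subspace (two sources assumed)

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En.
grid = np.arange(0.0, 90.0, 0.25)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
idx, _ = find_peaks(P)
peaks = np.sort(grid[idx[np.argsort(P[idx])[-2:]]])   # two strongest maxima
```

The two pseudospectrum peaks land at the true angles even though the sources sit within one beamwidth of each other, which is the resolution gain the abstract reports.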

  9. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas Fault

    USGS Publications Warehouse

    Shelly, David R.

    2010-01-01

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between ~3 and ~6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  10. Building Loss Estimation for Earthquake Insurance Pricing

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Erdik, M.; Sesetyan, K.; Demircioglu, M. B.; Fahjan, Y.; Siyahi, B.

    2005-12-01

After the 1999 earthquakes in Turkey, several changes took place in the insurance sector. A compulsory earthquake insurance scheme was introduced by the government. The reinsurance companies increased their rates; some even suspended operations in the market. Most importantly, the insurance companies realized the importance of portfolio analysis in shaping their future market strategies. The paper describes an earthquake loss assessment methodology that can be used for insurance pricing and portfolio loss estimation, based on our working experience in the insurance market. The basic ingredients are probabilistic and deterministic regional site-dependent earthquake hazard, a regional building inventory (and/or portfolio), building vulnerabilities associated with typical construction systems in Turkey, and estimates of building replacement costs for different damage levels. Probable maximum and average annualized losses are estimated as results of the analysis. There is a two-level earthquake insurance system in Turkey, the effect of which is incorporated in the algorithm: the national compulsory earthquake insurance scheme and the private earthquake insurance system. To buy private insurance one has to be covered by the national system, which has limited coverage. As a demonstration of the methodology we look at the case of Istanbul and use its building inventory data instead of a portfolio. A state-of-the-art time-dependent earthquake hazard model that portrays the increased earthquake expectancy in Istanbul is used. Intensity- and spectral-displacement-based vulnerability relationships are incorporated in the analysis. In particular, we look at the uncertainty in the loss estimates that arises from the vulnerability relationships, and at the effect of the implemented repair cost ratios.
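The two headline metrics, average annualized loss and probable maximum loss, reduce to simple arithmetic over a discrete event set; the rates and losses below are invented:

```python
# Each scenario: (annual rate of occurrence, portfolio loss in M$).
events = [
    (0.0005, 800.0),   # rare, catastrophic
    (0.002,  250.0),
    (0.01,    60.0),
    (0.05,     8.0),   # frequent, minor
]

# Average annualized loss: rate-weighted sum of scenario losses.
aal = sum(rate * loss for rate, loss in events)

# Loss at a 475-year return period: the largest loss whose annual exceedance
# rate reaches 1/475 when scenarios are accumulated from the worst down.
target = 1.0 / 475.0
cum, pml = 0.0, 0.0
for rate, loss in sorted(events, key=lambda e: e[1], reverse=True):
    cum += rate
    if cum >= target:
        pml = loss
        break
```

Real portfolio models add vulnerability curves and spatial correlation on top of this skeleton, but the exceedance-rate bookkeeping is the same.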

  11. Rupture evolution of the 2006 Java tsunami earthquake and the possible role of splay faults

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Bassett, Dan; Jiang, Junle; Shearer, Peter M.; Ji, Chen

    2017-11-01

    The 2006 Mw 7.8 Java earthquake was a tsunami earthquake, exhibiting frequency-dependent seismic radiation along strike. High-frequency global back-projection results suggest two distinct rupture stages. The first stage lasted ∼65 s with a rupture speed of ∼1.2 km/s, while the second stage lasted from ∼65 to 150 s with a rupture speed of ∼2.7 km/s. High-frequency radiators resolved with back-projection during the second stage spatially correlate with splay fault traces mapped from residual free-air gravity anomalies. These splay faults also colocate with a major tsunami source associated with the earthquake inferred from tsunami first-crest back-propagation simulation. These correlations suggest that the splay faults may have been reactivated during the Java earthquake, as has been proposed for other tsunamigenic earthquakes, such as the 1944 Mw 8.1 Tonankai earthquake in the Nankai Trough.

  12. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several volcanoes of the Katmai cluster have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore-pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  13. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Ide, S.

    2007-12-01

Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we begin to observe the ground motion. This is closely related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS), which calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes, whose source durations are longer than TW, the values of τpmax have an upper limit that depends on TW. On the other hand, the values for smaller earthquakes have a lower limit that is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
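A minimal version of the recursive predominant-period estimator (after Nakamura's formulation) makes the parameter concrete; the smoothing constant and the test signal below are invented:

```python
import numpy as np

def tau_p(x, dt, alpha=0.99):
    """Recursive predominant-period estimate from a trace x sampled at dt."""
    v = np.gradient(x, dt)            # time derivative of the trace
    X = D = 1e-12                     # smoothed x^2 and (dx/dt)^2
    out = []
    for xi, vi in zip(x, v):
        X = alpha * X + xi ** 2
        D = alpha * D + vi ** 2
        out.append(2.0 * np.pi * np.sqrt(X / D))
    return np.array(out)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.sin(2.0 * np.pi * 1.0 * t)     # 1 Hz test signal, period 1 s
# tau_p converges toward the signal period (~1.0 s)
```

For a pure sinusoid the ratio of smoothed amplitude to smoothed derivative recovers the period exactly, which is why τp tracks the dominant frequency of the early P wave.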

  14. Seismotectonic framework of the 2010 February 27 Mw 8.8 Maule, Chile earthquake sequence

    USGS Publications Warehouse

    Hayes, Gavin P.; Bergman, Eric; Johnson, Kendra J.; Benz, Harley M.; Brown, Lucy; Meltzer, Anne S.

    2013-01-01

    After the 2010 Mw 8.8 Maule earthquake, an international collaboration involving teams and instruments from Chile, the US, the UK, France and Germany established the International Maule Aftershock Deployment temporary network over the source region of the event to facilitate detailed, open-access studies of the aftershock sequence. Using data from the first 9-months of this deployment, we have analyzed the detailed spatial distribution of over 2500 well-recorded aftershocks. All earthquakes have been relocated using a hypocentral decomposition algorithm to study the details of and uncertainties in both their relative and absolute locations. We have computed regional moment tensor solutions for the largest of these events to produce a catalogue of 465 mechanisms, and have used all of these data to study the spatial distribution of the aftershock sequence with respect to the Chilean megathrust. We refine models of co-seismic slip distribution of the Maule earthquake, and show how small changes in fault geometries assumed in teleseismic finite fault modelling significantly improve fits to regional GPS data, implying that the accuracy of rapid teleseismic fault models can be substantially improved by consideration of existing fault geometry model databases. We interpret all of these data in an integrated seismotectonic framework for the Maule earthquake rupture and its aftershock sequence, and discuss the relationships between co-seismic rupture and aftershock distributions. While the majority of aftershocks are interplate thrust events located away from regions of maximum co-seismic slip, interesting clusters of aftershocks are identified in the lower plate at both ends of the main shock rupture, implying internal deformation of the slab in response to large slip on the plate boundary interface. We also perform Coulomb stress transfer calculations to compare aftershock locations and mechanisms to static stress changes following the Maule rupture. Without the

  15. CISN ShakeAlert: Accounting for site amplification effects and quantifying time and spatial dependence of uncertainty estimates in the Virtual Seismologist earthquake early warning algorithm

    NASA Astrophysics Data System (ADS)

    Caprio, M.; Cua, G. B.; Wiemer, S.; Fischer, M.; Heaton, T. H.; CISN EEW Team

    2011-12-01

The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system being tested in real-time in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the ongoing earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network (SCSN) since July 2008, and at the Northern California Seismic Network (NCSN) since February 2009. With the aim of improving the convergence of real-time VS magnitude estimates to network magnitudes, we evaluate various empirical and Vs30-based approaches to accounting for site amplification. Empirical station corrections for SCSN stations are derived from M>3.0 events from 2005 through 2009. We evaluate the performance of the various approaches using an independent 2010 dataset. In addition, we analyze real-time VS performance from 2008 to the present to quantify the time and spatial dependence of VS uncertainty estimates. We also summarize real-time VS performance for significant 2011 events in California. Improved magnitude and uncertainty estimates potentially increase the utility of EEW information for end-users, particularly those intending to automate damage-mitigating actions based on real-time information.
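The Bayesian update at the heart of the VS approach can be sketched as a gridded posterior: a Gaussian likelihood from amplitude data multiplied by a Gutenberg-Richter prior. All numbers below are invented, not VS calibration values.

```python
import numpy as np

m = np.linspace(2.0, 8.0, 601)          # magnitude grid
m_obs, sigma = 5.0, 0.4                 # amplitude-based estimate and its spread
b = 1.0                                 # regional G-R b-value
beta = b * np.log(10)

likelihood = np.exp(-0.5 * ((m - m_obs) / sigma) ** 2)
prior = np.exp(-beta * m)               # smaller magnitudes are a priori likelier
posterior = likelihood * prior
posterior /= posterior.sum()

m_map = m[np.argmax(posterior)]         # pulled below m_obs by the prior
```

For this Gaussian-times-exponential case the posterior mode is m_obs − βσ², showing analytically how the prior damps early overestimates.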

  16. Correlation between elastic energy density and deep earthquakes distribution

    NASA Astrophysics Data System (ADS)

    Gunawardana, P. M.; Morra, G.

    2017-05-01

The mechanism at the origin of earthquakes below 30 km remains elusive, as these events cannot be explained by brittle frictional processes. In this work we focus on the total global distribution of earthquake frequency vs. depth, from ∼50 km down to 670 km depth. We develop a numerical model of self-driven subduction by solving the non-homogeneous Stokes equation using the particle-in-cell method in combination with a conservative finite-difference scheme, here solved for the first time using Python and NumPy only. We show that most of the elastic energy is stored in the slab core and that it is strongly correlated with the earthquake frequency-depth distribution for a wide range of lithosphere and lithosphere-core viscosities. Based on our results, we suggest that 1) slab bending at the bottom of the upper mantle causes the peak of the earthquake frequency-depth distribution observed at mantle transition depths; and 2) the presence of a highly viscous stiff core inside the lithosphere generates an elastic energy distribution that better fits the exponential decay observed at intermediate depth.

  17. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and the problem of missing or misinterpreted events causes additional difficulties. Given these shortcomings, estimates of long-term recurrence intervals are usually unstable unless additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times, which assumes a stationary Poisson process.
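The Brownian Passage Time model is the inverse-Gaussian distribution, so a conditional recurrence probability can be sketched directly with scipy; the mean interval T and aperiodicity α below are invented (in the study, α would be tied to the regional b-value):

```python
import numpy as np
from scipy.stats import invgauss

T, alpha = 150.0, 0.5                   # mean interval (yr), aperiodicity

# scipy parameterization: invgauss(mu, scale=s) has mean mu*s and shape s,
# so mu = alpha**2 and s = T/alpha**2 give mean T and std/mean = alpha.
lam = T / alpha ** 2
rv = invgauss(alpha ** 2, scale=lam)

# Conditional probability of an event in the next 30 years,
# given 100 years of quiescence since the last one:
p_cond = (rv.cdf(130.0) - rv.cdf(100.0)) / rv.sf(100.0)
```

A smaller α (tighter, more periodic recurrence) sharpens such conditional probabilities, which is why constraining the aperiodicity matters.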

  18. A Model for Low-Frequency Earthquake Slip in Cascadia

    NASA Astrophysics Data System (ADS)

    Chestler, S.; Creager, K.

    2017-12-01

Low-frequency earthquakes (LFEs) are commonly used to identify when and where slow slip occurred, especially for slow slip events that are too small to be observed geodetically. Yet our understanding of how slip occurs within an LFE family patch (a patch on the plate interface where LFEs repeat) is limited. How much slip occurs per LFE, and over what area? Do all LFEs within an LFE family rupture the exact same spot? To answer these questions, we use a catalog of 39,966 LFEs, sorted into 45 LFE families, beneath the Olympic Peninsula, WA. LFEs were detected and located using data from approximately 100 three-component stations from the Array of Arrays experiment. We compare the LFE family patch area to the area within the patch that slips through LFEs during Cascadia Episodic Tremor and Slip (ETS) events. Patch area is calculated from relative LFE locations, solved for using the double-difference method. Slip area is calculated from the characteristic moment (the mean of the exponential moment-frequency distribution) and the number of LFEs for each family, together with geodetically measured ETS slip. We find that 0.5-5% of the area within an LFE family patch slips through LFEs; the rest must deform in some other manner (e.g., ductile deformation). We also explore LFE slip patterns throughout the entire slow slip zone. Is LFE slip uniform? Does LFE slip account for all geodetically observed slow slip? Double-difference relocations reveal that LFE families are 2 km patches in which LFEs are clustered close together. Additionally, there are clusters of LFE families with diameters of 4-15 km, with gaps containing no observable, repeating LFEs between families within clusters and between the clusters themselves. Based on these observations, we present a model in which LFE slip is heterogeneous on multiple spatial scales. Clusters of LFE families may represent patches with higher strength than the surrounding areas. Finally, we find that LFE slip accounts for only a small fraction ( 0
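The slip-budget arithmetic behind the 0.5-5% figure can be reproduced with invented but plausible numbers: the summed LFE moment, spread as the geodetic ETS slip over some area, fixes that area.

```python
# Back-of-envelope LFE slip budget (all values invented for illustration).
import math

mu = 3.0e10            # rigidity (Pa)
m0_char = 1.0e11       # characteristic LFE moment (N m), roughly Mw 1.3
n_lfe = 500            # LFEs in the family during one ETS event
d_ets = 0.02           # geodetically measured ETS slip (m)
r_patch = 1000.0       # LFE family patch radius (m)

# sum(M0) = mu * A * d_ets  =>  A = sum(M0) / (mu * d_ets)
area_slipped = n_lfe * m0_char / (mu * d_ets)
patch_area = math.pi * r_patch ** 2
fraction = area_slipped / patch_area   # share of the patch slipping seismically
```

With these numbers the seismically slipping fraction comes out at a few percent, inside the range the study reports.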

  19. Hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis method for mid-frequency analysis of built-up systems with epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan

    2017-09-01

To address the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by focal elements and basic probability assignments (BPAs) and handled with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency responses of interest, such as the ensemble average of the energy response and the cross-spectrum response, are calculated analytically by using the conventional hybrid FE/SEA method. Inspired by probability theory, intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of the mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burden of the extreme-value analysis, the sub-interval perturbation technique based on a first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
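The sub-interval idea, linearize the response on pieces of the parameter interval rather than over the whole of it, can be shown on a scalar stand-in response (the function and interval below are invented, not an FE/SEA model):

```python
import numpy as np

def response(k):
    return 1.0 / (1.0 + k) + np.sin(k)          # stand-in for a system response

def taylor_bounds(k_lo, k_hi):
    """First-order Taylor bounds of response() over [k_lo, k_hi]."""
    kc, dk = 0.5 * (k_lo + k_hi), 0.5 * (k_hi - k_lo)
    h = 1e-6
    grad = (response(kc + h) - response(kc - h)) / (2 * h)
    yc = response(kc)
    return yc - abs(grad) * dk, yc + abs(grad) * dk

wide = taylor_bounds(0.0, 2.0)                  # one linearization, wide interval
subs = [taylor_bounds(a, a + 0.5) for a in np.arange(0.0, 2.0, 0.5)]
sub_lo = min(b[0] for b in subs)                # envelope over sub-intervals
sub_hi = max(b[1] for b in subs)
```

For this function the sub-interval envelope both encloses the true response range and is tighter than the single wide-interval linearization, which is exactly the trade the technique exploits over each focal element.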

  20. NGA West 2 | Pacific Earthquake Engineering Research Center

    Science.gov Websites

    A multi-year research program to improve Next Generation Attenuation (NGA) models for active tectonic regions. Topics in earthquake engineering include modeling of directivity and directionality; verification of NGA-West models; epistemic uncertainty; and evaluation of soil amplification factors in NGA models versus NEHRP site factors.

  1. Effect of Fault Parameter Uncertainties on PSHA explored by Monte Carlo Simulations: A case study for southern Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Akinci, A.; Pace, B.

    2017-12-01

In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at the 475-year return period in the southern Apennines of Italy. Uncertainty and parametric sensitivity analyses are presented to quantify the impact of the several fault parameters on ground motion predictions for 10% probability of exceedance in 50 years. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing an entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is represented by a truncated normal distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on randomly balanced sampling of the logic tree, is used to capture the uncertainty in the seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in the different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to the variability of each logic tree branch. The branches of the logic tree analyzed through the Monte Carlo approach are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of
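The core sampling step, draw a fault parameter from a truncated normal and propagate it to the characteristic-event rate, can be sketched for a single fault; every numerical value below is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

mu_rig = 3.0e10                 # rigidity (Pa)
area = 30e3 * 12e3              # fault plane: 30 km x 12 km (m^2)
m_char = 6.7
m0_char = 10 ** (1.5 * m_char + 9.05)   # Hanks-Kanamori scalar moment (N m)

def trunc_normal(mean, sd, n, nsig=2.0):
    """Sample a normal distribution truncated at +/- nsig standard deviations."""
    x = rng.normal(mean, sd, size=4 * n)
    return x[np.abs(x - mean) <= nsig * sd][:n]

slip = trunc_normal(1.0e-3, 0.3e-3, 200)   # slip rate: 1.0 +/- 0.3 mm/yr (m/yr)
rate = mu_rig * area * slip / m0_char      # annual rate of characteristic events

recurrence = 1.0 / rate.mean()             # on the order of 1000 yr here
cv = rate.std() / rate.mean()              # spread inherited from the slip rate
```

In the full study this propagation is repeated per logic-tree branch (magnitude, length, width, dip, slip rate), and the 200 hazard maps per parameter give the confidence intervals.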

  2. Comparison of hypocentre parameters of earthquakes in the Aegean region

    NASA Astrophysics Data System (ADS)

    Özel, Nurcan M.; Shapira, Avi; Harris, James

    2007-06-01

The Aegean Sea is one of the more seismically active areas in the Euro-Mediterranean region. Seismic activity in the Aegean Sea is monitored by a number of local agencies that contribute their data to the International Seismological Centre (ISC). Consequently, the ISC Bulletin may serve as a reliable reference for assessing the capabilities of local agencies to monitor moderate and low magnitude earthquakes. We have compared bulletins of the Kandilli Observatory and Earthquake Research Institute (KOERI) and the ISC for the period 1976-2003, which comprises the most complete data sets for both KOERI and ISC. The selected study area is the east Aegean Sea and west Turkey, bounded by latitudes 35-41°N and longitudes 24-29°E. The total number of events known to have occurred in this area during 1976-2003 is about 41,638. Seventy-two percent of those earthquakes were located by ISC and 75% were located by KOERI. As expected, epicentre location discrepancies between ISC and KOERI solutions grow larger as we move away from the KOERI seismic network. Of the 22,066 earthquakes located by both ISC and KOERI, only 4% show a difference of 50 km or more; about 140 earthquakes show a discrepancy of more than 100 km. Focal depth determinations differ mainly in the subduction zone along the Hellenic arc. Less than 2% of the events differ in focal depth by more than 25 km, yet the location solutions of about 30 events differ by more than 100 km. Almost a quarter of the events listed in the ISC Bulletin are missed by KOERI, most of them occurring off the coast of Turkey in the east Aegean. Based on the frequency-magnitude distributions, the KOERI Bulletin is complete for earthquakes with duration magnitudes Md > 2.7 (both located and assigned magnitudes), whereas the threshold magnitude for events with location and magnitude determinations by ISC is mb > 4.0. KOERI magnitudes seem to be poorly correlated with ISC magnitudes, suggesting relatively high uncertainty in the
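Completeness statements like "complete for Md > 2.7" rest on Gutenberg-Richter statistics; a maximum-likelihood b-value estimate in the Aki-Utsu form can be sketched on a synthetic catalogue (the b-value, cutoff, and binning below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

b_true, m_edge, dm = 1.0, 2.65, 0.1      # bin edge just below Md 2.7
beta = b_true * np.log(10)
mags = m_edge + rng.exponential(1.0 / beta, size=5000)   # G-R magnitudes
mags = np.round(mags / dm) * dm          # bin to 0.1 magnitude units

# Aki-Utsu maximum-likelihood estimate with the half-bin correction.
m_min = mags.min()                       # lowest occupied bin centre
b_est = np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))
```

Below the completeness magnitude the observed counts fall away from this exponential, which is how the Md 2.7 and mb 4.0 thresholds in the comparison are read off.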

  3. Source Spectra and Site Response for Two Indonesian Earthquakes: the Tasikmalaya and Kerinci Events of 2009

    NASA Astrophysics Data System (ADS)

    Gunawan, I.; Cummins, P. R.; Ghasemi, H.; Suhardjono, S.

    2012-12-01

    Indonesia is very prone to natural disasters, especially earthquakes, due to its location in a tectonically active region. In September-October 2009 alone, intraslab and crustal earthquakes caused the deaths of thousands of people, severe infrastructure destruction and considerable economic loss. Thus, both intraslab and crustal earthquakes are important sources of earthquake hazard in Indonesia. Analysis of response spectra for these intraslab and crustal earthquakes is needed to yield more detail about earthquake properties. For both types of earthquakes, we have analysed available Indonesian seismic waveform data to constrain source and path parameters - i.e., low frequency spectral level, Q, and corner frequency - at reference stations that appear to be little influenced by site response. We have considered these analyses for the main shocks as well as several aftershocks. We obtain corner frequencies that are reasonably consistent with the constant stress drop hypothesis. We then use these results to extract information about site response from other stations in the Indonesian strong motion network that appear to be strongly affected by site response. Such site response data, as well as earthquake source parameters, are important for assessing earthquake hazard in Indonesia.
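Source and path parameters of this kind are commonly estimated by fitting an omega-squared (Brune-type) source model with a whole-path attenuation term to an observed amplitude spectrum. A minimal sketch in Python on synthetic data; the parameter values, noise level, and fitting choices here are illustrative assumptions, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, t_star):
    """Omega-squared source spectrum times whole-path attenuation.
    t_star = travel_time / Q lumps the path attenuation into one term."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

# Synthetic "observed" spectrum with multiplicative noise (illustrative values)
f = np.logspace(-1, 1.3, 100)                        # ~0.1 to ~20 Hz
rng = np.random.default_rng(0)
obs = brune_spectrum(f, 1e-3, 0.8, 0.02) * rng.lognormal(sigma=0.1, size=f.size)

# Fit in log space so the corner is not swamped by the low-frequency plateau
log_model = lambda x, o0, fc, ts: np.log(brune_spectrum(x, o0, fc, ts))
(omega0_est, fc_est, t_star_est), _ = curve_fit(
    log_model, f, np.log(obs), p0=[1e-3, 1.0, 0.05])
```

The fitted corner frequency feeds the constant-stress-drop check, and dividing a spectrum recorded at another station by the fitted source-path prediction isolates that station's site response.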

  4. A spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an operational earthquake forecast

    USGS Publications Warehouse

    Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximilian J.

    2017-01-01

    We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as

  5. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  6. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
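The data-frequency effect can be made concrete with a toy calibration: fit a single rate constant to log-linear data sampled at two different intervals over the same window, and compare the widths of the resulting confidence intervals. A sketch under stated assumptions; the decay model, noise level, and sampling intervals are invented for illustration, not taken from the plant data:

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 0.05   # assumed known measurement noise (log units)

def fit_rate(dt_hours, t_end=24.0, k_true=0.2):
    """Estimate the slope k of log y = -k*t from samples every dt_hours.
    Returns (k_hat, 95% CI half-width), using the known noise level."""
    t = np.arange(0.0, t_end, dt_hours)
    log_y = -k_true * t + rng.normal(0.0, SIGMA, t.size)
    k_hat = -np.polyfit(t, log_y, 1)[0]
    sxx = np.sum((t - t.mean()) ** 2)          # information in the design
    return k_hat, 1.96 * SIGMA / np.sqrt(sxx)

k_hi, ci_hi = fit_rate(0.25)   # 15-minute data
k_lo, ci_lo = fit_rate(4.0)    # 4-hourly data
# ci_lo is several times wider than ci_hi: sparser data, looser parameters
```

Note that these intervals assume independent errors; as the abstract's autocorrelation remark implies, correlated high-frequency data have a smaller effective sample size, so i.i.d.-based intervals on them are optimistic.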

  7. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  8. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
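Run-up exceedance rates of the kind produced by a PTHA are usually converted to probabilities over an exposure window by assuming Poisson occurrence. A minimal sketch; the rate and window below are illustrative numbers, not results from this assessment:

```python
import math

def exceedance_probability(rate_per_year, window_years):
    """P(at least one exceedance in the window) under the Poisson assumption."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# A run-up level exceeded at a rate of 1/500 per year, over a 50-year exposure:
print(round(exceedance_probability(1 / 500, 50), 3))  # 0.095
```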

  9. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
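The "missing events" argument follows from extrapolating a Gutenberg-Richter distribution, fit to the detected catalog, down to smaller magnitudes. A sketch with invented counts; the a-value, b-value, and magnitude cutoffs below are placeholders, not the study's fitted values:

```python
import math

def gr_count(a, b, m):
    """Cumulative Gutenberg-Richter law: N(>= m) = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

# Suppose 30,000 events detected at M >= 1.5 fix the a-value for b = 1:
b = 1.0
n_detected = 30_000
a = math.log10(n_detected) + b * 1.5

# Extrapolated count down to M 0.5 if the G-R trend continued unbroken:
expected = gr_count(a, b, 0.5)     # 10x the detected count for b = 1
deficit = expected - n_detected    # events "missing" below the roll-off
```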

  10. Evidence for tidal triggering on the earthquakes of the Hellenic Arc, Greece

    NASA Astrophysics Data System (ADS)

    Vergos, G.; Arabelos, D. N.; Contadakis, M. E.

    2015-12-01

    In this paper we investigate evidence for tidal triggering of earthquakes in the seismically active area of the Hellenic Arc using the Hist(ogram)Cum(mulation) method. We analyze the series of earthquakes that occurred in the area confined by longitudes 22° and 28°E and latitudes 34° and 36°N in the period from 1964 to 2012. In this period, 16,137 shallow and intermediate-depth earthquakes with ML up to 6.0 and 1,482 deep earthquakes with ML up to 6.2 occurred. The results of this analysis indicate that the monthly variation of the frequencies of earthquake occurrence is in accordance with the period of the lunar monthly tidal variations, and that the corresponding daily variations of the frequencies of earthquake occurrence accord with the diurnal luni-solar (K1) and semidiurnal solar (S2) tidal variations. These results favor a tidal triggering process on earthquakes when the stress in the focal area is near the critical level.

  11. Investigation of the relationship between ionospheric foF2 and earthquakes

    NASA Astrophysics Data System (ADS)

    Karaboga, Tuba; Canyilmaz, Murat; Ozcan, Osman

    2018-04-01

    Variations of the ionospheric F2 region critical frequency (foF2) before earthquakes in the Japan area have been investigated statistically for the 1980-2008 period. Ionosonde data were taken from the Kokubunji station, which lies within the earthquake preparation zone for all earthquakes considered. Standard deviation and interquartile range methods were applied to the foF2 data. Anomalous variations in foF2 are observed before earthquakes. These variations can be regarded as ionospheric precursors and may be useful for earthquake prediction.
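An interquartile-range criterion of the kind mentioned flags foF2 values that fall outside the quartile fences. A minimal sketch; the sample values and the 1.5 fence factor are illustrative assumptions, not the study's settings:

```python
import numpy as np

def iqr_anomaly_mask(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

# Hypothetical daily foF2 values in MHz; the 8.9 stands out as anomalous
fof2 = np.array([6.1, 6.3, 6.0, 6.2, 6.4, 6.1, 8.9, 6.2])
mask = iqr_anomaly_mask(fof2)
print(mask.tolist())  # only the 8.9 reading is flagged
```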

  12. Testing and comparison of three frequency-based magnitude estimating parameters for earthquake early warning based events in the Yunnan region, China in 2014

    NASA Astrophysics Data System (ADS)

    Zhang, Jianjing; Li, Hongjie

    2018-06-01

    To mitigate potential seismic disasters in the Yunnan region, China, building up suitable magnitude estimation scaling laws for an earthquake early warning system (EEWS) is in high demand. In this paper, the records from the main and after-shocks of the Yingjiang earthquake (MW 5.9), the Ludian earthquake (MW 6.2) and the Jinggu earthquake (MW 6.1), which occurred in Yunnan in 2014, were used to develop three estimators of earthquake magnitude: the maximum of the predominant period (τp^max), the characteristic period (τc) and the log-average period (τlog). The correlations between these three frequency-based parameters and catalog magnitudes were developed, compared and evaluated against previous studies. The amplitude and period of seismic waves might be amplified in the Ludian mountain-canyon area by multiple reflections and resonance, leading to excessive values of the calculated parameters, which are consistent with Sichuan's scaling. As a result, τlog was best correlated with magnitude and τc had the highest slope of the regression equation, while τp^max performed worst, with large scatter and less sensitivity to changes in magnitude. No evident saturation occurred for the M 6.1 and M 6.2 events in this study. Even though τc and τlog performed similarly and both reflect the size of the earthquake well, τlog has slightly fewer prediction errors for small earthquakes (M ≤ 4.5), which was also observed by previous research. Our work offers an insight into the feasibility of an EEWS in Yunnan, China, and shows that it is necessary to build up an appropriate scaling law suitable for the warning region.
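Of the three estimators, τc has a particularly compact definition (in Kanamori's formulation): 2π times the square root of the ratio of integrated squared displacement to integrated squared velocity over the initial P-wave window. A sketch with a synthetic signal; the window length and sampling rate are arbitrary choices for illustration:

```python
import numpy as np

def tau_c(displacement, dt):
    """Characteristic period: 2*pi*sqrt(int u^2 dt / int v^2 dt),
    computed over the initial P-wave window (dt cancels in the ratio)."""
    velocity = np.gradient(displacement, dt)
    ratio = np.sum(displacement ** 2) / np.sum(velocity ** 2)
    return 2.0 * np.pi * np.sqrt(ratio)

# Sanity check: a pure sinusoid of period T should return tau_c ~= T
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = np.sin(2.0 * np.pi * t / 0.5)          # T = 0.5 s
print(round(tau_c(u, dt), 2))              # 0.5
```

For a mixed-frequency signal, τc returns an energy-weighted average period, which is what makes it sensitive to event size.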

  13. Source discrimination between Mining blasts and Earthquakes in Tianshan orogenic belt, NW China

    NASA Astrophysics Data System (ADS)

    Tang, L.; Zhang, M.; Wen, L.

    2017-12-01

    In recent years, a large number of quarry blasts have been detonated in the Tianshan Mountains of China. It is necessary to discriminate those non-earthquake records from the earthquake catalogs in order to determine the real seismicity of the region. In this study, we have investigated spectral ratios and amplitude ratios as discriminants for regional seismic-event identification, using explosions and earthquakes recorded by the Xinjiang Seismic Network (XJSN) of China. We used a training data set of 1071 earthquakes and 2881 non-earthquakes recorded by the XJSN between 2009 and 2016, with both types of events in a comparable local magnitude range (1.5 to 2.9). The non-earthquake and earthquake groups were well separated by Pg/Sg amplitude ratios, with the separation increasing with frequency when averaged over three stations. The 8- to 15-Hz Pg/Sg ratio proved to be the most precise and accurate discriminant, working for more than 90% of the events. In contrast, the P spectral ratio performed considerably worse, with a significant overlap (about 60%) between the earthquake and explosion populations. The comparison shows that amplitude ratios between compressional and shear waves discriminate better than low-frequency to high-frequency spectral ratios for individual phases. Neither discriminant alone was able to completely separate the two populations of events; however, a joint discrimination scheme employing simple majority voting reduces misclassifications to 10%. In the study region, 44% of the examined seismic events were determined to be non-earthquakes and 55% to be earthquakes. The earthquakes occurring on land are related to small faults, while the blasts are concentrated in large quarries.
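The core of such a discriminant is a threshold on the station-averaged log Pg/Sg amplitude ratio in a high-frequency band (explosions are relatively P-rich, earthquakes S-rich). A schematic sketch; the amplitudes and the zero threshold are placeholders, not the paper's calibrated values:

```python
import numpy as np

def classify(pg_amps, sg_amps, threshold=0.0):
    """Label an event by the station-averaged log10(Pg/Sg) in a
    high-frequency band (e.g. 8-15 Hz); threshold is a placeholder."""
    ratios = np.log10(np.asarray(pg_amps) / np.asarray(sg_amps))
    return "explosion" if ratios.mean() > threshold else "earthquake"

print(classify([2.0, 1.8, 2.4], [1.0, 0.9, 1.1]))  # P-rich -> explosion
print(classify([0.5, 0.6], [1.2, 1.4]))            # S-rich -> earthquake
```

The paper's majority-vote scheme amounts to running several such classifiers (different bands or discriminants) and taking the most common label.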

  14. Spatiotemporal variations in the b-value of earthquake magnitude-frequency distributions: Classification and causes

    NASA Astrophysics Data System (ADS)

    El-Isa, Z. H.; Eaton, David W.

    2014-03-01

    Interpretation of the b-value of earthquake frequency-magnitude distributions has received considerable attention in recent decades. This paper provides a comprehensive review of previous investigations of spatial and temporal variations in b-value, including their classification and possible causes. Based on least-squares regression of seismicity data compiled from the NEIC, IRIS and ISC catalogs, we find an average value of 1.02 ± 0.03 for the whole Earth and its two hemispheres, consistent with the general view that in seismically active regions the long-term average value is close to unity. Nevertheless, wide-ranging b-variations (0.3 ≤ b ≤ 2.5) have been reported in the literature. This variability has been interpreted to arise from one or more of the following factors: prevailing stress state, crustal heterogeneity, focal depth, pore pressure, geothermal gradient, tectonic setting, petrological/environmental/geophysical characteristics, clustering of events, incomplete catalog data, and/or method of calculation. Excluding the latter, all of these factors appear to be linked, directly or indirectly, with the effective state of stress. Although time-dependent changes in b-value are well documented, conflicting observations reveal either a precursory increase or decrease in b value before major earthquakes. Our compilation of published analyses suggests that statistically significant b-variations occur globally on various timescales, including annual, monthly and perhaps diurnal. Taken together, our review suggests that b-variations are most plausibly linked with changes in effective stress.
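Besides the least-squares regression used here, b is commonly estimated by maximum likelihood (Aki 1965, with Utsu's correction for binned magnitudes), which the review lists among the method-of-calculation factors. A sketch validated on a synthetic exponential catalog:

```python
import math
import random

def b_value_ml(mags, m_c, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value for events with M >= m_c;
    dm is the magnitude bin width (0 for continuous magnitudes)."""
    m = [x for x in mags if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - (m_c - dm / 2.0))

# Synthetic catalog drawn with a true b of 1.0 above M 2.0
random.seed(1)
beta = math.log(10.0)                        # beta = b * ln(10)
mags = [2.0 + random.expovariate(beta) for _ in range(5000)]
b_hat = b_value_ml(mags, 2.0)                # close to 1.0
```

The estimator's standard error scales roughly as b/sqrt(n), which is why short time windows and small subregions produce much of the apparent b-variability the review discusses.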

  15. Earthquake Scaling Relations

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Boettcher, M.; Richardson, E.

    2002-12-01

    Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of earthquakes. This presentation will review some recent progress in developing scaling relations for several key aspects of earthquake behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment Mmin near 10^9 Nm and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^(1/6) up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage model in which apparent fracture energy scales with the stress intensity factor at the crack tip. Under the assumption of constant stress drop, this model implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of earthquake size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, AE, depends on the thermal state of the fault but is otherwise independent of the fault's average slip rate; i.e., AE ~ AT^β, where AT is the area above a reference isotherm. The data are consistent with β = 1/2 below an upper cutoff moment Mmax that increases with AT and yield the interesting scaling relation Amax ~ AT^(1/2). Taken together, the OTF

  16. Infectious disease frequency among evacuees at shelters after the great eastern Japan earthquake and tsunami: a retrospective study.

    PubMed

    Kawano, Takahisa; Hasegawa, Kohei; Watase, Hiroko; Morita, Hiroshi; Yamamura, Osamu

    2014-02-01

    After the Great Eastern Japan Earthquake and tsunami, the World Health Organization cautioned that evacuees at shelters would be at increased risk of infectious disease transmission; however, the frequency of infectious disease in this population was not known. We reviewed medical charts of evacuees who visited medical clinics at 6 shelters from March 19 to April 8, 2011. Excluded were patients who did not reside within the shelters or whose medical records lacked a name or date. We investigated the frequencies and cumulative incidences of acute respiratory infection (ARI), acute gastroenteritis, acute jaundice syndrome, scabies, measles, pertussis, and tetanus. Of 1364 patients who visited the 6 shelter clinics, 1167 patients (86.1%) were eligible for the study. The median total number of evacuees was 2545 (interquartile range [IQR], 2277-3009). ARI was the most common infectious disease; the median number of patients with ARI was 168.8 per week per 1000 evacuees (IQR, 64.5-186.1). Acute gastroenteritis was the second most common; the median number of patients was 23.7 per week per 1000 evacuees (IQR, 5.1-24.3). No other infectious diseases were observed. The median cumulative incidence of ARI per 1000 evacuees in each shelter was 13.1 person-days (IQR, 8.5-18.8). The median cumulative incidence of gastroenteritis was 1.6 person-days (IQR, 0.3-3.4). After the Great Eastern Japan Earthquake and tsunami, outbreaks of ARI and acute gastroenteritis occurred in evacuation shelters.
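The reported rates normalize weekly clinic visits by the shelter population. For concreteness, a one-line calculation; the visit count below is an illustrative number chosen near the reported median, not a figure from the study:

```python
def weekly_rate_per_1000(cases_in_week, evacuees):
    """Clinic visits in one week, normalized per 1000 shelter residents."""
    return cases_in_week / evacuees * 1000.0

# e.g. 430 ARI visits in one week among 2545 evacuees (illustrative):
print(round(weekly_rate_per_1000(430, 2545), 1))  # 169.0
```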

  17. Thermal Infrared Anomalies of Several Strong Earthquakes

    PubMed Central

    Wei, Congxin; Guo, Xiao; Qin, Manzhong

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms 7.0, using satellite infrared remote sensing information. We used new types of data and a new method to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall evolution of the anomalies comprises two main stages: expanding first and narrowing later. We extracted and identified such seismic anomalies by the method of “time-frequency relative power spectrum.” (2) Each case exhibits distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  18. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms 7.0, using satellite infrared remote sensing information. We used new types of data and a new method to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The overall evolution of the anomalies comprises two main stages: expanding first and narrowing later. We extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) Each case exhibits distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has distinctive characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.

  19. Uncertainties in evaluation of hazard and seismic risk

    NASA Astrophysics Data System (ADS)

    Marmureanu, Gheorghe; Marmureanu, Alexandru; Ortanza Cioflan, Carmen; Manea, Elena-Florinela

    2015-04-01

    Two methods are commonly used for seismic hazard assessment: probabilistic (PSHA) and deterministic (DSHA) seismic hazard analysis. Selection of a ground motion for engineering design requires a clear understanding of seismic hazard and risk among stakeholders, seismologists and engineers. What is wrong with traditional PSHA or DSHA? PSHA as commonly used in engineering rests on four assumptions developed by Cornell in 1968: (1) a constant-in-time average occurrence rate of earthquakes; (2) a single point source; (3) variability of ground motion at a site that is independent; (4) Poisson (or "memory-less") behavior of earthquake occurrences. It is a probabilistic method, and "when the causality dies, its place is taken by probability, a prestigious term meant to define our inability to predict the course of nature" (Niels Bohr). The DSHA method was used for the original design of Fukushima Daiichi, but the Japanese authorities moved to probabilistic assessment methods, and the probability of exceeding the design basis acceleration was expected to be 10^-4-10^-6. It was exceeded, in violation of the principles of deterministic hazard analysis (ignoring historical events) (Klügel, J.U., EGU, 2014). PSHA was developed from mathematical statistics and is not based on earthquake science (invalid physical models - point source and Poisson distribution; invalid mathematics; misinterpretation of annual probability of exceedance or return period; etc.), and has become a pure numerical "creation" (Wang, PAGEOPH 168 (2011), 11-25). An uncertainty that is a key component of seismic hazard assessment, for both PSHA and DSHA, is the ground motion attenuation relationship, or so-called ground motion prediction equation (GMPE), which relates a ground motion parameter (e.g., PGA, MMI), earthquake magnitude M, source-to-site distance R, and an uncertainty term. So far, no one has taken into consideration the strongly nonlinear behavior of soils during strong earthquakes.
But
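One of the misinterpretations the authors mention, conflating return period with exceedance probability, is easy to make concrete. Under the Poisson assumption (Cornell's fourth assumption), a 475-year return period corresponds to roughly the familiar 10%-in-50-years hazard level; a sketch comparing the Poisson form with the independent-trials form:

```python
import math

def p_exceed_binomial(return_period_years, window_years):
    """Exceedance probability treating years as independent trials."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** window_years

def p_exceed_poisson(return_period_years, window_years):
    """Exceedance probability under the Poisson assumption."""
    return 1.0 - math.exp(-window_years / return_period_years)

# The common "10% in 50 years" hazard level corresponds to T_R ~ 475 yr:
print(round(p_exceed_poisson(475, 50), 3))   # 0.1
```

The two forms agree to within about 0.0001 here; the misinterpretation the abstract targets is reading "475-year return period" as "won't happen for 475 years" rather than as a ~10% chance in any 50-year window.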

  20. Earthquakes drive focused denudation along a tectonically active mountain front

    NASA Astrophysics Data System (ADS)

    Li, Gen; West, A. Joshua; Densmore, Alexander L.; Jin, Zhangdong; Zhang, Fei; Wang, Jin; Clark, Marin; Hilton, Robert G.

    2017-08-01

    Earthquakes cause widespread landslides that can increase erosional fluxes observed over years to decades. However, the impact of earthquakes on denudation over the longer timescales relevant to orogenic evolution remains elusive. Here we assess erosion associated with earthquake-triggered landslides in the Longmen Shan range at the eastern margin of the Tibetan Plateau. We use the Mw 7.9 2008 Wenchuan and Mw 6.6 2013 Lushan earthquakes to evaluate how seismicity contributes to the erosional budget from short timescales (annual to decadal, as recorded by sediment fluxes) to long timescales (kyr to Myr, from cosmogenic nuclides and low temperature thermochronology). Over this wide range of timescales, the highest rates of denudation in the Longmen Shan coincide spatially with the region of most intense landsliding during the Wenchuan earthquake. Across sixteen gauged river catchments, sediment flux-derived denudation rates following the Wenchuan earthquake are closely correlated with seismic ground motion and the associated volume of Wenchuan-triggered landslides (r^2 > 0.6), and to a lesser extent with the frequency of high intensity runoff events (r^2 = 0.36). To assess whether earthquake-induced landsliding can contribute importantly to denudation over longer timescales, we model the total volume of landslides triggered by earthquakes of various magnitudes over multiple earthquake cycles. We combine models that predict the volumes of landslides triggered by earthquakes, calibrated against the Wenchuan and Lushan events, with an earthquake magnitude-frequency distribution. The long-term, landslide-sustained "seismic erosion rate" is similar in magnitude to regional long-term denudation rates (∼0.5-1 mm yr^-1). The similar magnitude and spatial coincidence suggest that earthquake-triggered landslides are a primary mechanism of long-term denudation in the frontal Longmen Shan. We propose that the location and intensity of seismogenic faulting can contribute to
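The long-term "seismic erosion rate" combines a magnitude-frequency distribution with a landslide-volume scaling law, integrated over magnitude and normalized by the source-region area. A schematic sketch of the pipeline; every coefficient and the area below are placeholders for illustration, not the calibrated values from the paper:

```python
import numpy as np

def landslide_volume_km3(mw):
    """Total coseismic landslide volume per event, log-linear in Mw
    (placeholder coefficients)."""
    return 10.0 ** (1.4 * mw - 11.3)

def annual_rate(mw, dm, a=4.0, b=1.0):
    """Incremental Gutenberg-Richter rate of events in [mw, mw + dm)."""
    return 10.0 ** (a - b * mw) - 10.0 ** (a - b * (mw + dm))

# Integrate volume production over the magnitude range, then normalize
dm = 0.1
mags = np.arange(5.0, 8.0, dm)
vol_km3_per_yr = sum(annual_rate(m, dm) * landslide_volume_km3(m) for m in mags)
area_km2 = 30_000.0                                   # assumed source region
erosion_mm_per_yr = vol_km3_per_yr / area_km2 * 1e6   # km/yr -> mm/yr
```

Because the volume law grows faster with magnitude than the rate decays (for these placeholder exponents), the integral is dominated by the largest events, which is why the magnitude-frequency distribution's upper cutoff matters so much in such budgets.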

  1. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    DTIC Science & Technology

    2008-09-01

    explosions from earthquakes, using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites ... Battone et al., 2002). For example, in Figure 1 we compare an earthquake and an explosion at each of four major test sites (rows), bandpass filtered ... explosions as the frequency increases. Note also there are interesting differences between the test sites, indicating that emplacement conditions (depth

  2. Analysis of Landslides Triggered by October 2005, Kashmir Earthquake

    PubMed Central

    Mahmood, Irfan; Qureshi, Shahid Nadeem; Tariq, Shahina; Atique, Luqman; Iqbal, Muhammad Farooq

    2015-01-01

    Introduction: The October 2005 Kashmir earthquake main shock ruptured the Balakot-Bagh Fault, which runs from Bagh to Balakot, and caused extensive damage in and around these areas. Major landslides were activated during and after the earthquake, inflicting heavy damage in terms of both infrastructure and casualties. These landslides were mainly attributed to the minimum triggering threshold of the earthquake, the geology of the area, climatologic and geomorphologic conditions, mudflows, widening of roads without stability assessment, and heavy rainfall after the earthquake. The landslides were mainly rock and debris falls. The Hattian Bala rock avalanche was the largest landslide associated with the earthquake; it completely destroyed a village and blocked the valley, creating a lake. Discussion: The present study shows that fault rupture and fault geometry directly influence the distribution of landslides, and that a high concentration of landslides was triggered along the rupture zone. The number of landslides increased due to the 2005 earthquake and its aftershocks, and most occurred along faults, rivers and roads. The stability of a landslide mass is strongly influenced by the amplitude, frequency and duration of earthquake-induced ground motion. Most slope failures along the roads resulted from the alteration of slopes during road widening and from seepage during the rainy season immediately after the earthquake. Conclusion: Landslides occurred mostly along weakly cemented and indurated rocks, colluvial sand and cemented soils. It is also worth noting that fissures and ground cracks induced by the main shock and aftershocks are still present, and they pose a major potential threat of future landslides in the event of renewed earthquake activity or extreme weather conditions. PMID:26366324

  3. Analysis of Landslides Triggered by October 2005, Kashmir Earthquake.

    PubMed

    Mahmood, Irfan; Qureshi, Shahid Nadeem; Tariq, Shahina; Atique, Luqman; Iqbal, Muhammad Farooq

    2015-08-26

    The October 2005 Kashmir earthquake main shock ruptured the Balakot-Bagh Fault, which runs from Bagh to Balakot, and caused extensive damage in and around these areas. Major landslides were activated during and after the earthquake, inflicting heavy damage in terms of both infrastructure and casualties. These landslides were mainly attributed to the minimum triggering threshold of the earthquake, the geology of the area, climatologic and geomorphologic conditions, mudflows, widening of roads without stability assessment, and heavy rainfall after the earthquake. The landslides were mainly rock and debris falls. The Hattian Bala rock avalanche was the largest landslide associated with the earthquake; it completely destroyed a village and blocked the valley, creating a lake. The present study shows that fault rupture and fault geometry directly influence the distribution of landslides, and that a high concentration of landslides was triggered along the rupture zone. The number of landslides increased due to the 2005 earthquake and its aftershocks, and most occurred along faults, rivers and roads. The stability of a landslide mass is strongly influenced by the amplitude, frequency and duration of earthquake-induced ground motion. Most slope failures along the roads resulted from the alteration of slopes during road widening and from seepage during the rainy season immediately after the earthquake. Landslides occurred mostly along weakly cemented and indurated rocks, colluvial sand and cemented soils. It is also worth noting that fissures and ground cracks induced by the main shock and aftershocks are still present, and they pose a major potential threat of future landslides in the event of renewed earthquake activity or extreme weather conditions.

  4. Dual Megathrust Slip Behaviors of the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Meng, L.; Huang, H.; Burgmann, R.; Ampuero, J. P.; Strader, A. E.

    2014-12-01

    The transition between seismic rupture and aseismic creep is of central interest for better understanding the mechanics of subduction processes. A M 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of Northern Chile. This event was preceded by a 2-week-long foreshock sequence including a M 6.7 earthquake. Repeating earthquakes found within the foreshock sequence migrated towards the mainshock area, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations in the recurrence times of the repeating earthquakes highlight the diverse seismic and aseismic slip behaviors of different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while other repeaters occurred both before and after the mainshock in the area complementary to the mainshock rupture. The spatial and temporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip in leading up to the mainshock and aftershock activities. Various finite fault models indicate that the coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show slow initiation with low-amplitude moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady initiation at high frequency (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the high-frequency rupture remains within an area of low gravity anomaly, suggesting possible upper-crustal structures that promote high-frequency generation. Back-projection also shows an episode of reverse rupture propagation, which suggests a delayed failure of asperities in

  5. Signals of ENPEMF Used in Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Hao, G.; Dong, H.; Zeng, Z.; Wu, G.; Zabrodin, S. M.

    2012-12-01

    The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of abnormal crustal magnetic field pulses affected by earthquakes, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbances, and other energy-coupling processes between the Sun and Earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A. A. Vorobyov, who hypothesized that pulses can arise not only in the atmosphere but also within the Earth's crust due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely at different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of ENPEMF signals generally lie in the kHz range; frequencies within the 5-25 kHz range can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two special devices are placed in accordance with the S-N and W-E directions. Dramatic deviation of the pulse waveforms obtained from the instruments from the normal reference envelope diagram should indicate a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve geodynamic monitoring and enrich earthquake prediction methods. We suggest that further research address the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effects of the Earth's gravity tide and solid tidal waves. This method may also provide a promising application in

  6. Sensitivity of Coulomb stress changes to slip models of source faults: A case study for the 2011 Mw 9.0 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Wang, J.; Xu, C.; Furlong, K.; Zhong, B.; Xiao, Z.; Yi, L.; Chen, T.

    2017-12-01

    Although Coulomb stress changes induced by earthquake events have been used to quantify stress transfers and to retrospectively explain stress triggering among earthquake sequences, reliable prospective earthquake forecasting remains scarce. To generate a robust Coulomb stress map for earthquake forecasting, uncertainties in Coulomb stress changes associated with the source fault, the receiver fault, the friction coefficient and Skempton's coefficient need to be exhaustively considered. In this paper, we specifically explore the uncertainty in slip models of the source fault of the 2011 Mw 9.0 Tohoku-oki earthquake as a case study. This earthquake was chosen because of its wealth of finite-fault slip models, from which we compute the coseismic Coulomb stress changes induced by the mainshock. Our results indicate that nearby Coulomb stress changes can differ considerably between slip models, both for the Coulomb stress map at a given depth and on the Pacific subducting slab. The triggering rates for three months of aftershocks of the mainshock, with and without considering the uncertainty in slip models, differ significantly, decreasing from 70% to 18%. Reliable Coulomb stress changes in the three seismogenic zones of Nanki, Tonankai and Tokai are insignificant, only approximately 0.04 bar. By contrast, the portions of the Pacific subducting slab at a depth of 80 km and beneath Tokyo received a positive Coulomb stress change of approximately 0.2 bar. The standard errors of the seismicity rate and earthquake probability based on the Coulomb rate-and-state (CRS) model decay much faster with elapsed time in stress-triggering zones than in stress shadows, meaning that the uncertainties in Coulomb stress changes in stress-triggering zones would not drastically affect assessments of the seismicity rate and earthquake probability based on the CRS model in the intermediate to long term.
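    The Coulomb failure stress change underlying such calculations has a standard form, dCFS = d_tau + mu' * d_sigma_n, where the effective friction mu' folds in Skempton's coefficient. A minimal sketch (the input stress values and the sign convention choice here are illustrative, not from the paper):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change resolved on a receiver fault.

    d_shear:  shear stress change in the slip direction (bar)
    d_normal: normal stress change, unclamping taken positive (bar)
    mu_eff:   effective friction coefficient, incorporating pore-pressure
              effects via Skempton's coefficient (value assumed)
    """
    return d_shear + mu_eff * d_normal
```

    A positive result moves the receiver fault toward failure; sensitivity to `mu_eff` is one of the uncertainty sources the abstract lists alongside the slip-model uncertainty.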

  7. Constraints on the long-period moment-dip tradeoff for the Tohoku earthquake

    USGS Publications Warehouse

    Tsai, Victor C.; Hayes, Gavin P.; Duputel, Zacharie

    2011-01-01

    Since the work of Kanamori and Given (1981), it has been recognized that shallow, pure dip-slip earthquakes excite long-period surface waves such that it is difficult to independently constrain the moment (M0) and the dip (δ) of the source mechanism, with only the product M0 sin(2δ) being well constrained. Because of this, it is often assumed that the primary discrepancies between the moments of shallow thrust earthquakes are due to this moment-dip tradeoff. In this work, we quantify how severe this moment-dip tradeoff is depending on the depth of the earthquake, the station distribution, the closeness of the mechanism to pure dip-slip, and the quality of the data. We find that both long-period Rayleigh and Love wave modes have moment-dip resolving power even for shallow events, especially when stations are close to certain azimuths with respect to mechanism strike and when source depth is well determined. We apply these results to USGS W phase inversions of the recent M9.0 Tohoku, Japan earthquake and estimate the likely uncertainties in dip and moment associated with the moment-dip tradeoff. After discussing some of the important sources of moment and dip error, we suggest two methods for potentially improving this uncertainty.
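    The tradeoff can be made concrete: if only the product M0 sin(2δ) is constrained by the long-period data, then any alternative dip implies a rescaled moment. A small sketch with hypothetical values (not the Tohoku inversion's numbers):

```python
import math

def tradeoff_moment(m0_ref, dip_ref_deg, dip_deg):
    """Moment implied by an alternative dip under the constraint that
    M0 * sin(2*dip) is fixed (Kanamori & Given, 1981)."""
    k = m0_ref * math.sin(math.radians(2.0 * dip_ref_deg))  # the resolved product
    return k / math.sin(math.radians(2.0 * dip_deg))        # rescaled moment
```

    For a shallow thrust, steepening an assumed 10° dip to 15° while keeping the product fixed lowers the inferred moment by roughly a third, which is why independently constraining dip matters so much for magnitude estimates.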

  8. Application of Phase-Weighted Stacking to Low-Frequency Earthquakes near the Alpine Fault, Central Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Baratin, L. M.; Townend, J.; Chamberlain, C. J.; Savage, M. K.

    2015-12-01

    Characterising seismicity in the vicinity of the Alpine Fault, a major transform boundary late in its typical earthquake cycle, may provide constraints on the state of stress preceding a large earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault toward an anticipated major rupture. We work with a continuous seismic dataset collected between 2009 and 2012 from a network of short-period seismometers, the Southern Alps Microearthquake Borehole Array (SAMBA). Fourteen primary LFE templates have been used to scan the dataset using a matched-filter technique based on an iterative cross-correlation routine. This method allows the detection of similar signals and establishes LFE families with common hypocenter locations. The detections are then combined for each LFE family using phase-weighted stacking (Thurber et al., 2014) to produce a signal with the highest possible signal-to-noise ratio. We find this method to be successful in increasing the number of LFE detections by roughly 10% in comparison with linear stacking. Our next step is to manually pick polarities on first arrivals of the phase-weighted stacked signals and compute preliminary locations. We are working to estimate LFE focal mechanism parameters and refine the focal mechanism solutions using an amplitude ratio technique applied to the linear stacks. LFE focal mechanisms should provide new insight into the geometry and rheology of the Alpine Fault and the stress field prevailing in the central Southern Alps.
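    Phase-weighted stacking as described, i.e. weighting a linear stack by the coherence of instantaneous phases across traces (the technique goes back to Schimmel & Paulssen, 1997), can be sketched as follows, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, power=2):
    """Phase-weighted stack: the linear stack is down-weighted wherever
    instantaneous phases disagree across traces, suppressing incoherent noise.

    traces: 2-D array-like, shape (n_traces, n_samples)
    power:  sharpness of the phase-coherence weighting
    """
    traces = np.asarray(traces, dtype=float)
    analytic = hilbert(traces, axis=1)           # analytic signal per trace
    phasors = analytic / np.abs(analytic)        # unit instantaneous-phase phasors
    coherence = np.abs(phasors.mean(axis=0))     # 1 where all phases align, ~0 otherwise
    return traces.mean(axis=0) * coherence ** power
```

    For perfectly coherent traces the weight is 1 everywhere and the result equals the linear stack; incoherent samples are attenuated, which is why the stacked LFE templates gain signal-to-noise over linear stacking.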

  9. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with ideas similar to those used for large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
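    The Gutenberg-Richter magnitude-frequency relation invoked above, log10 N(>=M) = a - b*M, is compact enough to state in code; the a and b values below are illustrative, not fitted to any catalog:

```python
def gutenberg_richter_count(magnitude, a=5.0, b=1.0):
    """Cumulative annual number of events with magnitude >= M under the
    Gutenberg-Richter relation log10 N = a - b*M (a, b illustrative)."""
    return 10.0 ** (a - b * magnitude)
```

    With b = 1, each unit increase in magnitude makes events tenfold rarer, the power-law (fractal) scaling the abstract draws on for both earthquakes and market crashes.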

  10. Dynamic strains for earthquake source characterization

    USGS Publications Warehouse

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
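    A magnitude-distance regression of the general kind described can be sketched as a least-squares fit; the functional form (log-peak-strain linear in magnitude and log distance) and all values below are illustrative assumptions, not the paper's calibrated model or its site/path bias terms:

```python
import numpy as np

def fit_strain_scaling(mag, dist_km, log_peak_strain):
    """Least-squares fit of log10(peak strain) = c0 + c1*M + c2*log10(R).
    Returns the coefficient vector (c0, c1, c2). Form is illustrative."""
    mag = np.asarray(mag, dtype=float)
    # design matrix: intercept, magnitude, log-distance columns
    G = np.column_stack([np.ones_like(mag), mag, np.log10(dist_km)])
    coeffs, *_ = np.linalg.lstsq(G, log_peak_strain, rcond=None)
    return coeffs
```

    Residuals from such a fit, grouped by station or by source region, are how the site-station and source-path biases the abstract mentions would be identified.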

  11. Continuous-cyclic variations in the b-value of the earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    El-Isa, Z. H.

    2013-10-01

    Seismicity of the Earth (M ≥ 4.5) was compiled from the NEIC, IRIS and ISC catalogues and used to compute b-values based on various time windows. It is found that continuous cyclic b-variations occur on both long and short time scales, the short-time-scale variations being of much higher amplitude, sometimes in excess of 0.7 of the absolute b-value. These variations occur not only yearly or monthly but also daily. Before the occurrence of large earthquakes, b-values start increasing with variable gradients that are affected by foreshocks. In some cases, the gradient is reduced to zero or to a negative value a few days before the earthquake occurrence. In general, calculated b-values attain maxima 1 day before large earthquakes and minima soon after their occurrence. Both linear-regression and maximum-likelihood methods give correlatable but variable results. It is found that an expanding time window from a fixed starting point is more effective for the study of b-variations. The calculated b-variations for the whole Earth, its hemispheres, its quadrants and the epicentral regions of some large earthquakes are of both local and regional character, which may indicate that in such cases the geodynamic processes acting within a certain region can have effects of regional extent within the Earth. b-values have long been known to vary with a number of local and regional factors, including tectonic stresses. The results reported here indicate that geotectonic stress remains the most significant factor controlling b-variations. It is found that for earthquakes with Mw ≥ 7, an increase of about 0.20 in the b-value implies a stress increase that will result in an earthquake with a magnitude one unit higher.
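    The maximum-likelihood b-value estimate referred to above is commonly computed with Aki's (1965) formula, b = log10(e) / (mean(M) - Mc); a minimal sketch (the paper's own windowing and catalog choices are not reproduced here):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for a catalog complete above m_c.
    dm is the optional Utsu correction for binned magnitudes (dm = bin width)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]                         # keep events above completeness
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
```

    Applied in a sliding or expanding time window, this single formula is enough to reproduce the kind of temporal b-value curves the study analyzes.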

  12. What Can Sounds Tell Us About Earthquake Interactions?

    NASA Astrophysics Data System (ADS)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influences earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are important for understanding the underlying physics of earthquakes and other seismic phenomena such as tremor, in addition to their source characteristics (e.g. frequency content, amplitudes). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks), or a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's-eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw 5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time-compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to an understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these

  13. Revisiting the 2004 Sumatra-Andaman earthquake in a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Sladen, A.; Jiang, J.; Simons, M.

    2015-12-01

    The 2004 Mw 9.25 Sumatra-Andaman earthquake is the largest seismic event of the modern instrumental era. Despite considerable effort to analyze the characteristics of its rupture, the different available observations have proven difficult to integrate jointly into a finite-fault slip model. In particular, the critical near-field geodetic records contain variable and significant post-seismic signal (between 2 weeks and 2 months), while the satellite altimetry records of the associated tsunami are affected by various sources of uncertainty (e.g. source rupture velocity, meso-scale oceanic currents). In this study, we investigate the quasi-static slip distribution of the Sumatra-Andaman earthquake by carefully accounting for the different sources of uncertainty in the joint inversion of an extended set of geodetic and tsunami data. To do so, we use non-diagonal covariance matrices reflecting both data and model uncertainties in a fully Bayesian inversion framework. As model errors are particularly large for mega-earthquakes, we also rely on advanced simulation codes (normal-mode theory on a layered spherical Earth for the static displacement field and non-hydrostatic equations for the tsunami) and account for the 3D curvature of the megathrust interface to reduce the associated epistemic uncertainties. The fully Bayesian inversion framework then enables us to derive the families of possible models compatible with the unevenly distributed and sometimes ambiguous measurements. We find two regions of high slip at latitudes 3°-4°N and 7°-8°N, with amplitudes that probably reached values as large as 40 m and possibly larger. Such amounts of slip were not proposed by previous studies, which might have been biased by smoothing regularizations. We also find significant slip (around 20 m) offshore the Andaman Islands, absent in earlier studies. Furthermore, we find that the rupture very likely involved shallow slip, with the possibility of reaching the trench.
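    For the linear-Gaussian building block of such an inversion, using a non-diagonal data covariance Cd (data plus model error) and a prior covariance Cm yields closed-form posterior moments. This is only a sketch of that building block under a linearity assumption, not the study's full (much larger, partly nonlinear) machinery:

```python
import numpy as np

def bayesian_linear_inversion(G, d, Cd, Cm, m_prior):
    """Posterior mean and covariance for d = G m + noise, with Gaussian
    data covariance Cd (possibly non-diagonal) and Gaussian prior (m_prior, Cm)."""
    Cd_inv = np.linalg.inv(Cd)
    Cm_inv = np.linalg.inv(Cm)
    # posterior covariance combines data information and prior information
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    # posterior mean: prior updated by the covariance-weighted data misfit
    m_post = m_prior + C_post @ G.T @ Cd_inv @ (d - G @ m_prior)
    return m_post, C_post
```

    Off-diagonal terms in Cd are how correlated prediction errors (e.g. from rupture-velocity or oceanic-current uncertainty) are prevented from being mapped into spurious slip.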

  14. Comparison of Observed Spatio-temporal Aftershock Patterns with Earthquake Simulator Results

    NASA Astrophysics Data System (ADS)

    Kroll, K.; Richards-Dinger, K. B.; Dieterich, J. H.

    2013-12-01

    Due to the complex nature of faulting in southern California, knowledge of rupture behavior near fault step-overs is of critical importance for properly quantifying and mitigating seismic hazards. Estimates of earthquake probability are complicated by the uncertainty in whether a rupture will stop at or jump a fault step-over, which affects both the magnitude and the frequency of occurrence of earthquakes. In recent years, earthquake simulators and dynamic rupture models have begun to address the effects of complex fault geometries on earthquake ground motions and rupture propagation. Early models incorporated vertical faults with highly simplified geometries. Many current studies examine the effects of varied fault geometry, fault step-overs, and fault bends on rupture patterns; however, these works are limited by the small numbers of integrated fault segments and simplified orientations. The previous work of Kroll et al. (2013) on the northern extent of the 2010 El Mayor-Cucapah rupture in the Yuha Desert region uses precise aftershock relocations to show an area of complex conjugate faulting within the step-over region between the Elsinore and Laguna Salada faults. Here, we incorporate this fine-scale fault structure, defined through seismological, geologic and geodetic means, into the physics-based earthquake simulator RSQSim to explore the effects of fine-scale structures on stress transfer and rupture propagation, and to examine the mechanisms that control aftershock activity and local triggering of other large events. We run simulations with primary fault structures in the state of California and northern Baja California and incorporate complex secondary faults in the Yuha Desert region. These models produce aftershock activity that enables comparison between the observed and predicted distributions and allows for examination of the mechanisms that control them. We investigate how the spatial and temporal distribution of aftershocks is affected by

  15. A broadband chip-scale optical frequency synthesizer at 2.7 × 10−16 relative uncertainty

    PubMed Central

    Huang, Shu-Wei; Yang, Jinghui; Yu, Mingbin; McGuyer, Bart H.; Kwong, Dim-Lee; Zelevinsky, Tanya; Wong, Chee Wei

    2016-01-01

    Optical frequency combs—coherent light sources that connect optical frequencies with microwave oscillations—have become the enabling tool for precision spectroscopy, optical clockwork, and attosecond physics over the past decades. Current benchmark systems are self-referenced femtosecond mode-locked lasers, but Kerr nonlinear dynamics in high-Q solid-state microresonators has recently demonstrated promising features as alternative platforms. The advance not only fosters studies of chip-scale frequency metrology but also extends the realm of optical frequency combs. We report the full stabilization of chip-scale optical frequency combs. The microcomb’s two degrees of freedom, one of the comb lines and the native 18-GHz comb spacing, are simultaneously phase-locked to known optical and microwave references. Active comb spacing stabilization improves long-term stability by six orders of magnitude, reaching a record instrument-limited residual instability of 3.6 mHz/τ. Comparing 46 nitride frequency comb lines with a fiber laser frequency comb, we demonstrate the unprecedented microcomb tooth-to-tooth relative frequency uncertainty down to 50 mHz and 2.7 × 10−16, heralding novel solid-state applications in precision spectroscopy, coherent communications, and astronomical spectrography. PMID:27152341

  16. Characteristics of broadband slow earthquakes explained by a Brownian model

    NASA Astrophysics Data System (ADS)

    Ide, S.; Takeo, A.

    2017-12-01

    The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered broadband phenomena including tectonic tremor, low-frequency earthquakes, and very-low-frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of the broadband slow earthquake may not have been widely accepted, most recent observations are consistent with it. Here we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment-rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic

  17. Determining on-fault earthquake magnitude distributions from integer programming

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2018-02-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
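    The binary-assignment formulation can be illustrated at toy scale with a mixed-integer solver. This sketch assumes `scipy.optimize.milp` is available and uses invented slip values and bounds; it solves a pure feasibility version (zero objective) of the placement problem rather than the paper's full misfit minimization:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy data: place each synthetic earthquake on exactly one fault so that the
# summed slip on every fault stays within its slip-rate uncertainty bounds.
slip = np.array([1.0, 2.0, 3.0])                     # slip per event (invented)
lb, ub = np.array([2.5, 2.5]), np.array([3.5, 3.5])  # per-fault bounds (invented)
n_eq, n_f = slip.size, lb.size                       # x[e, f] flattened row-major

A_assign = np.kron(np.eye(n_eq), np.ones((1, n_f)))  # each event on exactly one fault
A_slip = np.kron(slip.reshape(1, -1), np.eye(n_f))   # per-fault total slip

res = milp(
    c=np.zeros(n_eq * n_f),                          # feasibility: zero objective
    constraints=[LinearConstraint(A_assign, 1, 1),
                 LinearConstraint(A_slip, lb, ub)],
    integrality=np.ones(n_eq * n_f),                 # all variables binary
    bounds=Bounds(0, 1),
)
totals = A_slip @ res.x                              # summed slip per fault
```

    In the California-scale problem the same structure holds, but with >10^6 binary variables, per-fault length constraints, and an objective that penalizes slip-rate misfit instead of the zero objective used here.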

  18. Determining on-fault earthquake magnitude distributions from integer programming

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10⁶ variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.

  19. Earthquake sources near Uturuncu Volcano

    NASA Astrophysics Data System (ADS)

    Keyson, L.; West, M. E.

    2013-12-01

    Uturuncu, located in southern Bolivia near the border with Chile and Argentina, is a dacitic volcano that was last active 270 ka. It is part of the Altiplano-Puna Volcanic Complex, which spans 50,000 km² and comprises a series of ignimbrite flare-ups since ~23 Ma. Two sets of evidence suggest that the region is underlain by a significant magma body. First, seismic velocities show a low-velocity layer consistent with a magmatic sill below depths of 15-20 km. This inference is corroborated by high electrical conductivity between 10 km and 30 km depth. This magma body, the so-called Altiplano-Puna Magma Body (APMB), is the likely source of volcanic activity in the region. InSAR studies show that during the 1990s, the volcano experienced an average uplift of about 1 to 2 cm per year. The deformation is consistent with an expanding source at depth. Though the Uturuncu region exhibits high rates of crustal seismicity, any connection between the inflation and the seismicity is unclear. We investigate the root causes of these earthquakes using a temporary network of 33 seismic stations, part of the PLUTONS project. Our primary approach is based on hypocenter locations and magnitudes paired with correlation-based relative relocation techniques. We find a strong tendency toward earthquake swarms that cluster in space and time. These swarms often last a few days and consist of numerous earthquakes with similar source mechanisms. Most seismicity occurs in the top 10 kilometers of the crust and is characterized by well-defined phase arrivals and significant high-frequency content. The frequency-magnitude relationship of this seismicity demonstrates b-values consistent with tectonic sources. There is a strong clustering of earthquakes around the Uturuncu edifice. Earthquakes elsewhere in the region align in bands striking northwest-southeast, consistent with regional stresses.

  20. The 1994 Northridge, California, earthquake: Investigation of rupture velocity, risetime, and high-frequency radiation

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.; Mendoza, C.

    1996-01-01

    A hybrid global search algorithm is used to solve the nonlinear problem of calculating slip amplitude, rake, risetime, and rupture time on a finite fault. Thirty-five strong motion velocity records are inverted by this method over the frequency band from 0.1 to 1.0 Hz for the Northridge earthquake. Four regions of larger-amplitude slip are identified: one near the hypocenter at a depth of 17 km, a second west of the hypocenter at about the same depth, a third updip from the hypocenter at a depth of 10 km, and a fourth updip from the hypocenter and to the northwest. The results further show an initial fast rupture with a velocity of 2.8 to 3.0 km/s followed by a slow termination of the rupture with velocities of 2.0 to 2.5 km/s. The initial energetic rupture phase lasts for 3 s, extending out 10 km from the hypocenter. Slip near the hypocenter has a short risetime of 0.5 s, which increases to 1.5 s for the major slip areas removed from the hypocentral region. The energetic rupture phase is also shown to be the primary source of high-frequency radiation (1-15 Hz) by an inversion of acceleration envelopes. The same global search algorithm is used in the envelope inversion to calculate high-frequency radiation intensity on the fault and rupture time. The rupture timing from the low- and high-frequency inversions is similar, indicating that the high frequencies are produced primarily at the mainshock rupture front. Two major sources of high-frequency radiation are identified within the energetic rupture phase, one at the hypocenter and another deep source to the west of the hypocenter. The source at the hypocenter is associated with the initiation of rupture and the breaking of a high-stress-drop asperity and the second is associated with stopping of the rupture in a westerly direction.

  1. Age of the Subducting Philippine Sea Slab and Mechanism of Low-Frequency Earthquakes

    NASA Astrophysics Data System (ADS)

    Hua, Yuanyuan; Zhao, Dapeng; Xu, Yixian; Liu, Xin

    2018-03-01

    Nonvolcanic low-frequency earthquakes (LFEs) usually occur in young and warm subduction zones under conditions of near-lithostatic pore fluid pressure. However, the relation between LFEs and the age of the subducting slab has not been documented so far. Here we estimate the lithospheric age of the subducting Philippine Sea (PHS) slab beneath the Nankai arc by linking seismic tomography and a plate reconstruction model. Our results show that the LFEs in SW Japan take place in young parts (~17-26 Myr) of the PHS slab. However, no LFE occurs beneath the Kii channel, where the PHS slab is very young (~15 Myr) and thin (~29 km), forming an LFE gap there. According to the present results and previous works, we think that the LFE gap at the Kii channel is caused by the joint effects of several factors, including the youngest slab age, high temperature, low fluid content, high permeability of the overlying plate, a slab tear, and hot upwelling flow below the PHS slab.

  2. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted, from geologic and geodetic data, to accommodate high strain rates but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data.
The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the

  3. Spatiotemporal evolution of the completeness magnitude of the Icelandic earthquake catalogue from 1991 to 2013

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Mignan, Arnaud; Vogfjörð, Kristin S.

    2017-07-01

    In 1991, a digital seismic monitoring network with automatic operation was installed in Iceland. After 20 years of operation, we explore for the first time its nationwide performance by analysing the spatiotemporal variations of the completeness magnitude. We use the Bayesian magnitude of completeness (BMC) method, which combines local completeness magnitude observations with prior information based on the density of seismic stations. Additionally, we test the impact of earthquake location uncertainties on the BMC results by filtering the catalogue using a multivariate analysis that identifies outliers in the hypocentre error distribution. We find that the entire North-to-South active rift zone shows a relatively low magnitude of completeness Mc in the range 0.5-1.0, highlighting the ability of the Icelandic network to detect small earthquakes. This work also demonstrates the influence of earthquake location uncertainties on the spatiotemporal magnitude of completeness analysis.

  4. Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.

    2015-12-01

    Large earthquakes show semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in a seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. Nava et al. (2013) and Quinteros et al. (2013) realized that not all earthquakes in a given region need belong to the same sequence, since there can be more than one process of stress accumulation and release in it; they also proposed a method to identify semi-periodic sequences through analytic Fourier analysis. This work presents improvements on the above-mentioned method: the influence of earthquake size on the spectral analysis and its importance in identifying semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; the estimation of appropriate upper-limit uncertainties to use in forecasts; and the use of Bayesian analysis to evaluate forecast performance. The improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan Arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
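    The labeled-point-process spectrum can be sketched directly: each event contributes a complex phasor at its occurrence time, weighted by a label derived from its size, and a spectral peak reveals the recurrence period. The sequence, jitter, and weighting below are invented for illustration, not taken from the cited studies:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Invented semi-periodic sequence: ~50 yr recurrence with Gaussian jitter
    T_true, n_ev = 50.0, 12
    times = np.arange(n_ev) * T_true + rng.normal(0.0, 3.0, n_ev)
    mags = rng.uniform(7.0, 8.0, n_ev)

    # Treat occurrence times as a labeled point process: each event contributes
    # a phasor weighted by its magnitude label
    weights = mags / mags.sum()
    freqs = np.linspace(0.005, 0.05, 2000)                  # cycles per year
    spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, times)) @ weights)

    T_rec = 1.0 / freqs[spec.argmax()]
    print(round(T_rec, 1))                                   # recovered recurrence (yr)
    ```

    The spectral peak sits near 1/50 yr⁻¹ despite the jitter; events belonging to a different stress-release process would simply fail to reinforce any common peak.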

  5. Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.

    2010-12-01

    Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a d^b + c, with d the distance to the 5th-nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion, and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing an Mc map for the period 1994-2010.
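    The Bayesian merge at the heart of BMC is a precision-weighted average of two Gaussian estimates: the distance-based prior Mc_pred(d) and the locally observed Mc. A minimal sketch follows; the coefficients a, b, c and the uncertainty values are invented placeholders, not the published calibration:

    ```python
    import numpy as np

    # Prior Mc from distance to the 5th-nearest station, Mc_pred(d) = a*d**b + c.
    # Coefficients are illustrative placeholders, not published values.
    a, b, c = 0.1, 1.2, 0.5

    def mc_prior(d_km):
        return a * d_km**b + c

    # Bayesian merge of prior and locally observed Mc, weighted by their variances
    def bmc_merge(mc_obs, sigma_obs, d_km, sigma_prior=0.2):
        mc_p = mc_prior(d_km)
        w_p, w_o = 1.0 / sigma_prior**2, 1.0 / sigma_obs**2
        post = (w_p * mc_p + w_o * mc_obs) / (w_p + w_o)
        sigma_post = np.sqrt(1.0 / (w_p + w_o))
        return post, sigma_post

    post, sig = bmc_merge(mc_obs=1.4, sigma_obs=0.4, d_km=10.0)
    print(round(post, 2), round(sig, 2))
    ```

    The posterior uncertainty is smaller than either input uncertainty, which is why the merged map has no gaps: wherever a local estimate is poor or missing, the distance-based prior dominates.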

  6. Physics of Earthquake Disaster: From Crustal Rupture to Building Collapse

    NASA Astrophysics Data System (ADS)

    Uenishi, Koji

    2018-05-01

    Earthquakes of relatively greater magnitude may cause serious, sometimes unexpected failures of natural and human-made structures, either on the surface, underground, or even at sea. In this review, by treating several examples of extraordinary earthquake-related failures that range from the collapse of every second building in a commune to the initiation of spontaneous crustal rupture at depth, we consider the physical background behind the apparently abnormal earthquake disaster. Simple but rigorous dynamic analyses reveal that such seemingly unusual failures actually occurred for obvious reasons, which may remain unrecognized in part because in conventional seismic analyses only kinematic aspects of the effects of lower-frequency seismic waves below 1 Hz are normally considered. Instead of kinematics, some dynamic approach that takes into account the influence of higher-frequency components of waves over 1 Hz will be needed to anticipate and explain such extraordinary phenomena and mitigate the impact of earthquake disaster in the future.

  7. a Collaborative Cyberinfrastructure for Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth, or magnitude estimates, and/or misevaluation of propagation characteristics, site effects, and building vulnerability can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second global earthquake information website, within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools, such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from real-time analysis of the traffic on the EMSC website, a method named flashsourcing: in the case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and, in some cases, map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground motion records performed by volunteers, and we are also involved in a project to detect earthquakes from ground motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...) not only to distribute earthquake information but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this
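    Stripped to its essentials, flashsourcing is surge detection on website-visit counts: a felt earthquake appears as a step in visitor rate far above the background. A toy sketch with invented numbers (the real system works on IP-geolocated traffic, not a single aggregate series):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy flashsourcing: detect a surge of website visitors after a felt earthquake
    baseline = 20.0                                  # mean visits per minute
    visits = rng.poisson(baseline, 120).astype(float)
    visits[60:70] += np.linspace(300, 30, 10)        # surge starting at minute 60

    # Alarm when the rate exceeds the pre-event mean by 5 standard deviations
    mu, sigma = visits[:30].mean(), visits[:30].std()
    alarm = np.where(visits > mu + 5 * sigma)[0]
    print(alarm[0])
    ```

    In the deployed method the geographical origin of the surge, via IP addresses, is what turns this detection into a felt-area map.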

  8. Issues on the Japanese Earthquake Hazard Evaluation

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Fukushima, Y.; Sagiya, T.

    2013-12-01

    The 2011 Great East Japan Earthquake forced the policy of countermeasures to earthquake disasters, including earthquake hazard evaluations, to be changed in Japan. Before March 11, Japanese earthquake hazard evaluation was based on the history of earthquakes that repeatedly occur and on the characteristic earthquake model. The source region of an earthquake was identified and its occurrence history revealed; the conditional probability was then estimated using the renewal model. However, the Japanese authorities changed the policy after the megathrust earthquake in 2011 such that the largest earthquake in a specific seismic zone should be assumed on the basis of available scientific knowledge. According to this policy, three important reports were issued during these two years. First, the Central Disaster Management Council (CDMC) issued a new estimate of damage from a hypothetical Mw 9 earthquake along the Nankai trough during 2011 and 2012. The model predicts, as a maximum, a 34 m high tsunami on the southern Shikoku coast and intensity 6 or higher on the JMA scale in most areas of southwest Japan. Next, the Earthquake Research Council revised the long-term earthquake hazard evaluation of earthquakes along the Nankai trough in May 2013, which discarded the characteristic earthquake model and put much emphasis on the diversity of earthquakes. The so-called 'Tokai' earthquake was negated in this evaluation. Finally, another report by the CDMC concluded that, with current knowledge, it is hard to predict the occurrence of large earthquakes along the Nankai trough using present techniques, given the diversity of earthquake phenomena. These reports created sensations throughout the country, and local governments are struggling to prepare countermeasures. The reports commented on the large uncertainty in their evaluations near their ends, but are these messages transmitted properly to the public? Earthquake scientists, including the authors, are involved in

  9. Estimation of Source Parameters of Historical Major Earthquakes from 1900 to 1970 around Asia and Analysis of Their Uncertainties

    NASA Astrophysics Data System (ADS)

    Han, J.; Zhou, S.

    2017-12-01

    Asia, located at the junction of the Eurasian, Pacific, and Indo-Australian plates, is the continent with the highest seismicity. An earthquake catalogue based on modern seismic network recordings has been established in Asia since around 1970, and the catalogue before 1970 is much less accurate because few stations were available. With a modern catalogue history of less than 50 years, research in seismology is quite limited. With the appearance of improved Earth velocity structure models, modified location methods, and high-accuracy Optical Character Recognition techniques, travel time data of earthquakes from 1900 to 1970 can be included in research and more accurate locations can be determined for historical earthquakes. Hence, parameters of these historical earthquakes can be obtained more precisely, and research methods such as the ETAS model can be applied on a much longer time scale. This work focuses on the following three aspects: (1) relocating more than 300 historical major earthquakes (M≥7.0) in Asia based on instrumental records in the Shide Circulars, International Seismological Summary, and EHB Bulletin between 1900 and 1970; (2) calculating the focal mechanisms of more than 50 events from P-wave first motion records of the ISS; (3) inferring focal mechanisms of historical major earthquakes based on geological data, the tectonic stress field, and the relocation results.
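    The relocation step can be sketched as a travel-time residual minimization. The toy below (uniform velocity, flat geometry, invented station layout; the actual work uses improved 3-D Earth velocity models and the Shide/ISS/EHB arrival data) grid-searches for the epicenter that minimizes demeaned travel-time residuals, with the demeaning absorbing the unknown origin time:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy relocation: grid search over epicenter minimizing travel-time residuals
    v = 6.0                                    # km/s, uniform crustal velocity (toy)
    stations = rng.uniform(-200, 200, (12, 2))
    true_xy = np.array([30.0, -40.0])

    dist = np.linalg.norm(stations - true_xy, axis=1)
    t_obs = dist / v + rng.normal(0.0, 0.2, len(stations))   # noisy arrivals

    xs = np.arange(-100.0, 100.5, 1.0)
    ys = np.arange(-100.0, 100.5, 1.0)
    best, best_rms = None, np.inf
    for x in xs:
        for y in ys:
            d = np.linalg.norm(stations - [x, y], axis=1)
            r = t_obs - d / v
            r -= r.mean()                      # absorb unknown origin time
            rms = np.sqrt((r ** 2).mean())
            if rms < best_rms:
                best, best_rms = (x, y), rms

    print(best)
    ```

    With 0.2 s timing noise and twelve stations the recovered epicenter lands within a few kilometres of the true one; the historical problem is harder mainly because of sparse stations and reading errors in the scanned bulletins.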

  10. Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise

    NASA Astrophysics Data System (ADS)

    Scott, Chelsea Phipps

    Spatially dense maps of coseismic deformation derived from Interferometric Synthetic Aperture Radar (InSAR) datasets result in valuable constraints on earthquake processes. The recent increase in the quantity of observations of coseismic deformation facilitates the examination of signals in many tectonic environments associated with earthquakes of varying magnitude. Efforts to place robust constraints on the evolution of the crustal stress field following great earthquakes often rely on knowledge of the earthquake location, the fault geometry, and the distribution of slip along the fault plane. Well-characterized uncertainties and biases strengthen the quality of inferred earthquake source parameters, particularly when the associated ground displacement signals are near the detection limit. Well-preserved geomorphic records of earthquakes offer additional insight into the mechanical behavior of the shallow crust and the kinematics of plate boundary systems. Together, geodetic and geologic observations of crustal deformation offer insight into the processes that drive seismic cycle deformation over a range of timescales. In this thesis, I examine several challenges associated with the inversion of earthquake source parameters from SAR data. Variations in atmospheric humidity, temperature, and pressure at the timing of SAR acquisitions result in spatially correlated phase delays that are challenging to distinguish from signals of real ground deformation. I characterize the impact of atmospheric noise on inferred earthquake source parameters following elevation-dependent atmospheric corrections. I analyze the spatial and temporal variations in the statistics of atmospheric noise from both reanalysis weather models and InSAR data itself. Using statistics that reflect the spatial heterogeneity of atmospheric characteristics, I examine parameter errors for several synthetic cases of fault slip on a basin-bounding normal fault. I show a decrease in uncertainty in fault

  11. Low-Frequency Earthquakes Associated with the Late-Interseismic Central Alpine Fault, Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Baratin, L. M.; Chamberlain, C. J.; Townend, J.; Savage, M. K.

    2016-12-01

    Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the state of stress of this major transpressive margin prior to a large (≥M8) earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault toward an anticipated large rupture. We initially work with a continuous seismic dataset collected between 2009 and 2012 from an array of short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates are used in an iterative matched-filter and stacking routine. This method allows the detection of similar signals and establishes LFE families with common locations. We thus generate a 36-month catalogue of 10,718 LFEs. The detections are then combined for each LFE family using phase-weighted stacking to yield a signal with the highest possible signal-to-noise ratio. We found phase-weighted stacking to be successful in increasing the number of LFE detections by roughly 20%. Phase-weighted stacking also provides cleaner phase arrivals of apparently impulsive nature, allowing more precise phase and polarity picks. We then compute improved non-linear earthquake locations using a 3D velocity model. We find LFEs to occur below the seismogenic zone at depths of 18-34 km, locating on or near the proposed deep extent of the Alpine Fault. Our next step is to estimate seismic source parameters by implementing a moment tensor inversion technique. Our focus is currently on generating a more extensive catalogue (spanning the years 2009 to 2016) using synthetic waveforms as primary templates with which to detect LFEs. Initial testing shows that this technique, paired with phase-weighted stacking, increases the number of LFE families and overall detected events roughly sevenfold. This catalogue should provide new insight into the geometry of the Alpine Fault and the prevailing stress
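    The matched-filter step can be illustrated on a single synthetic channel. The sketch below (invented template and noise; real LFE detection sums normalized correlations across a network of stations and components) slides a template along noisy data and flags lags where the normalized cross-correlation exceeds a threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy matched filter: detect repeats of an LFE-like template in noisy data
    template = np.convolve(rng.normal(size=40), np.hanning(15), mode="same")
    data = rng.normal(0.0, 0.5, 5000)
    onsets = [800, 2300, 4100]
    for i in onsets:
        data[i:i + template.size] += template

    # Normalized cross-correlation (Pearson r) at every lag
    def ncc(data, tpl):
        n = tpl.size
        out = np.empty(data.size - n + 1)
        t0 = (tpl - tpl.mean()) / tpl.std()
        for k in range(out.size):
            w = data[k:k + n]
            out[k] = np.dot(t0, w - w.mean()) / (n * w.std())
        return out

    cc = ncc(data, template)
    detections = np.where(cc > 0.6)[0]
    print(detections)
    ```

    Each buried copy of the template produces a correlation peak near 1 at its onset; in the real routine, newly detected events are stacked (here, phase-weighted) to build cleaner templates, and the procedure is iterated.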

  12. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  13. Structures of Xishan village landslide in Li County, Sichuan, China, inferred from high-frequency receiver functions of local earthquakes

    NASA Astrophysics Data System (ADS)

    Wei, Z.; Chu, R.

    2017-12-01

    Teleseismic receiver function methods are widely used to study deep structure beneath a seismic station. However, high-frequency receiver functions are difficult to extract from teleseismic waveforms because of the Earth's inelastic attenuation, so they are insufficient to constrain shallow structure. In this study, using local earthquake waveforms collected from 3 broadband stations deployed on the Xishan village landslide in Li County, Sichuan Province, we used a high-frequency receiver function method to study the shallow structure beneath the landslide. We developed a Vp-k (Vp/Vs) stacking method for receiver functions and combined it with the H-k stacking and waveform inversion methods to invert for the landslide's thickness, S-wave velocity, and average Vp/Vs ratio beneath these stations, and compared the thickness with borehole results. Our results show small-scale lateral variation of velocity structure, a 78-143 m/s lower S-wave velocity in the bottom layer, and a 2.4-3.1 Vp/Vs ratio in the landslide. The observed high Vp/Vs ratio and low S-wave velocity in the bottom layer of the landslide are consistent with low electrical resistivity and a water-rich bottom layer, suggesting weak shear strength and a potential danger zone in landslide h1. Our study suggests that local earthquake receiver functions can recover shallow velocity structure and supply seismic constraints for landslide catastrophe mitigation.
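    The H-k stacking idea behind such inversions can be sketched in a few lines: receiver-function amplitudes are stacked at the arrival times that a layer of thickness H and velocity ratio k = Vp/Vs predicts for the Ps conversion and its multiples, and the (H, k) maximizing the stack is taken as the estimate. All layer parameters, pulse amplitudes, and stacking weights below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical single layer: thickness H (km) and Vp/Vs ratio k
    Vp, p = 4.0, 0.06            # P velocity (km/s), ray parameter (s/km)
    H_true, k_true = 2.0, 2.6

    def phase_times(H, k):
        qs = np.sqrt((k / Vp) ** 2 - p ** 2)   # S vertical slowness (Vs = Vp/k)
        qp = np.sqrt((1 / Vp) ** 2 - p ** 2)   # P vertical slowness
        return H * (qs - qp), H * (qs + qp), 2 * H * qs   # Ps, PpPs, PpSs+PsPs

    # Synthetic receiver function: Gaussian pulses at the predicted times
    t = np.arange(0.0, 8.0, 0.01)
    rf = np.zeros_like(t)
    for tau, amp in zip(phase_times(H_true, k_true), (1.0, 0.5, -0.4)):
        rf += amp * np.exp(-0.5 * ((t - tau) / 0.05) ** 2)

    # H-k grid search: stack RF amplitudes at each candidate's predicted times
    Hs = np.arange(1.0, 3.01, 0.02)
    ks = np.arange(2.0, 3.01, 0.02)
    stack = np.zeros((Hs.size, ks.size))
    for a, H in enumerate(Hs):
        for b, k in enumerate(ks):
            tps, tpp, tss = phase_times(H, k)
            stack[a, b] = (0.6 * np.interp(tps, t, rf)
                           + 0.3 * np.interp(tpp, t, rf)
                           - 0.1 * np.interp(tss, t, rf))

    i, j = np.unravel_index(stack.argmax(), stack.shape)
    print(Hs[i], ks[j])
    ```

    The multiples break the H-k trade-off of the direct Ps conversion alone, which is why the stack peaks at a unique (H, k); the Vp-k variant described in the abstract applies the same stacking logic with Vp as a free parameter.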

  14. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the M = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
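    Contingency-table verification is straightforward to sketch. In this toy example (synthetic grid and an invented occurrence model, not the PI/RI maps), a forecast whose scores correlate with event locations yields hit rates that stay at or above the false alarm rates as the declaration threshold is swept, i.e., an ROC curve on or above the diagonal:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic test: forecast scores on a grid of cells vs. observed events
    n_cells = 200
    score = rng.uniform(0, 1, n_cells)            # forecast "hotspot" score per cell
    # Events made more likely in high-score cells for this synthetic example
    observed = rng.uniform(0, 1, n_cells) < 0.3 * score

    def contingency(threshold):
        forecast = score >= threshold
        a = np.sum(forecast & observed)           # hits
        b = np.sum(forecast & ~observed)          # false alarms
        c = np.sum(~forecast & observed)          # misses
        d = np.sum(~forecast & ~observed)         # correct negatives
        hit_rate = a / (a + c) if a + c else 0.0
        false_rate = b / (b + d) if b + d else 0.0
        return hit_rate, false_rate

    # ROC curve: sweep the threshold; a skillful forecast stays above the diagonal
    roc = [contingency(th) for th in np.linspace(0, 1, 21)]
    above = sum(h >= f for h, f in roc)
    print(above, len(roc))
    ```

    Comparing two forecasts amounts to comparing their ROC curves on the same observed catalog, which is how PI is judged against the RI null hypothesis.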

  15. Analysis of frequency shifting in seismic signals using Gabor-Wigner transform

    NASA Astrophysics Data System (ADS)

    Kumar, Roshan; Sumathi, P.; Kumar, Ashok

    2015-12-01

    A hybrid time-frequency method known as the Gabor-Wigner transform (GWT) is introduced in this paper for examining the time-frequency patterns of earthquake-damaged buildings. GWT is developed by combining the Gabor transform (GT) and the Wigner-Ville distribution (WVD). GT and WVD have been used separately on synthetic and recorded earthquake data to identify frequency shifting due to earthquake damage, but GT is prone to windowing effects and WVD suffers from cross-term interference. Hence, to obtain better clarity and to remove the cross terms (frequency interference), GT and WVD are judiciously combined, and the resultant GWT is used to identify frequency shifting. The synthetic seismic response of an instrumented building and real-time earthquake data recorded on the building were investigated using GWT. It is found that GWT offers good accuracy even for slow variations in frequency, good time-frequency resolution, and localized response. The presented results confirm the efficacy of GWT compared with GT and WVD used separately. Simulation results were quantified by Renyi entropy measures, and GWT is shown to be an adequate technique for identifying localized response for structural damage detection.

  16. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, that is, if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to those of the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components of a single station, which see the same apparent source time functions.
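    The inter-station phase coherence computation can be illustrated with a toy frequency-domain model: two events that share the same Green's functions (co-located point sources) keep a common cross-spectral phase at every station, so the unit phasors average coherently, while two separated events do not. All waveforms below are synthetic white-noise stand-ins, and the toy does not model the finite-source high-frequency decoherence that the method actually exploits:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, n_sta = 256, 8

    def record(stf, green):
        # Frequency-domain record: source time function convolved with Green's fn
        return np.fft.rfft(stf, 2 * n) * np.fft.rfft(green, 2 * n)

    stf1, stf2 = rng.normal(size=(2, n))          # two events' source time functions
    greens = rng.normal(size=(n_sta, n))          # Green's functions, event site A
    other = rng.normal(size=(n_sta, n))           # Green's functions, distant site B

    def phase_coherence(g1, g2):
        # Cross-spectral phase at each station, then average the unit phasors
        cross = np.array([record(stf1, a) * np.conj(record(stf2, b))
                          for a, b in zip(g1, g2)])
        phasors = cross / np.abs(cross)
        return np.abs(phasors.mean(axis=0))       # coherence vs frequency

    coh_same = phase_coherence(greens, greens).mean()    # co-located pair
    coh_diff = phase_coherence(greens, other).mean()     # separated pair
    print(round(coh_same, 2), round(coh_diff, 2))
    ```

    For the co-located pair, the Green's function enters the cross-spectrum only as the real, positive factor |G|², so the phase is identical at every station and the coherence is 1; for the separated pair, the station-dependent phases average toward ~1/√(n_sta).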

  17. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The near-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 project "Network of Research Infrastructures for European Seismology" (NERIES). The methodology consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. Implemented in the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time loss estimation is capable of incorporating regional variability and the sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, the inventory of physical and social elements exposed to earthquake hazard, and the associated vulnerability relationships.

  18. Adaptive Optimal Control Using Frequency Selective Information of the System Uncertainty With Application to Unmanned Aircraft.

    PubMed

    Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian

    2018-01-01

    A new efficient adaptive optimal control approach is presented in this paper, based on the indirect model reference adaptive control (MRAC) architecture, for improving the adaptation and tracking performance of an uncertain system. The system here accounts for both matched and unmatched unknown uncertainties, which can represent plant failures or damage as well as input-effectiveness failures. For adaptation of the unknown parameters of these uncertainties, a frequency selective learning approach is used. The idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is then used to augment the update law. The approach can adjust to a sudden change in system dynamics without depending on high adaptation gains and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties. Additionally, the controller of the MRAC system is designed using a new optimal control method: a linear quadratic regulator-based formulation for both output regulation and command tracking problems that provides a closed-form control solution. The proposed overall approach is applied to the control of the lateral dynamics of an unmanned aircraft to show its effectiveness.

  19. On near-source earthquake triggering

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2009-01-01

    When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ~ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.

  20. The 2015 Illapel earthquake, central Chile: A type case for a characteristic earthquake?

    NASA Astrophysics Data System (ADS)

    Tilmann, F.; Zhang, Y.; Moreno, M.; Saul, J.; Eckelmann, F.; Palo, M.; Deng, Z.; Babeyko, A.; Chen, K.; Baez, J. C.; Schurr, B.; Wang, R.; Dahm, T.

    2016-01-01

    On 16 September 2015, the MW = 8.2 Illapel megathrust earthquake ruptured the Central Chilean margin. Combining inversions of displacement measurements and seismic waveforms with high frequency (HF) teleseismic backprojection, we derive a comprehensive description of the rupture, which also predicts deep ocean tsunami wave heights. We further determine moment tensors and obtain accurate depth estimates for the aftershock sequence. The earthquake nucleated near the coast but then propagated to the north and updip, attaining a peak slip of 5-6 m. In contrast, HF seismic radiation is mostly emitted downdip of the region of intense slip and arrests earlier than the long period rupture, indicating smooth slip along the shallow plate interface in the final phase. A superficially similar earthquake in 1943 with a similar aftershock zone had a much shorter source time function, which matches the duration of HF seismic radiation in the recent event, indicating that the 1943 event lacked the shallow slip.

  1. Maximum magnitude earthquakes induced by fluid injection

    USGS Publications Warehouse

    McGarr, Arthur F.

    2014-01-01

    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated and brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation, and to have a Gutenberg-Richter magnitude distribution with a b-value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
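The moment bound stated above is easy to express in code. The sketch below assumes a typical crustal rigidity of 3×10^10 Pa and a hypothetical injected volume, and uses the standard Hanks-Kanamori magnitude conversion; neither number is taken from the paper's case histories.

```python
import math

RIGIDITY_PA = 3.0e10  # typical crustal shear modulus (assumed value)

def max_seismic_moment(injected_volume_m3, rigidity=RIGIDITY_PA):
    """McGarr's bound: maximum seismic moment <= rigidity * injected volume."""
    return rigidity * injected_volume_m3

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude, with M0 in newton-meters."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

# Hypothetical wastewater-disposal project injecting 100,000 m^3 of fluid:
mw_max = moment_magnitude(max_seismic_moment(1.0e5))  # roughly M 4.3
```

Doubling the injected volume raises the bound by only about 0.2 magnitude units, which is why the bound grows slowly even for large projects.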

  2. Constraints on the long-period moment-dip tradeoff for the Tohoku earthquake

    USGS Publications Warehouse

    Tsai, V.C.; Hayes, G.P.; Duputel, Z.

    2011-01-01

    Since the work of Kanamori and Given (1981), it has been recognized that shallow, pure dip-slip earthquakes excite long-period surface waves such that it is difficult to independently constrain the moment (M0) and the dip (δ) of the source mechanism, with only the product M0 sin(2δ) being well constrained. Because of this, it is often assumed that the primary discrepancies between the moments of shallow, thrust earthquakes are due to this moment-dip tradeoff. In this work, we quantify how severe this moment-dip tradeoff is depending on the depth of the earthquake, the station distribution, the closeness of the mechanism to pure dip-slip, and the quality of the data. We find that both long-period Rayleigh and Love wave modes have moment-dip resolving power even for shallow events, especially when stations are close to certain azimuths with respect to mechanism strike and when source depth is well determined. We apply these results to USGS W phase inversions of the recent M9.0 Tohoku, Japan earthquake and estimate the likely uncertainties in dip and moment associated with the moment-dip tradeoff. After discussing some of the important sources of moment and dip error, we suggest two methods for potentially improving this uncertainty. Copyright 2011 by the American Geophysical Union.
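The tradeoff can be illustrated numerically: if long-period data constrain only the product M0 sin(2δ), then solving with any assumed dip rescales the recovered moment. The Tohoku-like moment and dip values below are illustrative assumptions, not the paper's inversion results.

```python
import math

def implied_moment(m0_true, dip_true_deg, dip_assumed_deg):
    """Only M0*sin(2*dip) is well constrained at long periods, so an
    assumed dip rescales the recovered moment accordingly."""
    product = m0_true * math.sin(math.radians(2.0 * dip_true_deg))
    return product / math.sin(math.radians(2.0 * dip_assumed_deg))

def moment_magnitude(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

# Hypothetical Tohoku-like source: M0 = 4e22 N*m on a 10-degree dip.
for dip in (5.0, 10.0, 15.0, 20.0):
    m0 = implied_moment(4.0e22, 10.0, dip)
    mw = moment_magnitude(m0)
    # an assumed dip shallower than the true dip inflates the implied moment
```

For shallow thrusts the rescaling is strong because sin(2δ) changes rapidly near small dips, which is why a few degrees of dip error translates into a noticeable magnitude shift.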

  3. Multi-Array Back-Projections of The 2015 Gorkha Earthquake With Physics-Based Aftershock Calibrations

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhang, A.; Yagi, Y.

    2015-12-01

    The 2015 Mw 7.8 Nepal-Gorkha earthquake, which caused over 9,000 casualties, is the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process is well imaged by teleseismic MUSIC back-projections (BP). Here, we perform independent back-projections of high-frequency recordings (0.5-2 Hz) from the Australian seismic network (AU), the North America network (NA) and the European seismic network (EU), located in complementary orientations. Our results from all three arrays show a unilateral, linear rupture path extending east of the hypocenter, but the propagation directions and the inferred rupture speeds differ significantly among the arrays. To understand the spatial uncertainties of the BP analysis, we image four moderate-size (M 5-6) aftershocks based on timing corrections derived from the alignment of the initial P wave of the mainshock. We find that the apparent source locations inferred from BP are systematically biased along the source-array orientation, which can be explained by deviations of the 3D velocity structure from the 1D reference model (e.g., IASP91). We introduce a slowness error term in travel time as a first-order calibration that successfully mitigates the source-location discrepancies among the arrays. The calibrated BP results of the three arrays are mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s along the down-dip edge of the locked Himalayan thrust zone over ~150 km, in agreement with the narrow slip distribution inferred from finite source inversions.

  4. M ≥ 7.0 earthquake recurrence on the San Andreas fault from a stress renewal model

    USGS Publications Warehouse

    Parsons, Thomas E.

    2006-01-01

     Forecasting M ≥ 7.0 San Andreas fault earthquakes requires an assessment of their expected frequency. I used a three-dimensional finite element model of California to calculate volumetric static stress drops from scenario M ≥ 7.0 earthquakes on three San Andreas fault sections. The ratio of stress drop to tectonic stressing rate derived from geodetic displacements yielded recovery times at points throughout the model volume. Under a renewal model, stress recovery times on ruptured fault planes can be a proxy for earthquake recurrence. I show curves of magnitude versus stress recovery time for three San Andreas fault sections. When stress recovery times were converted to expected M ≥ 7.0 earthquake frequencies, they fit Gutenberg-Richter relationships well matched to observed regional rates of M ≤ 6.0 earthquakes. Thus a stress-balanced model permits large earthquake Gutenberg-Richter behavior on an individual fault segment, though it does not require it. Modeled slip magnitudes and their expected frequencies were consistent with those observed at the Wrightwood paleoseismic site if strict time predictability does not apply to the San Andreas fault.
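The step of converting recovery-time-derived rates into a Gutenberg-Richter forecast can be sketched as follows. The anchoring rate (one M ≥ 6.0 event per decade) and the b-value of 1 are hypothetical example values, not numbers from the study.

```python
import math

def gr_rate(mag, a_value, b_value=1.0):
    """Cumulative Gutenberg-Richter rate: N(>= mag) = 10**(a - b*mag) per year."""
    return 10.0 ** (a_value - b_value * mag)

# Hypothetical regional fit: one M >= 6.0 event per decade fixes the a-value,
# since 0.1/yr = 10**(a - b*6.0) with b = 1.
a_value = 6.0 + math.log10(0.1)

rate_m7 = gr_rate(7.0, a_value)   # events per year with M >= 7.0
recurrence_m7 = 1.0 / rate_m7     # ~100 yr between M >= 7.0 events at b = 1
```

With b = 1, each unit of magnitude costs a factor of ten in rate, which is the sense in which observed M ≤ 6.0 rates pin down the expected M ≥ 7.0 frequency.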

  5. Dual megathrust slip behaviors of the 2014 Iquique earthquake sequence

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen; Huang, Hui; Bürgmann, Roland; Ampuero, Jean Paul; Strader, Anne

    2015-02-01

    The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A Mw 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of northern Chile. This event was preceded by a long foreshock sequence including a 2-week-long migration of seismicity initiated by a Mw 6.7 earthquake. Repeating earthquakes were found among the foreshock sequence that migrated towards the mainshock hypocenter, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations of the recurrence times of the repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while repeaters that occurred both before and after the mainshock were in the area complementary to the mainshock rupture. The spatiotemporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip leading up to the mainshock and illuminates the distribution of postseismic afterslip. Various finite fault models indicate that the largest coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show an emergent onset of moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady increase of high frequency power (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the rupture expands in rich bursts along the rim of a semi-elliptical region with episodes of re-ruptures, suggesting delayed failure of asperities. The high-frequency rupture remains within an

  6. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    USGS Publications Warehouse

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  7. A new source process for evolving repetitious earthquakes at Ngauruhoe volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, A. D.; Neuberg, J.; Jousset, P.; Sherburn, S.

    2012-02-01

    Since early 2005, Ngauruhoe volcano has produced repeating low-frequency earthquakes with evolving waveforms and spectral features that became progressively enriched in higher-frequency energy during the period 2005 to 2009, with the trend reversing after that time. The earthquakes have also shown a seasonal cycle since January 2006, with peak numbers of events occurring in the spring and summer and lower numbers at other times. We explain these patterns by the excitation of a shallow two-phase water/gas or water/steam cavity having temporal variations in the volume fraction of bubbles. Such variations in two-phase systems are known to produce a large range of acoustic velocities (2-300 m/s) and corresponding changes in impedance contrast. We suggest that an increasing bubble volume fraction is caused by progressive heating of melt water in the resonant cavity system, which, in turn, promotes the scattering excitation of higher frequencies, explaining both the spectral shift and the seasonal dependence. We have conducted a constrained waveform inversion and grid search for moment, position and source geometry for the onsets of two example earthquakes occurring 17 and 19 January 2008, a time when events showed a frequency enrichment episode unfolding over a period of a few days. The inversion and associated error analysis, in conjunction with an earthquake phase analysis, show that the two earthquakes represent excitation of a single source position and geometry. The observed spectral changes from a stationary earthquake source and geometry suggest that an evolution in both near-source resonance and scattering is occurring over periods from days to months.

  8. Analysis of Earthquake Recordings Obtained from the Seafloor Earthquake Measurement System (SEMS) Instruments Deployed off the Coast of Southern California

    USGS Publications Warehouse

    Boore, D.M.; Smith, C.E.

    1999-01-01

    For more than 20 years, a program has been underway to obtain records of earthquake shaking on the seafloor at sites offshore of southern California, near oil platforms. The primary goal of the program is to obtain data that can help determine if ground motions at offshore sites are significantly different than those at onshore sites; if so, caution may be necessary in using onshore motions as the basis for the seismic design of oil platforms. We analyze data from eight earthquakes recorded at six offshore sites; these are the most important data recorded on these stations to date. Seven of the earthquakes were recorded at only one offshore station; the eighth event was recorded at two sites. The earthquakes range in magnitude from 4.7 to 6.1. Because of the scarcity of multiple recordings from any one event, most of the analysis is based on the ratio of spectra from vertical and horizontal components of motion. The results clearly show that the offshore motions have very low vertical motions compared to those from an average onshore site, particularly at short periods. Theoretical calculations find that the water layer has little effect on the horizontal components of motion but that it produces a strong spectral null on the vertical component at the resonant frequency of P waves in the water layer. The vertical-to-horizontal ratios for a few selected onshore sites underlain by relatively low shear-wave velocities are similar to the ratios from offshore sites for frequencies less than about one-half the water layer P-wave resonant frequency, suggesting that the shear-wave velocities beneath a site are more important than the water layer in determining the character of the ground motions at lower frequencies.
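The spectral null described above falls at the quarter-wavelength P-wave resonance of the water layer. The sketch below evaluates that frequency for a hypothetical water depth; the depth and the seawater P-wave speed are assumed illustrative values.

```python
WATER_VP_MS = 1500.0  # approximate P-wave speed in seawater, m/s (assumed)

def water_layer_resonance_hz(depth_m, vp=WATER_VP_MS):
    """Quarter-wavelength P-wave resonance of the water layer, where the
    spectral null on the vertical component is expected."""
    return vp / (4.0 * depth_m)

f_null = water_layer_resonance_hz(100.0)  # 3.75 Hz for a hypothetical 100 m depth
```

Deeper water pushes the null to lower frequency, so the frequency band in which offshore vertical motions are suppressed depends directly on water depth at the site.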

  9. The wicked problem of earthquake hazard in developing countries: the example of Bangladesh

    NASA Astrophysics Data System (ADS)

    Steckler, M. S.; Akhter, S. H.; Stein, S.; Seeber, L.

    2017-12-01

    Many developing nations in earthquake-prone areas confront a tough problem: how much of their limited resources to use mitigating earthquake hazards? This decision is difficult because it is unclear when an infrequent major earthquake may happen, how big it could be, and how much harm it may cause. This issue faces nations with profound immediate needs and ongoing rapid urbanization. Earthquake hazard mitigation in Bangladesh is a wicked problem. It is the world's most densely populated nation, with 160 million people in an area the size of Iowa. Complex geology and sparse data make assessing a possibly-large earthquake hazard difficult. Hence it is hard to decide how much of the limited resources available should be used for earthquake hazard mitigation, given other more immediate needs. Per capita GDP is $1200, so Bangladesh is committed to economic growth and resources are needed to address many critical challenges and hazards. In their subtropical environment, rural Bangladeshis traditionally relied on modest mud or bamboo homes. Their rapidly growing, crowded capital, Dhaka, is filled with multistory concrete buildings likely to be vulnerable to earthquakes. The risk is compounded by the potential collapse of services and accessibility after a major temblor. However, extensive construction as the population shifts from rural to urban provides opportunity for earthquake-risk reduction. While this situation seems daunting, it is not hopeless. Robust risk management is practical, even for developing nations. It involves recognizing uncertainties and developing policies that should give a reasonable outcome for a range of the possible hazard and loss scenarios. Over decades, Bangladesh has achieved a thousandfold reduction in risk from tropical cyclones by building shelters and setting up a warning system. Similar efforts are underway for earthquakes. Smart investments can be very effective, even if modest. Hence, we suggest strategies consistent with high

  10. Possible cause for an improbable earthquake: The 1997 MW 4.9 southern Alabama earthquake and hydrocarbon recovery

    USGS Publications Warehouse

    Gomberg, J.; Wolf, L.

    1999-01-01

    Circumstantial and physical evidence indicates that the 1997 MW 4.9 earthquake in southern Alabama may have been related to hydrocarbon recovery. Epicenters of this earthquake and its aftershocks were located within a few kilometers of active oil and gas extraction wells and two pressurized injection wells. Main shock and aftershock focal depths (2-6 km) are within a few kilometers of the injection and withdrawal depths. Strain accumulation at geologic rates sufficient to cause rupture at these shallow focal depths is not likely. A paucity of prior seismicity is difficult to reconcile with the occurrence of an earthquake of MW 4.9 and a magnitude-frequency relationship usually assumed for natural earthquakes. The normal-fault main-shock mechanism is consistent with reactivation of preexisting faults in the regional tectonic stress field. If the earthquake were purely tectonic, however, the question arises as to why it occurred on only the small fraction of a large, regional fault system coinciding with active hydrocarbon recovery. No obvious temporal correlation is apparent between the earthquakes and recovery activities. Although thus far little can be said quantitatively about the physical processes that may have caused the 1997 sequence, a plausible explanation involves the poroelastic response of the crust to extraction of hydrocarbons.

  11. Long-Term Impact of Earthquakes on Sleep Quality

    PubMed Central

    Tempesta, Daniela; Curcio, Giuseppe; De Gennaro, Luigi; Ferrara, Michele

    2013-01-01

    Purpose We investigated the impact of the 6.3 magnitude 2009 L’Aquila (Italy) earthquake on standardized self-report measures of sleep quality (Pittsburgh Sleep Quality Index, PSQI) and the frequency of disruptive nocturnal behaviours (Pittsburgh Sleep Quality Index-Addendum, PSQI-A) two years after the natural disaster. Methods Self-reported sleep quality was assessed in 665 L’Aquila citizens exposed to the earthquake and compared with a different sample (n = 754) of L'Aquila citizens tested 24 months before the earthquake. In addition, the sleep quality and disruptive nocturnal behaviours (DNB) of people exposed to the traumatic experience were compared with those of people who in the same period lived in areas between 40 and 115 km from the earthquake epicenter (n = 3574). Results The comparison between L’Aquila citizens before and after the earthquake showed a significant deterioration of sleep quality after exposure to the trauma. In addition, two years after the earthquake L'Aquila citizens showed the highest PSQI scores and the highest incidence of DNB compared with subjects living in the surroundings. Interestingly, above-threshold PSQI scores were found in participants living within 70 km of the epicenter, while trauma-related DNB was found in people living within 40 km. Multiple regressions confirmed that proximity to the epicenter is predictive of sleep disturbances and DNB, and also suggested a possible mediating effect of depression on PSQI scores. Conclusions The psychological effects of an earthquake may be much more pervasive and long-lasting than its building destruction, lasting for years and involving a much larger population. Reduced sleep quality and an increased frequency of DNB after two years may be a risk factor for the development of depression and posttraumatic stress disorder. PMID:23418478

  12. Long-term impact of earthquakes on sleep quality.

    PubMed

    Tempesta, Daniela; Curcio, Giuseppe; De Gennaro, Luigi; Ferrara, Michele

    2013-01-01

    We investigated the impact of the 6.3 magnitude 2009 L'Aquila (Italy) earthquake on standardized self-report measures of sleep quality (Pittsburgh Sleep Quality Index, PSQI) and the frequency of disruptive nocturnal behaviours (Pittsburgh Sleep Quality Index-Addendum, PSQI-A) two years after the natural disaster. Self-reported sleep quality was assessed in 665 L'Aquila citizens exposed to the earthquake and compared with a different sample (n = 754) of L'Aquila citizens tested 24 months before the earthquake. In addition, the sleep quality and disruptive nocturnal behaviours (DNB) of people exposed to the traumatic experience were compared with those of people who in the same period lived in areas between 40 and 115 km from the earthquake epicenter (n = 3574). The comparison between L'Aquila citizens before and after the earthquake showed a significant deterioration of sleep quality after exposure to the trauma. In addition, two years after the earthquake L'Aquila citizens showed the highest PSQI scores and the highest incidence of DNB compared with subjects living in the surroundings. Interestingly, above-threshold PSQI scores were found in participants living within 70 km of the epicenter, while trauma-related DNB was found in people living within 40 km. Multiple regressions confirmed that proximity to the epicenter is predictive of sleep disturbances and DNB, and also suggested a possible mediating effect of depression on PSQI scores. The psychological effects of an earthquake may be much more pervasive and long-lasting than its building destruction, lasting for years and involving a much larger population. Reduced sleep quality and an increased frequency of DNB after two years may be a risk factor for the development of depression and posttraumatic stress disorder.

  13. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and can also be induced by oil and gas production processes. In many oil/gas fields and hydrofracking operations, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components in the source moment tensors of hydraulic fracturing events when a full moment tensor source mechanism is assumed. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.

  14. Systematic Detection of Remotely Triggered Seismicity in Africa Following Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Ayorinde, A. O.; Peng, Z.; Yao, D.; Bansal, A. R.

    2016-12-01

    It is well known that large distant earthquakes can trigger micro-earthquakes/tectonic tremors during or immediately following their surface waves. Globally, triggered earthquakes have mostly been found in active plate boundary regions. It is not clear whether they could also occur within the stable intraplate regions of Africa as well as the active East African Rift Zone. In this study we conduct a systematic study of remote triggering in Africa following recent large earthquakes, including the 2004 Mw9.1 Sumatra and 2012 Mw8.6 Indian Ocean earthquakes. In particular, the 2012 Indian Ocean earthquake is the largest known strike-slip earthquake and triggered a global increase in earthquakes of magnitude larger than 5.5 as well as numerous micro-earthquakes/tectonic tremors around the world. The entire African region was examined for possible remotely triggered seismicity using seismic data downloaded from the Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) and the GFZ German Research Centre for Geosciences. We apply a 5-Hz high-pass filter to the continuous waveforms and visually identify high-frequency signals during and immediately after the large-amplitude surface waves. Spectrograms are computed as additional tools to identify triggered seismicity, and we further confirm candidates by statistical analysis comparing the high-frequency signals before and after the distant mainshocks. So far we have identified possible triggered seismicity in Botswana and northern Madagascar. This study could help to understand dynamic triggering in the diverse tectonic settings of the African continent.

  15. Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.

    2013-05-01

    Seismic hazard analysis has become a very important issue in the last few decades. New technologies and improved data availability have helped many scientists understand where and why earthquakes happen, the physics of earthquakes, and related questions, and the role of uncertainty in seismic hazard analysis is now recognized. However, how to handle the existing uncertainty remains a significant problem, and the same lack of information makes it difficult to quantify uncertainty accurately. Attenuation curves are usually obtained statistically, by regression analysis. Statistical and probabilistic analyses show overlapping results for the site coefficients. This overlap occurs not only at the border between two neighboring classes, but also across three or more classes. Although the analysis starts by classifying sites in geological terms, the resulting site coefficients are not cleanly separated by class. In the present study, this problem is addressed using fuzzy set theory: with membership functions, the ambiguities at the border between neighboring classes can be avoided. Fuzzy set theory is applied to southern California in the conventional way. The standard deviations that show the variation within each site class, obtained by fuzzy set theory and by the classical approach, are then compared. The results show that, when data for hazard assessment are insufficient, site classification based on fuzzy set theory yields smaller standard deviations than the classical approach, which is direct evidence of reduced uncertainty.
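The membership-function idea can be sketched with a trapezoidal function. The class boundaries below are hypothetical numbers chosen for illustration, not the study's calibrated values:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical overlapping site classes defined on a site coefficient:
# a value of 1.3 belongs partly to "class C" and partly to "class D",
# instead of being forced into a single crisp class at the boundary.
mu_C = trapezoid(1.3, 0.8, 1.0, 1.2, 1.4)   # class C membership, fading out
mu_D = trapezoid(1.3, 1.2, 1.4, 1.8, 2.0)   # class D membership, fading in
```

A boundary value thus carries partial membership in both neighboring classes, which is exactly the ambiguity that crisp classification discards.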

  16. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    USGS Publications Warehouse

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a b-value distribution with b = 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years.
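The b = 1 recurrence argument can be illustrated numerically. The calibration below simply anchors a Gutenberg-Richter relation to the stated one-per-1000-years M 7.8 rate; it is a sketch, not the study's actual moment-rate calculation:

```python
import math

def annual_rate(m, a, b=1.0):
    """Gutenberg-Richter recurrence: N(M >= m) = 10**(a - b*m) per year."""
    return 10.0 ** (a - b * m)

# Anchor 'a' so that M >= 7.8 recurs on average once per 1000 years
b = 1.0
a = math.log10(1.0 / 1000.0) + b * 7.8          # a = 4.8
rate_m78 = annual_rate(7.8, a, b)               # 0.001 per year
# With b = 1, each unit drop in magnitude implies 10x more events:
rate_m68 = annual_rate(6.8, a, b)               # 0.01 per year, one per century
```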

  17. Comparison of injury epidemiology between the Wenchuan and Lushan earthquakes in Sichuan, China.

    PubMed

    Hu, Yang; Zheng, Xi; Yuan, Yong; Pu, Qiang; Liu, Lunxu; Zhao, Yongfan

    2014-12-01

    We aimed to compare injury characteristics and the timing of admissions and surgeries in the Wenchuan earthquake in 2008 and the Lushan earthquake in 2013. We retrospectively compared the admission and operating times and injury profiles of patients admitted to our medical center during both earthquakes. We also explored the relationship between seismic intensity and injury type. The time from earthquake onset to the peak in patient admissions and surgeries differed between the 2 earthquakes. In the Wenchuan earthquake, injuries due to being struck by objects or being buried were more frequent than other types of injuries, and more patients suffered injuries of the extremities than thoracic injuries or brain trauma. In the Lushan earthquake, falls were the most common injury, and more patients suffered thoracic trauma or brain injuries. The types of injury seemed to vary with seismic intensity, whereas the anatomical location of the injury did not. Greater seismic intensity of an earthquake is associated with longer delay between the event and the peak in patient admissions and surgeries, higher frequencies of injuries due to being struck or buried, and lower frequencies of injuries due to falls and injuries to the chest and brain. These insights may prove useful for planning rescue interventions in trauma centers near the epicenter.

  18. Statistical Evaluations of Variations in Dairy Cows’ Milk Yields as a Precursor of Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Hayakawa, Masashi; Asano, Tomokazu; Ohtani, Nobuyo; Ohta, Mitsuaki

    2017-01-01

    Simple Summary: There are many reports of abnormal changes occurring in various natural systems prior to earthquakes. Unusual animal behavior is one of these abnormalities; however, there are few objective indicators, and their reliability has so far remained uncertain. In a previous case study, we found that the milk yields of dairy cows decreased prior to an earthquake. In this study, we examined the reliability of decreases in milk yield as an earthquake precursor using long-term observation data. Milk yields decreased approximately three weeks before earthquakes. We conclude that dairy cow milk yields are applicable as an objectively observable form of unusual animal behavior prior to earthquakes, and that dairy cows respond to some physical or chemical precursor of earthquakes. Abstract: Previous studies have provided quantitative data regarding unusual animal behavior prior to earthquakes; however, few studies include long-term observational data. Our previous study revealed that the milk yields of dairy cows decreased prior to an extremely large earthquake. To clarify whether milk yields decrease prior to earthquakes, we examined the relationship between earthquakes of various magnitudes and daily milk yields over a one-year observation period. Cross-correlation analyses revealed a significant negative correlation between earthquake occurrence and milk yields approximately three weeks beforehand. Approximately a week and a half beforehand, a positive correlation appeared, and the correlation gradually receded to zero as the day of the earthquake approached. Future studies using data from a longer observation period are needed, because this study only considered ten earthquakes and therefore does not have strong statistical power. Additionally, we compared the milk yields with subionospheric very low frequency/low frequency (VLF/LF) propagation data indicating ionospheric perturbations.
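The lagged cross-correlation analysis can be sketched with synthetic data. The event days, yield scale, and three-week dip below are toy assumptions used only to show how a negative correlation at a 21-day lead would appear:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365                                   # one-year observation period
quakes = np.zeros(n)
quake_days = [100, 200, 300]              # hypothetical earthquake days
for d in quake_days:
    quakes[d] = 1.0

yields = 30.0 + rng.normal(0.0, 0.1, n)   # daily milk yield (toy scale)
for d in quake_days:
    yields[d - 21] -= 5.0                 # dip three weeks before each event

def lagged_corr(y, q, lag):
    """Correlate yields at day t with earthquake occurrence at day t+lag."""
    if lag == 0:
        return np.corrcoef(y, q)[0, 1]
    return np.corrcoef(y[: n - lag], q[lag:])[0, 1]

c21 = lagged_corr(yields, quakes, 21)     # strongly negative
c0 = lagged_corr(yields, quakes, 0)       # near zero
```

The negative peak at a 21-day lead mirrors the pattern the study reports; a real analysis would scan all lags and assess significance against the noise level.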

  19. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  20. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    NASA Astrophysics Data System (ADS)

    Hayes, Gavin P.

    2017-06-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  1. Exploring Earthquakes in Real-Time

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Kafka, A. L.; Coleman, B.; Taber, J. J.

    2013-12-01

    Earthquakes capture the attention of students and inspire them to explore the Earth. Adding the ability to view and explore recordings of significant and newsworthy earthquakes in real-time makes the subject even more compelling. To address this opportunity, the Incorporated Research Institutions for Seismology (IRIS), in collaboration with Moravian College, developed 'jAmaSeis', a cross-platform application that enables students to access real-time earthquake waveform data. Students can watch as the seismic waves are recorded on their computer, and can be among the first to analyze the data from an earthquake. jAmaSeis facilitates student-centered investigations of seismological concepts using either a low-cost educational seismograph or streamed data from other educational seismographs or from any seismic station that sends data to the IRIS Data Management System. After an earthquake, students can analyze the seismograms to determine characteristics of earthquakes such as time of occurrence, distance from the epicenter to the station, magnitude, and location. The software has been designed to provide graphical clues to guide students in the analysis and assist in their interpretations. Since jAmaSeis can simultaneously record up to three stations from anywhere on the planet, there are numerous opportunities for student driven investigations. For example, students can explore differences in the seismograms from different distances from an earthquake and compare waveforms from different azimuthal directions. Students can simultaneously monitor seismicity at a tectonic plate boundary and in the middle of the plate regardless of their school location. This can help students discover for themselves the ideas underlying seismic wave propagation, regional earthquake hazards, magnitude-frequency relationships, and the details of plate tectonics. The real-time nature of the data keeps the investigations dynamic, and offers students countless opportunities to explore.

  2. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
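The count-to-probability step can be sketched as follows. The Weibull shape parameter and the mean-count calibration below are illustrative assumptions, not the authors' fitted values:

```python
import math

def large_event_probability(n_small, n_mean, beta=1.5):
    """Probability that the next large event has occurred by the time
    n_small small events have accumulated since the last large one,
    modeled with a Weibull law whose mean count is n_mean."""
    lam = n_mean / math.gamma(1.0 + 1.0 / beta)   # scale set from the mean
    return 1.0 - math.exp(-((n_small / lam) ** beta))

p_early = large_event_probability(50, 100)    # count still below the mean
p_late = large_event_probability(150, 100)    # count past the mean
```

The probability starts at zero immediately after a large event and rises monotonically with the accumulated small-event count, which is the essential behavior of the method.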

  3. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small to moderate size earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.

  4. Why the New Madrid earthquakes are M 7–8 and the Charleston earthquake is ∼M 7

    USGS Publications Warehouse

    Cramer, Chris H.; Boyd, Oliver

    2014-01-01

    Estimates of magnitudes of large historical earthquakes are an essential input to and can seriously affect seismic‐hazard estimates. The earthquake‐intensity observations, modified Mercalli intensities (MMI), and assigned magnitudes M of the 1811–1812 New Madrid events have been reinterpreted several times in the last decade and have been a source of controversy in making seismic‐hazard estimates in the central United States. Observations support the concept that the larger the earthquake, the greater the maximum‐felt distance. For the same crustal attenuation and local soil conditions, magnitude should be the main influence on intensity values at large distances. We apply this concept by comparing the mean MMI at distances of 600–1200 km for each of the four largest New Madrid 1811–1812 earthquakes, the 1886 Charleston, South Carolina, earthquake, the 1929 M 7.2 Grand Banks earthquake, and the 2001 M 7.6 Bhuj, India, earthquake. We fit the intensity observations using the form MMI=A+C×dist−0.8×log(dist) to better define intensity attenuation in eastern North America (ENA). The intensity attenuation in cratonic India differs from ENA and is corrected to ENA using both the above estimate and published intensity relations. We evaluate source, marine geophysical, Q, and stress‐drop information, as well as a 1929 Milne–Shaw record at Chicago to confirm that the 1929 Grand Banks earthquake occurred in ENA crust. Our direct comparison of mean intensities beyond 600 km suggests M 7.5, 7.3, 7.7, and 6.9 for the three New Madrid 1811–1812 mainshocks and the largest aftershock and M 7.0 for the 1886 Charleston, South Carolina, earthquake, with an estimated uncertainty of 0.3 units at the 95% confidence level (based on a Monte Carlo analysis). Our mean New Madrid and Charleston mainshock magnitudes are similar to those of Bakun and Hopper (2004) and are much higher than those of Hough and Page (2011) for New Madrid.
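Fitting the stated functional form is a linear least-squares problem once the fixed log term is moved to the left-hand side. The coefficient values below are synthetic, and treating log as base-10 is an assumption based on common intensity-attenuation practice:

```python
import numpy as np

def fit_intensity_attenuation(dist_km, mmi):
    """Least-squares fit of MMI = A + C*dist - 0.8*log10(dist).
    A absorbs the magnitude-dependent term; C is the linear decay rate."""
    G = np.column_stack([np.ones_like(dist_km), dist_km])
    y = mmi + 0.8 * np.log10(dist_km)      # move the fixed log term across
    (A, C), *_ = np.linalg.lstsq(G, y, rcond=None)
    return A, C

# Synthetic check with known (hypothetical) coefficients
dist = np.linspace(100.0, 1200.0, 50)
mmi = 8.0 - 0.003 * dist - 0.8 * np.log10(dist)
A, C = fit_intensity_attenuation(dist, mmi)
```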

  5. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include temporal behavior of seismic and tectonic moment rates; shape of the earthquake magnitude distribution; upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; value of crustal rigidity; and relation between faults at depth and their surface fault traces, to name just a few. In this report I'll estimate the quantitative implications for estimating large earthquake rates. Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes.

  6. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_{λ } ≥ 7.0 and M_{λ } ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_{σ } ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_{λ } ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_{λ } ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_{λ } ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_{λ } ≥ 7.0 in each catalog and
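The natural-time bookkeeping described above (interevent counts of small events between large events) can be sketched directly; the toy catalog below is hypothetical, and the completeness and large-event thresholds follow the values stated in the abstract:

```python
def interevent_counts(magnitudes, m_small=5.1, m_large=7.0):
    """Natural-time statistics: count events with M >= m_small occurring
    between successive events with M >= m_large (chronological order)."""
    counts, n = [], 0
    for m in magnitudes:
        if m >= m_large:
            counts.append(n)   # close the current natural-time interval
            n = 0
        elif m >= m_small:
            n += 1
    return counts

# Toy chronological catalog: two small events, an M7.1, three small
# events, an M8.0, then an M7.2 with no intervening small events.
counts = interevent_counts([5.2, 5.5, 7.1, 5.3, 5.1, 5.4, 8.0, 7.2])
```

The resulting counts are what the nowcasting method feeds into the Weibull fit; note that aftershock declustering is unnecessary because an aftershock with M >= 7.0 simply closes its own (short) natural-time interval.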

  7. Improvements to Earthquake Location with a Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Gökalp, Hüseyin

    2018-01-01

    In this study, improvements to the earthquake location method were investigated using a fuzzy logic approach proposed by Lin and Sanford (Bull Seismol Soc Am 91:82-93, 2001). The method has certain advantages compared to inverse methods in terms of eliminating the uncertainties of arrival times and reading errors. In this study, adopting this approach, epicentral locations were determined based on the results of a fuzzy logic space concerning the uncertainties in the velocity models. To map the uncertainties in arrival times into the fuzzy logic space, a trapezoidal membership function was constructed by directly using the travel time difference between the two stations for the P- and S-arrival times instead of the P- and S-wave velocity models, eliminating the need for information concerning the velocity structure of the study area. The results showed that this method worked most effectively when earthquakes occurred away from a network or when the arrival time data contained phase reading errors. In this study, to resolve the problems related to determining the epicentral locations of the events, a forward modeling method like the grid search technique was used by applying different logical operations (i.e., intersection, union, and their combination) with a fuzzy logic approach. The event locations were determined from the fuzzy logic outputs by searching over a gridded region. Defuzzifying only the grid points with a membership value of 1, obtained by normalizing the maximum fuzzy output values, produced more reliable epicentral locations than the other approaches. In addition, throughout the process, the center-of-gravity method was used as a defuzzification operation.
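A minimal grid-search sketch of this style of fuzzy location follows. It uses a triangular (rather than trapezoidal) membership on arrival-time differences, a constant velocity, and a toy network, all of which are simplifying assumptions and not the paper's actual configuration:

```python
import numpy as np

def locate_fuzzy(stations, t_obs, v, grid_pts, tol=0.2):
    """Grid-search epicenter: compare observed and predicted arrival-time
    differences between station pairs using a triangular membership of
    half-width tol (s), combine memberships with min (fuzzy intersection),
    and defuzzify by center of gravity."""
    mu = np.ones(len(grid_pts))
    n = len(stations)
    for i in range(n):
        for j in range(i + 1, n):
            d_i = np.linalg.norm(grid_pts - stations[i], axis=1)
            d_j = np.linalg.norm(grid_pts - stations[j], axis=1)
            dt_pred = (d_i - d_j) / v
            dt_obs = t_obs[i] - t_obs[j]       # origin time cancels here
            m = np.clip(1.0 - np.abs(dt_pred - dt_obs) / tol, 0.0, 1.0)
            mu = np.minimum(mu, m)             # fuzzy intersection
    # Center-of-gravity defuzzification over the fuzzy output
    return (grid_pts * mu[:, None]).sum(axis=0) / mu.sum()

# Toy network on a 0.5-km grid; true epicenter at (3, 4) km, v = 6 km/s
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_epi = np.array([3.0, 4.0])
t_obs = np.linalg.norm(stations - true_epi, axis=1) / 6.0
gx, gy = np.meshgrid(np.arange(0.0, 10.01, 0.5), np.arange(0.0, 10.01, 0.5))
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
loc = locate_fuzzy(stations, t_obs, 6.0, grid_pts)
```

Using time differences between station pairs removes the unknown origin time, which is the same reason the paper works with travel-time differences rather than absolute arrivals.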

  8. Modeling Nonlinear Site Response Uncertainty in Broadband Ground Motion Simulations for the Los Angeles Basin

    NASA Astrophysics Data System (ADS)

    Assimaki, D.; Li, W.; Steidl, J. M.; Schmedes, J.

    2007-12-01

    The assessment of strong-motion site response is of great significance, both for mitigating seismic hazard and for performing detailed analyses of earthquake source characteristics. There currently exists, however, a large degree of uncertainty concerning the mathematical model to be employed for the computationally efficient evaluation of local site effects, and the site investigation program necessary to evaluate the nonlinear input model parameters and ensure cost-effective predictions; and while site response observations may provide critical constraints on interpretation methods, the lack of a statistically significant number of in-situ strong-motion records prevents statistical analyses from being conducted and uncertainties from being quantified based entirely on field data. In this paper, we combine downhole observations and broadband ground-motion synthetics for characteristic site conditions in the Los Angeles Basin, and investigate the variability in ground-motion estimation introduced by the site response assessment methodology. In particular, site-specific regional velocity and attenuation structures are first compiled using near-surface geotechnical data collected at downhole geotechnical arrays, low-strain velocity and attenuation profiles obtained at these sites by inversion of weak-motion records, and the crustal velocity structure at the corresponding locations obtained from the Southern California Earthquake Center Community Velocity Model. Broadband ground motions are then simulated by means of a hybrid low/high-frequency finite source model with correlated random parameters for rupture scenarios of weak, medium and large magnitude events (M = 3.5-7.5). Observed estimates of site response at the stations of interest are first compared to the ensemble of approximate and incremental nonlinear site response models. Parametric studies are next conducted for each fixed magnitude (fault geometry) scenario by varying the source-to-site distance and

  9. Analysis of post-earthquake landslide activity and geo-environmental effects

    NASA Astrophysics Data System (ADS)

    Tang, Chenxiao; van Westen, Cees; Jetten, Victor

    2014-05-01

    Large earthquakes can cause huge losses to human society due to ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas affected by such large earthquakes, the landslide threat continues after the event, as co-seismic landslides may be reactivated by high-intensity rainfall. Huge amounts of landslide material remain on the slopes, leading to a high frequency of landslides and debris flows after earthquakes, which threaten lives and create great difficulties for post-seismic reconstruction in the affected regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation works are difficult. The area hit by the Mw 7.9 Wenchuan earthquake of 2008, in Sichuan province, China, shows some typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas that have been hit by a major earthquake, taking the area affected by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as numerical simulation of the initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will make use of high-resolution stereo satellite images, UAVs and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor changes in loose sediments and post-seismic landslide activity. A debris flow initiation model that incorporates the volume of source materials, vegetation re-growth, and the intensity-duration of the triggering precipitation, and that evaluates

  10. Responses of a tall building in Los Angeles, California as inferred from local and distant earthquakes

    USGS Publications Warehouse

    Çelebi, Mehmet; Ulusoy, Hasan; Nakata, Nori

    2016-01-01

    The increasing inventory of tall buildings in the United States and elsewhere may be subjected to motions generated by near and far seismic sources that cause long-period effects. Multiple sets of records that exhibited such effects were retrieved from tall buildings in Tokyo and Osaka, ~350 km and 770 km from the epicenter of the 2011 Tohoku earthquake. In California, very few tall buildings have been instrumented. An instrumented 52-story building in downtown Los Angeles recorded seven local and distant earthquakes. Spectral and system identification methods reveal significant low frequencies of interest (~0.17 Hz, 0.56 Hz and 1.05 Hz). These frequencies compare well with those computed by transfer functions; however, small variations are observed between the significant low frequencies for each of the seven earthquakes. The torsional and translational frequencies are very close and are coupled. A beating effect is observed in the records of at least two of the seven earthquakes.

  11. Energy Partition and Variability of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kanamori, H.

    2003-12-01

    During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for rupture initiation, evolution, and the excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, ΔW0; estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW>7 and MW>4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, an estimate of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated. EG: Seismologically, EG is the energy

  12. Modelling the elements of country vulnerability to earthquake disasters.

    PubMed

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of a given character to have different impacts on the affected region. This research focuses on the appropriate criteria for identifying the severity of major earthquake disasters based on some key observed symptoms. Accordingly, the article presents a methodology for identification and relative quantification of severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  13. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world under consideration. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probabilities of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme-earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
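
    Given an enlarged sample of scenario intensities, a probability of exceedance is simply an empirical tail fraction. A sketch under an assumed lognormal intensity distribution; the parameters, threshold, and sample size are hypothetical, not values from the paper:

```python
import random

def prob_exceedance(samples, threshold):
    """Empirical probability that a scenario intensity exceeds a threshold."""
    return sum(1 for s in samples if s > threshold) / len(samples)

random.seed(0)
# Hypothetical intensity sample (e.g., PGA in g) enlarged by Monte Carlo;
# the lognormal parameters and threshold are illustrative only.
samples = [random.lognormvariate(-1.0, 0.5) for _ in range(100_000)]
p = prob_exceedance(samples, threshold=0.5)
print(p)
```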

  14. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  15. Evidence for remotely triggered micro-earthquakes during salt cavern collapse

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Rohmer, J.

    2012-04-01

    Micro-seismicity is a good indicator of the spatio-temporal evolution of the physical properties of rocks prior to catastrophic events like volcanic eruptions or landslides, and may be triggered by a number of causes, including the dynamic characteristics of the processes in play and/or external forces. Micro-earthquake triggering has been the subject of intense research in recent years, and our work contributes further evidence of the possible triggering of micro-earthquakes by remote large earthquakes. We show evidence of triggered micro-seismicity in the vicinity of an underground salt cavern prone to collapse by a remote M~7.2 earthquake, which occurred ~12,000 kilometres away. We demonstrate the near-critical state of the cavern before the collapse by means of 2D axisymmetric elastic finite-element simulations. Pressure had been lowered in the cavern by operations pumping brine out of it. We demonstrate that a very small stress increase would have been sufficient to break the overburden. High-dynamic broadband records reveal a remarkable time-correlation between a dramatic increase of the local high-frequency micro-seismicity rate, associated with the break of the stiffest layer stabilizing the overburden, and the passage of low-frequency remote seismic waves, including body, Love and Rayleigh surface waves. Stress oscillations due to the seismic waves exceeded the strength required for the rupture of the complex medium made of brine and rock, triggering micro-earthquakes, damaging the overburden and eventually leading to collapse of the salt cavern. The increment of stress necessary for the failure of a Dolomite layer is of the same order of magnitude as the maximum dynamic stress observed during the passage of the earthquake waves. On this basis, we discuss the possible contribution of the low-frequency Love and Rayleigh surface waves.
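
    The dynamic stress carried by a passing seismic wave can be gauged with the standard plane-wave approximation sigma ≈ G·v/c (shear modulus times particle velocity over phase velocity). A sketch with hypothetical values, not measurements from this study:

```python
def dynamic_stress(shear_modulus_pa, particle_velocity_ms, phase_velocity_ms):
    """Peak dynamic stress carried by a passing seismic wave,
    sigma ~ G * v / c (plane-wave approximation)."""
    return shear_modulus_pa * particle_velocity_ms / phase_velocity_ms

# Hypothetical numbers: G = 30 GPa, peak ground velocity = 1 mm/s,
# Rayleigh-wave phase velocity = 3.5 km/s
print(dynamic_stress(30e9, 1e-3, 3500.0))  # a few kPa
```

    Even kilopascal-level oscillations can matter when, as argued above, the system sits within a small stress increment of failure.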

  16. Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Allen, A. A.; Chen, X.

    2017-12-01

    The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observations of the rupture properties are difficult to achieve due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides opportunities to look at these earthquakes in detail. Here we look at the transform plate boundary fault located in the MTJ, and measure source parameters of Mw≥4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity for the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as the magnitude difference compared to the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, a Mw 5.72 event that occurred in January of 2015, and select two EGFs, a Mw 1.75 and a Mw 1.73, located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 seconds and a rupture length of about 2.78 km. The earthquake ruptured westward along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event shows deviation from a pure Brune source model following the definition from Uchide and Imanishi [2016], likely due to near-field recordings with rupture complexity. We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and further analyze three other Mw≥4 earthquakes
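
    The azimuthal anti-correlation between duration and corner frequency is the classic signature of unilateral rupture directivity, tau(theta) = L/vr − (L/c)·cos(theta − phi). A sketch of that relation; only the 2.78 km length and the westward direction come from the abstract, while the rupture and phase velocities are hypothetical:

```python
import math

def apparent_duration(length_km, v_rupture, v_phase, azimuth_deg, rupture_az_deg):
    """Apparent source duration of a unilateral rupture of length L:
    tau(theta) = L/vr - (L/c) * cos(theta - phi).
    Stations in the rupture direction see a shorter pulse (and hence a
    higher corner frequency), as described above."""
    dtheta = math.radians(azimuth_deg - rupture_az_deg)
    return length_km / v_rupture - (length_km / v_phase) * math.cos(dtheta)

L, vr, c = 2.78, 2.5, 4.0   # km, km/s; velocities are assumed values
phi = 270.0                 # westward rupture, as inferred above
print(apparent_duration(L, vr, c, azimuth_deg=270.0, rupture_az_deg=phi))  # shortest
print(apparent_duration(L, vr, c, azimuth_deg=90.0, rupture_az_deg=phi))   # longest
```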

  17. Development of an Earthquake Impact Scale

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Marano, K. D.; Jaiswal, K. S.

    2009-12-01

    With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $10M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake-resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. 
In this analysis, we make an attempt
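
    The dual thresholds described above map directly to code. The numeric thresholds are the ones stated in the abstract; the boundary handling and the rule of taking the more severe of the two alerts are assumptions for illustration:

```python
def alert_level(fatalities=None, losses_usd=None):
    """EIS-style alert color from estimated fatalities and/or losses.
    Thresholds follow the abstract (1/100/1000 fatalities, $1M/$10M/$1B);
    exact boundary handling and the combination rule are assumed."""
    def level(x, thresholds):
        colors = ["green", "yellow", "orange", "red"]
        return colors[sum(1 for t in thresholds if x >= t)]
    levels = []
    if fatalities is not None:
        levels.append(level(fatalities, [1, 100, 1000]))
    if losses_usd is not None:
        levels.append(level(losses_usd, [1e6, 1e7, 1e9]))
    order = {"green": 0, "yellow": 1, "orange": 2, "red": 3}
    # Report the more severe of the two criteria when both are available.
    return max(levels, key=order.get) if levels else "green"

print(alert_level(fatalities=250))   # -> orange
print(alert_level(losses_usd=5e7))   # -> orange
```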

  18. Challenges to communicate risks of human-caused earthquakes

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2014-12-01

    The awareness of natural hazards has been up-trending in recent years. In particular, this is true for earthquakes, which increase in frequency and magnitude in regions that normally do not experience seismic activity. In fact, one of the major concerns for many communities and businesses is that humans today seem to cause earthquakes due to large-scale shale gas production, dewatering and flooding of mines and deep geothermal power production. Accordingly, without opposing any of these technologies it should be a priority of earth scientists who are researching natural hazards to communicate earthquake risks. This presentation discusses the challenges that earth scientists are facing to properly communicate earthquake risks, in light of the fact that human-caused earthquakes are an environmental change affecting only some communities and businesses. Communication channels may range from research papers, books and class room lectures to outreach events and programs, popular media events or even social media networks.

  19. Site Response for Micro-Zonation from Small Earthquakes

    NASA Astrophysics Data System (ADS)

    Gospe, T. B.; Hutchings, L.; Liou, I. Y. W.; Jarpe, S.

    2017-12-01

    We have developed a method to obtain absolute geologic site response from small earthquakes using inexpensive instrumentation that enables us to perform micro-zonation inexpensively and in a short amount of time. We record small earthquakes (M<3) at several sites simultaneously and perform inversion to obtain actual absolute site response. The key to the inversion is that recordings at several stations from an earthquake share the same moment, source corner frequency and whole-path Q effect on their spectra, but have individual Kappa and spectral amplification as a function of frequency. When these source and path effects are removed and corrections for different propagation distances are performed, we are left with actual site response. We develop site response functions from 0.5 to 25.0 Hz. Cities situated near active and dangerous faults experience small earthquakes on a regular basis. We typically record at least ten small earthquakes over time to stabilize the uncertainty. Of course, dynamic soil modeling is necessary to scale our linear site response to the non-linear regime for large earthquakes. Our instrumentation is very inexpensive and virtually disposable, and can be placed throughout a city at high density. Operation only requires turning on a switch, and data processing is automated to minimize human labor. We have installed a test network and implemented our full methodology in upper Napa Valley, California, where there is variable geology, nearby rock outcrop sites, and a supply of small earthquakes from the nearby Geysers development area. We tested several methods of obtaining site response. We found that rock sites have a site response of their own and distort the site response estimate based upon spectral ratios with soil sites. Also, rock sites may not even be available near all sites throughout a city. Further, H/V site response estimates from earthquakes are marginally better, but vertical motion also has a site response of its own. H
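
    The inversion rests on dividing out the spectral terms common to all stations (source spectrum, whole-path attenuation), leaving the site term. A sketch using a Brune omega-squared source model with hypothetical parameter values; this illustrates the correction step, not the authors' full inversion:

```python
import math

def brune_source(f, omega0, fc):
    """Brune omega-squared source spectrum (displacement amplitude)."""
    return omega0 / (1.0 + (f / fc) ** 2)

def site_response(freqs, observed, omega0, fc, t_star):
    """Divide out the source spectrum and whole-path attenuation
    exp(-pi*f*t*) common to all stations; the remainder is site
    amplification (including the station's kappa)."""
    resp = []
    for f, a in zip(freqs, observed):
        path = math.exp(-math.pi * f * t_star)
        resp.append(a / (brune_source(f, omega0, fc) * path))
    return resp

freqs = [0.5, 1.0, 5.0, 10.0, 25.0]      # Hz, the band used in the abstract
observed = [1.9, 1.7, 0.52, 0.18, 0.02]  # made-up spectral amplitudes
print(site_response(freqs, observed, omega0=2.0, fc=8.0, t_star=0.02))
```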

  20. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.

  1. Earthquake dating: an application of carbon-14 atom counting.

    PubMed

    Tucker, A B; Woefli, W; Bonani, G; Suter, M

    1983-03-18

    Milligram-sized specimens of detrital charcoal from soil layers associated with prehistoric earthquakes on the Wasatch fault in Utah have been dated by direct atom counting of carbon-14 with a tandem Van de Graaff accelerator. The measured ratios of carbon-14 to carbon-12 correspond to ages of 7800, 8800, and 9000 years with uncertainties of +/- 600 years.
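
    Ages follow from the measured ratio through the standard conventional-age relation t = −8033·ln(R), where 8033 yr is the Libby mean life and R is the sample-to-modern 14C ratio. The 0.335 ratio below is chosen to illustrate an age of roughly 8800 yr, comparable to the charcoal ages above; it is not a value from the paper:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon ages use this value

def radiocarbon_age(ratio_sample_to_modern):
    """Conventional radiocarbon age from the measured 14C/12C ratio,
    normalized to the modern standard: t = -8033 * ln(R)."""
    return -LIBBY_MEAN_LIFE * math.log(ratio_sample_to_modern)

# A sample retaining ~33.5% of modern 14C dates to roughly 8800 yr.
print(round(radiocarbon_age(0.335)))
```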

  2. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
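
    For the time-independent (Poisson) branch of the model, the probability of one or more events in t years follows 1 − exp(−λt). As a sketch of scale, the code below treats the quoted 46% 30-yr value for M ≥ 7.5 as if it were Poissonian (an approximation; UCERF 2's quoted number is time-dependent) and recovers the equivalent annual rate:

```python
import math

def prob_one_or_more(annual_rate, years=30.0):
    """Poisson (time-independent) probability of at least one event in `years`."""
    return 1.0 - math.exp(-annual_rate * years)

def implied_annual_rate(prob, years=30.0):
    """Invert the Poisson relation to recover the equivalent annual rate."""
    return -math.log(1.0 - prob) / years

rate = implied_annual_rate(0.46)
print(rate)                    # ~0.0205 events/yr, i.e. one per ~49 yr
print(prob_one_or_more(rate))  # recovers 0.46 (up to floating point)
```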

  3. The earthquake potential of the New Madrid seismic zone

    USGS Publications Warehouse

    Tuttle, Martitia P.; Schweig, Eugene S.; Sims, John D.; Lafferty, Robert H.; Wolf, Lorraine W.; Haynes, Marion L.

    2002-01-01

    The fault system responsible for New Madrid seismicity has generated temporally clustered very large earthquakes in A.D. 900 ± 100 years and A.D. 1450 ± 150 years as well as in 1811–1812. Given the uncertainties in dating liquefaction features, the time between the past three New Madrid events may be as short as 200 years and as long as 800 years, with an average of 500 years. This advance in understanding the Late Holocene history of the New Madrid seismic zone and thus, the contemporary tectonic behavior of the associated fault system was made through studies of hundreds of earthquake-induced liquefaction features at more than 250 sites across the New Madrid region. We have found evidence that prehistoric sand blows, like those that formed during the 1811–1812 earthquakes, are probably compound structures resulting from multiple earthquakes closely clustered in time or earthquake sequences. From the spatial distribution and size of sand blows and their sedimentary units, we infer the source zones and estimate the magnitudes of earthquakes within each sequence and thereby characterize the detailed behavior of the fault system. It appears that fault rupture was complex and that the central branch of the seismic zone produced very large earthquakes during the A.D. 900 and A.D. 1450 events as well as in 1811–1812. On the basis of a minimum recurrence rate of 200 years, we are now entering the period during which the next 1811–1812-type event could occur.

  4. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
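
    Regressions of this family typically take the form M = a + b·log10(SRL). A sketch using the widely cited Wells and Coppersmith (1994) all-slip-type coefficients, one regression of this kind and not necessarily among those the WGUEP selected; swapping coefficient sets shifts M by the ~0.3-0.4 units noted above:

```python
import math

def magnitude_from_srl(srl_km, a=5.08, b=1.16):
    """Magnitude from surface-rupture length via M = a + b*log10(SRL).
    Default coefficients are the Wells and Coppersmith (1994)
    all-slip-type values; other published regressions differ."""
    return a + b * math.log10(srl_km)

# A hypothetical 35-km Wasatch-style surface rupture:
print(round(magnitude_from_srl(35.0), 2))
```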

  5. Impact-based earthquake alerts with the U.S. Geological Survey's PAGER system: what's next?

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Garcia, D.; So, E.; Hearne, M.

    2012-01-01

    In September 2010, the USGS began publicly releasing earthquake alerts for significant earthquakes around the globe based on estimates of potential casualties and economic losses with its Prompt Assessment of Global Earthquakes for Response (PAGER) system. These estimates significantly enhanced the utility of the USGS PAGER system which had been, since 2006, providing estimated population exposures to specific shaking intensities. Quantifying earthquake impacts and communicating estimated losses (and their uncertainties) to the public, the media, humanitarian, and response communities required a new protocol—necessitating the development of an Earthquake Impact Scale—described herein and now deployed with the PAGER system. After two years of PAGER-based impact alerting, we now review operations, hazard calculations, loss models, alerting protocols, and our success rate for recent (2010-2011) events. This review prompts analyses of the strengths, limitations, opportunities, and pressures, allowing clearer definition of future research and development priorities for the PAGER system.

  6. Uncertainty of streamwater solute fluxes in five contrasting headwater catchments including model uncertainty and natural variability (Invited)

    NASA Astrophysics Data System (ADS)

    Aulenbach, B. T.; Burns, D. A.; Shanley, J. B.; Yanai, R. D.; Bae, K.; Wild, A.; Yang, Y.; Dong, Y.

    2013-12-01

    There are many sources of uncertainty in estimates of streamwater solute flux. Flux is the product of discharge and concentration (summed over time), each of which has measurement uncertainty of its own. Discharge can be measured almost continuously, but concentrations are usually determined from discrete samples, which increases uncertainty depending on sampling frequency and how concentrations are assigned for the periods between samples. Gaps between samples can be estimated by linear interpolation or by models that use the relations between concentration and continuously measured or known variables such as discharge, season, temperature, and time. For this project, developed in cooperation with QUEST (Quantifying Uncertainty in Ecosystem Studies), we evaluated uncertainty for three flux estimation methods and three different sampling frequencies (monthly, weekly, and weekly plus event). The constituents investigated were dissolved NO3, Si, SO4, and dissolved organic carbon (DOC), solutes whose concentration dynamics exhibit strongly contrasting behavior. The evaluation was completed for a 10-year period at five small, forested watersheds in Georgia, New Hampshire, New York, Puerto Rico, and Vermont. Concentration regression models were developed for each solute at each of the three sampling frequencies for all five watersheds. Fluxes were then calculated using (1) a linear interpolation approach, (2) a regression-model method, and (3) the composite method - which combines the regression-model method for estimating concentrations and the linear interpolation method for correcting model residuals to the observed sample concentrations. We considered the best estimates of flux to be derived using the composite method at the highest sampling frequencies. We also evaluated the importance of sampling frequency and estimation method on flux estimate uncertainty; flux uncertainty was dependent on the variability characteristics of each solute and varied for
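
    The linear-interpolation estimator (method 1 above) is the simplest to sketch: interpolate concentration between discrete samples and integrate discharge × concentration over time. Units, sampling times and values below are hypothetical:

```python
def interpolate_conc(sample_times, sample_conc, t):
    """Linearly interpolate concentration between discrete samples."""
    for (t0, c0), (t1, c1) in zip(zip(sample_times, sample_conc),
                                  zip(sample_times[1:], sample_conc[1:])):
        if t0 <= t <= t1:
            return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sampled period")

def solute_flux(times, discharge, sample_times, sample_conc):
    """Total flux = sum over steps of discharge * interpolated concentration.
    Illustrative units: discharge in L/s, concentration in mg/L, one value
    per daily step, so the total is in mg."""
    seconds_per_step = 86400.0
    return sum(q * interpolate_conc(sample_times, sample_conc, t) * seconds_per_step
               for t, q in zip(times, discharge))

times = list(range(10))                    # 10 daily discharge values
discharge = [5.0] * 10                     # constant 5 L/s (hypothetical)
samples_t, samples_c = [0, 9], [2.0, 4.0]  # sparse NO3 samples, mg/L
print(solute_flux(times, discharge, samples_t, samples_c))
```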

  7. Conditional spectrum computation incorporating multiple causal earthquakes and ground-motion prediction models

    USGS Publications Warehouse

    Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas

    2013-01-01

    The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
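
    For a single causal earthquake and a single GMPM, the CS has a closed form: conditional mean μ + ρ·ε*·σ and conditional standard deviation σ·√(1 − ρ²) at each period. A sketch of that single-scenario form with hypothetical GMPM medians, sigmas, and correlations; the paper's exact calculation additionally sums over causal earthquakes and GMPMs weighted by deaggregation:

```python
import math

def conditional_spectrum(mu_ln, sigma_ln, rho, eps_star):
    """Conditional mean and standard deviation of ln Sa at each period,
    given epsilon at the conditioning period T* (single-scenario,
    single-GMPM approximation)."""
    cond_mean = [m + r * eps_star * s for m, r, s in zip(mu_ln, rho, sigma_ln)]
    cond_sd = [s * math.sqrt(1.0 - r * r) for s, r in zip(sigma_ln, rho)]
    return cond_mean, cond_sd

# Hypothetical values at three periods; rho = 1 at T* itself, so the
# conditional standard deviation there is zero by construction.
mu_ln = [math.log(0.40), math.log(0.25), math.log(0.10)]
sigma_ln = [0.60, 0.65, 0.70]
rho = [0.7, 1.0, 0.6]
mean, sd = conditional_spectrum(mu_ln, sigma_ln, rho, eps_star=1.5)
print(mean)
print(sd)
```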

  8. Very-long-period volcanic earthquakes beneath Mammoth Mountain, California

    USGS Publications Warehouse

    Hill, D.P.; Dawson, P.; Johnston, M.J.S.; Pitt, A.M.; Biasi, G.; Smith, K.

    2002-01-01

    Detection of three very-long-period (VLP) volcanic earthquakes beneath Mammoth Mountain emphasizes that magmatic processes continue to be active beneath this young, eastern California volcano. These VLP earthquakes, which occurred in October 1996 and July and August 2000, appear as bell-shaped pulses with durations of one to two minutes on a nearby borehole dilatometer and on the displacement seismogram from a nearby broadband seismometer. They are accompanied by rapid-fire sequences of high-frequency (HF) earthquakes and several long-period (LP) volcanic earthquakes. The limited VLP data are consistent with a CLVD source at a depth of ~3 km beneath the summit, which we interpret as resulting from a slug of fluid (CO2-saturated magmatic brine or perhaps basaltic magma) moving into a crack.

  9. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth

  10. Tectonic controls on earthquake size distribution and seismicity rate: slab buoyancy and slab bending

    NASA Astrophysics Data System (ADS)

    Nishikawa, T.; Ide, S.

    2014-12-01

    There are clear variations in maximum earthquake magnitude among Earth's subduction zones. These variations have been studied extensively and attributed to differences in tectonic properties of subduction zones, such as relative plate velocity and subducting plate age [Ruff and Kanamori, 1980]. In addition to maximum earthquake magnitude, the seismicity of medium to large earthquakes also differs among subduction zones, for example in the b-value (i.e., the slope of the earthquake size distribution) and the frequency of seismic events. However, the causal relationship between the seismicity of medium to large earthquakes and subduction zone tectonics has been unclear. Here we divide Earth's subduction zones into over 100 study regions following Ide [2013] and estimate b-values and the background seismicity rate—the frequency of seismic events excluding aftershocks—for subduction zones worldwide using the maximum likelihood method [Utsu, 1965; Aki, 1965] and the epidemic type aftershock sequence (ETAS) model [Ogata, 1988]. We demonstrate that the b-value varies as a function of subducting plate age and trench depth, and that the background seismicity rate is related to the degree of slab bending at the trench. Large earthquakes tend to occur relatively frequently (lower b-values) in shallower subduction zones with younger slabs, and more earthquakes occur in subduction zones with deeper trenches and steeper dip angles. These results suggest that slab buoyancy, which depends on subducting plate age, controls the earthquake size distribution, and that intra-slab faults due to slab bending, which increase with the steepness of the slab dip angle, influence the frequency of seismic events, because they produce heterogeneity in plate coupling and efficiently inject fluid to elevate pore fluid pressure on the plate interface. This study reveals tectonic factors that control earthquake size distribution and seismicity rate, and these relationships between seismicity and
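
    The Aki/Utsu maximum-likelihood b-value estimate cited above has a closed form, b = log10(e) / (mean(M) − Mmin), usually applied with a half-bin correction for binned magnitudes. A sketch on a small synthetic catalog (the catalog values are invented for illustration):

```python
import math

def b_value_mle(magnitudes, m_min, bin_width=0.1):
    """Aki (1965) / Utsu (1965) maximum-likelihood b-value with the usual
    half-bin correction for magnitudes binned at `bin_width`."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - bin_width / 2.0))

# Synthetic catalog; for b = 1 the mean magnitude above m_min would sit
# about log10(e) ~ 0.434 units above m_min.
cat = [4.0, 4.1, 4.2, 4.3, 4.5, 4.8, 5.0, 5.6]
print(b_value_mle(cat, m_min=4.0))
```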

  11. Earthquakes

    MedlinePlus

    ... An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  12. Deep low-frequency earthquake activities during the failed magmatic eruptions of Mts. Iwate and Fuji, Japan

    NASA Astrophysics Data System (ADS)

    Nakamichi, H.; Hamaguchi, H.; Ukawa, M.; Tanaka, S.; Ueki, S.; Nishimura, T.

    2008-12-01

    I review deep low-frequency earthquake (DLF) activities during the failed magmatic eruptions of Mts. Iwate and Fuji, Japan. Volcanic unrest was observed at Mt. Iwate in 1998-1999 and at Mt. Fuji in 2000-2001. Several hundred DLFs occurred during the unrest at Mt. Iwate; the number of DLFs in a normal year is 10 or fewer. The DLF activity at Mt. Fuji increased sharply during the period from September 2000 to May 2001. The frequency of DLFs at Mt. Fuji during the DLF swarm was 20 times higher than that during normal activity. The DLFs of Mts. Iwate and Fuji show non-double-couple source mechanisms and suggest fluid motion at the focal regions. The DLF hypocenters of Mt. Fuji defined an ellipsoid with a diameter of 5 km; their focal depths are 11-16 km. The ellipsoid was centered 3 km northeast of the summit, and its major axis was directed northwest-southeast. The center of the ellipsoid gradually migrated upward and 2-3 km to the northwest during 1998-2001. The migration of the DLFs reflects volcanic fluid migration associated with a northwest-southeast-oriented dike beneath Mt. Fuji. The DLFs of Mt. Iwate were located at intermediate depths (5-12 km) beneath the summit and at greater depths (31-37 km) in regions 10 km south and 10 km northeast of the summit. In April 1998, the frequency of DLFs increased five days before an increase in the occurrence of shallow volcanic earthquakes at Mt. Iwate. Hypocenter migration of the DLFs at intermediate depths was observed from April 1998 to September 1998. New fumarole activity in the western region of Mt. Iwate commenced in 1999. These observations indicate that DLFs at Mts. Fuji and Iwate have common features in their activities and source mechanisms. However, the shallow volcanic activity at the two volcanoes differed markedly: strong shallow seismic activity, volcano inflation, and the formation of a new fumarolic area were observed at Mt. Iwate, while such

  13. Identification of site frequencies from building records

    USGS Publications Warehouse

    Celebi, M.

    2003-01-01

    A simple procedure to identify site frequencies using earthquake response records from roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data, such as free-field records in close proximity to the buildings and site characterization data, are also used to estimate site frequencies and thereby provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates when compared with other methods. Therefore, the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate the site frequencies of sites with similar geological structures.
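    As a rough illustration of extracting a dominant frequency from a record by spectral analysis (a crude stand-in for the procedures described above; the signal and its 1.5 Hz "site frequency" are entirely hypothetical):

```python
import numpy as np

def peak_frequency(record, dt):
    """Dominant frequency of a record, picked as the peak of its
    amplitude spectrum after removing the mean."""
    spec = np.abs(np.fft.rfft(record - np.mean(record)))
    freqs = np.fft.rfftfreq(len(record), dt)
    return freqs[np.argmax(spec)]

# Toy record dominated by a 1.5 Hz component plus a weaker 5 Hz component.
dt, n = 0.01, 4000
t = np.arange(n) * dt
rec = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)
print(peak_frequency(rec, dt))  # 1.5
```

In practice one would compare roof and basement spectra (or their ratio) rather than a single record, but the peak-picking step is the same.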

  14. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.

  15. Practical Applications for Earthquake Scenarios Using ShakeMap

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.

    2001-12-01

    In planning and coordinating emergency response, utilities, local government, and other organizations are best served by conducting training exercises based on realistic earthquake situations: the ones they are most likely to face. Scenario earthquakes can fill this role; they can be generated for any geologically plausible earthquake or for actual historic earthquakes. ShakeMap Web pages now display selected earthquake scenarios (www.trinet.org/shake/archive/scenario/html) and more events will be added as they are requested and produced. We will discuss the methodology and provide practical examples where these scenarios are used directly for risk reduction. Given a selected event, we have developed tools to make it relatively easy to generate a ShakeMap earthquake scenario using the following steps: 1) Assume a particular fault or fault segment will (or did) rupture over a certain length, 2) Determine the magnitude of the earthquake based on assumed rupture dimensions, 3) Estimate the ground shaking at all locations in the chosen area around the fault, and 4) Represent these motions visually by producing ShakeMaps and generating ground motion input for loss estimation modeling (e.g., FEMA's HAZUS). At present, peak ground motions on rock conditions are estimated using empirical attenuation relationships. We then correct the amplitude at each location based on the local site soil (NEHRP) conditions, as in the general ShakeMap interpolation scheme. Finiteness is included explicitly, but directivity enters only through the empirical relations. Although current ShakeMap earthquake scenarios are empirically based, substantial improvements in numerical ground motion modeling have been made in recent years. However, loss estimation tools, HAZUS for example, typically require relatively high frequency (3 Hz) input for predicting losses, above the range of frequencies successfully modeled to date. Achieving full-synthetic ground motion

  16. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
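    The Gutenberg-Richter clustering probability discussed above follows the aftershock-rate model of Reasenberg and Jones (1989): the rate of events at or above a target magnitude decays as a power law in time after the initial event. The sketch below uses the generic California parameter values from that paper; treat it as an illustrative approximation, not the paper's exact computation.

```python
import math

def rj_probability(m_fore, m_main, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones (1989) probability of at least one M >= m_main event
    in the window [t1, t2] days after an M = m_fore event: integrate the
    Omori-type rate 10**(a + b*(m_fore - m_main)) * (t + c)**(-p), then
    apply the Poisson probability of one or more events."""
    rate = 10 ** (a + b * (m_fore - m_main))
    n = rate * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return 1.0 - math.exp(-n)

# Three-day chance of an M >= 7 following the M 4.8 event:
print(round(rj_probability(4.8, 7.0, 0.0, 3.0), 4))  # ~0.0009
```

With these generic parameters the result reproduces the 0.0009 Gutenberg-Richter figure quoted above; the characteristic-earthquake alternatives in the abstract require the additional long-term models it describes.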

  17. Performance test of an automated moment tensor determination system for the future "Tokai" earthquake

    NASA Astrophysics Data System (ADS)

    Fukuyama, E.; Dreger, D. S.

    2000-06-01

    We have investigated how the automated moment tensor determination (AMTD) system using the FREESIA/KIBAN broadband network is likely to behave during a future large earthquake. Because we do not have enough experience with a large (M >8) nearby earthquake, we computed synthetic waveforms for such an event by assuming the geometrical configuration of the anticipated Tokai earthquake and several fault rupture scenarios. Using this synthetic data set, we examined the behavior of the AMTD system to learn how to prepare for such an event. For our synthetic Tokai event data we assume its focal mechanism, fault dimension, and scalar seismic moment. We also assume circular rupture propagation with constant rupture velocity and dislocation rise time. Both uniform and heterogeneous slip models are tested. The results show that performance depends on both the hypocentral location (i.e., unilateral vs. bilateral rupture) and the degree of slip heterogeneity. In the tests that we have performed, rupture directivity appears to be more important than slip heterogeneity. We find that for such large earthquakes it is necessary to use stations at distances greater than 600 km and frequencies between 0.005 and 0.02 Hz to maintain a point-source assumption and to recover the full scalar seismic moment and radiation pattern. In order to confirm the result of the synthetic test, we have analyzed the 1993 Hokkaido Nansei-oki (MJ7.8) and the 1995 Kobe (MJ7.2) earthquakes using observed broadband waveforms. For the Kobe earthquake we successfully recovered the moment tensor using the routine frequency band (0.01-0.05 Hz displacements). However, we failed to estimate a correct solution for the Hokkaido Nansei-oki earthquake using the same routine frequency band. In this case, we had to use frequencies between 0.005 and 0.02 Hz to recover the moment tensor, confirming the validity of the synthetic test result for the Tokai earthquake.

  18. Spatial and Temporal Evolution of Earthquake Dynamics: Case Study of the Mw 8.3 Illapel Earthquake, Chile

    NASA Astrophysics Data System (ADS)

    Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian

    2018-01-01

    We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.

  19. Daytime dependence of disturbances of ionospheric Es-layers connected to earthquakes

    NASA Astrophysics Data System (ADS)

    Liperovskaya, E. V.; Liperovsky, A. V.; Meister, C.-V.; Silina, A. S.

    2012-04-01

    In the present work, variations of the semi-transparency of the sporadic E-layer of the ionosphere due to seismic activity are studied. The semi-transparency Q is determined by the blanketing frequency fbEs and the characteristic frequency foEs, Q = (foEs - fbEs)/fbEs. At low values of the blanketing frequency fbEs, the critical frequency foEs does not describe the maximum ionisation density of the Es-layer, as the critical frequencies of regular ionospheric layers (e.g. foF2) do, but rather the occurrence of small-scale (tenths of meters) inhomogeneities of the ionisation density along the vertical in the layer. The maximum ionisation density of the sporadic layer is proportional to the square of fbEs. In the case of vertical ionospheric sounding, the sporadic layer becomes transparent for signals with frequencies larger than fbEs. Investigations showed that about three days before an earthquake an increase of the semi-transparency interval is observed during sunset and sunrise. In the present work, analogous results are found for data of the vertical sounding stations "Tokyo" and "Petropavlovsk-Kamchatsky". Using the method of superposed epochs, more than 50 earthquakes with magnitudes M > 5, depths h < 40 km, and distances between the station and the epicenter R < 300 km are considered for the vertical sounding station "Tokyo". More than 20 earthquakes with such parameters were analysed for the station "Petropavlovsk-Kamchatsky". Days with strong geomagnetic activity were excluded from the analysis. According to the station "Petropavlovsk-Kamchatsky", about 1-3 days before earthquakes an increase of Es-spread is observed a few hours before midnight. This increase is a sign of large-scale inhomogeneities in the sporadic layers.
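    The semi-transparency formula quoted above is a one-liner; the sounding values in the example below are hypothetical, chosen only to show the arithmetic.

```python
def semi_transparency(foEs, fbEs):
    """Semi-transparency of the sporadic E-layer: Q = (foEs - fbEs) / fbEs."""
    return (foEs - fbEs) / fbEs

# Hypothetical vertical-sounding values in MHz:
print(semi_transparency(5.0, 4.0))  # 0.25
```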

  20. Reducing uncertainty with flood frequency analysis: The contribution of paleoflood and historical flood information

    NASA Astrophysics Data System (ADS)

    Lam, Daryl; Thompson, Chris; Croke, Jacky; Sharma, Ashneel; Macklin, Mark

    2017-03-01

    Using a combination of stream gauge, historical, and paleoflood records to extend extreme flood records has proven useful in improving flood frequency analysis (FFA). The approach has typically been applied in localities with long historical records and/or river settings suitable for paleoflood reconstruction from slack-water deposits (SWDs). However, many regions around the world have neither extensive historical information nor bedrock gorges suitable for SWD preservation and paleoflood reconstruction. This study from subtropical Australia demonstrates that confined, semialluvial channels such as macrochannels provide relatively stable boundaries over 1000-2000 year timescales, and the preserved SWDs enabled paleoflood reconstruction and incorporation into FFA. FFA for three sites in subtropical Australia, integrating historical and paleoflood data using Bayesian inference methods, showed a significant reduction in the uncertainty associated with the estimated discharge of a flood quantile. Uncertainty associated with the estimated discharge for the 1% Annual Exceedance Probability (AEP) flood is reduced by more than 50%. In addition, sensitivity analysis of possible within-channel boundary changes shows that FFA is not significantly affected by any associated changes in channel capacity. Therefore, a greater range of channel types may be used for reliable paleoflood reconstruction by evaluating the stability of inset alluvial units, thereby increasing the quantity of temporal data available for FFA. The reduction in uncertainty, particularly in the prediction of the ≤1% AEP design flood, will improve flood risk planning and management in regions with limited temporal flood data.

  1. Increased Earthquake Rates in the Central and Eastern US Portend Higher Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Rubinstein, J. L.; Ellsworth, W. L.; Mueller, C. S.; Michael, A. J.; McGarr, A.; Petersen, M. D.; Weingarten, M.; Holland, A. A.

    2014-12-01

    Since 2009 the central and eastern United States has experienced an unprecedented increase in the rate of M≥3 earthquakes that is unlikely to be due to natural variation. Where the rates have increased, so has the seismic hazard, making it important to understand these changes. Areas with significant seismicity increases are limited to areas where oil and gas production takes place. By far the largest contributor to the seismicity increase is Oklahoma, where recent studies suggest that these rate changes may be due to fluid injection (e.g., Keranen et al., Geology, 2013; Science, 2014). Moreover, the area of increased seismicity in northern Oklahoma that began in 2013 coincides with the Mississippi Lime play, where well completions greatly increased the year before the seismicity increase. This suggests a link to oil and gas production, either directly or through the disposal of significant amounts of produced water within the play. For the purpose of assessing the hazard due to these earthquakes, should they be treated differently from natural earthquakes? Previous studies suggest that induced seismicity may differ from natural seismicity in clustering characteristics or frequency-magnitude distributions (e.g., Bachmann et al., GJI, 2011; Llenos and Michael, BSSA, 2013). These differences could affect time-independent hazard computations, which typically assume that clustering and size distribution remain constant. In Oklahoma, as well as other areas of suspected induced seismicity, we find that earthquakes since 2009 tend to be considerably more clustered in space and time than before 2009. However, differences between various regional and national catalogs leave unclear whether there are significant changes in magnitude distribution. Whether they are due to natural or industrial causes, the increased earthquake rates in these areas could increase the hazard in ways that are not accounted for in current hazard assessment practice. Clearly the possibility of induced

  2. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses for insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Earthquake Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: basic earthquake data and its sources; details of the survey location, intensity, and other ground motion observations or assignments at that location; building and damage level classification, and tabulated damage survey results; and photos showing typical examples of damage. In future planned extensions of the database information on human

  3. Study on safety level of RC beam bridges under earthquake

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia

    2017-08-01

    This study considers uncertainties in material strength and in modeling, which have important effects on structural resistance, based on reliability theory. After analyzing the failure mechanism of an RC bridge, structural functions and the corresponding reliability were formulated; the safety level under earthquake loading was then analyzed for the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters. Using the response surface method to calculate the failure probabilities of bridge piers under a high-level earthquake, their seismic reliability for different damage states within the design reference period was calculated applying two-stage design, which describes, to some extent, the seismic safety level of the built bridges.

  4. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitude ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.

  5. GPS detection of ionospheric perturbations following the January 17, 1994, northridge earthquake

    NASA Technical Reports Server (NTRS)

    Calais, Eric; Minster, J. Bernard

    1995-01-01

    Sources such as atmospheric or buried explosions and shallow earthquakes producing strong vertical ground displacements generate pressure waves that propagate at infrasonic speeds in the atmosphere. At ionospheric altitudes, low-frequency acoustic waves are coupled to ionospheric gravity waves and induce variations in the ionospheric electron density. Global Positioning System (GPS) data recorded in Southern California were used to compute ionospheric electron content time series for several days preceding and following the January 17, 1994, Mw = 6.7 Northridge earthquake. An anomalous signal beginning several minutes after the earthquake, with time delays that increase with distance from the epicenter, was observed. The signal frequency and phase velocity are consistent with results from numerical models of atmospheric-ionospheric acoustic-gravity waves excited by seismic sources, as well as with previous electromagnetic sounding results. It is believed that these perturbations are caused by the ionospheric response to the strong ground displacement associated with the Northridge earthquake.

  6. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models, rather than assuming a random process, to forecast earthquakes, at least for the southern Hayward fault.
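    The recurrence statistics quoted above (mean RI, coefficient of variation, 30-yr probability) can be computed from a list of inter-event intervals. The intervals below are invented so that the mean comes out at 161 yr; note that the abstract's 29% comes from a time-dependent renewal model, while this sketch shows only the time-independent (Poisson) baseline, which is lower.

```python
import math
import statistics

def recurrence_stats(intervals, horizon=30.0):
    """Mean recurrence interval, coefficient of variation, and the
    time-independent (Poisson) probability of >= 1 event within `horizon` yr."""
    mean_ri = statistics.mean(intervals)
    cov = statistics.stdev(intervals) / mean_ri
    p_poisson = 1.0 - math.exp(-horizon / mean_ri)
    return mean_ri, cov, p_poisson

# Hypothetical interval list (yr) with a 161-yr mean, for illustration only:
intervals = [95, 120, 140, 150, 160, 170, 180, 190, 200, 205]
mean_ri, cov, p30 = recurrence_stats(intervals)
print(mean_ri, round(cov, 2), round(p30, 2))
```

A renewal model (e.g. Brownian passage time) conditioned on the time since the last event can give a higher conditional probability than this Poisson baseline when the fault is late in its cycle, which is the distinction the abstract draws.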

  7. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  8. Earthquakes triggered by silent slip events on Kīlauea volcano, Hawaii

    USGS Publications Warehouse

    Segall, Paul; Desmarais, Emily K.; Shelly, David; Miklius, Asta; Cervelli, Peter F.

    2006-01-01

    Slow-slip events, or ‘silent earthquakes’, have recently been discovered in a number of subduction zones including the Nankai trough in Japan, Cascadia, and Guerrero in Mexico, but the depths of these events have been difficult to determine from surface deformation measurements. Although it is assumed that these silent earthquakes are located along the plate megathrust, this has not been proved. Slow slip in some subduction zones is associated with non-volcanic tremor, but tremor is difficult to locate and may be distributed over a broad depth range. Except for some events on the San Andreas fault, slow-slip events have not yet been associated with high-frequency earthquakes, which are easily located. Here we report on swarms of high-frequency earthquakes that accompany otherwise silent slips on Kīlauea volcano, Hawaii. For the most energetic event, in January 2005, the slow slip began before the increase in seismicity. The temporal evolution of earthquakes is well explained by increased stressing caused by slow slip, implying that the earthquakes are triggered. The earthquakes, located at depths of 7–8 km, constrain the slow slip to be at comparable depths, because they must fall in zones of positive Coulomb stress change. Triggered earthquakes accompanying slow-slip events elsewhere might go undetected if background seismicity rates are low. Detection of such events would help constrain the depth of slow slip, and could lead to a method for quantifying the increased hazard during slow-slip events, because triggered events have the potential to grow into destructive earthquakes.

  9. Earthquake Ground Motion Simulations in the Central United States

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Boyd, O. S.; Hartzell, S.; Williams, R. A.

    2010-12-01

    The Central United States (CUS) includes two of the major seismic zones east of the Rockies: the New Madrid and Wabash Valley Seismic Zones. The winter 1811-1812 New Madrid Seismic Zone (NMSZ) events were the largest intraplate sequence ever recorded in the United States. Together with their aftershocks, these earthquakes produced large areas of liquefaction, new lakes, and landslides in the region. Seismicity in the early 1800s was dominated by the NMSZ activity, although three low-magnitude-5 earthquakes have occurred in the last 40 years in the Wabash Valley Seismic Zone (WVSZ). The population and infrastructure of the CUS have drastically changed from those of the early nineteenth century, and a large earthquake would now cause significant casualties and economic losses within the country’s heartland. In this study we present three sets of numerical simulations depicting earthquakes in the region. These hypothetical ruptures are located on the Reelfoot fault and the southern axial arm of the NMSZ and in the WVSZ. Our broadband synthetic ground motions are calculated following the Liu et al. (2006) hybrid method. Using a finite element solver we calculate low-frequency ground motion (< 1 Hz), which accounts for the heterogeneity and low-velocity soils of the region by using a recently developed seismic velocity model (CUSVM1) and a minimum shear wave velocity of 300 m/s. The broadband ground motions are then generated by combining high-frequency synthetics computed in a 1D velocity model with the low-frequency motions at a crossover frequency of 1 Hz. We primarily discuss the basin effects produced by the Mississippi embayment and investigate the effects of hypocentral location and slip distribution on ground motions in densely populated areas within the CUS.
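    The hybrid broadband step described above (merging deterministic low-frequency synthetics with stochastic high-frequency synthetics at a 1 Hz crossover) can be sketched with complementary brick-wall filters in the frequency domain. Actual hybrid schemes such as Liu et al. (2006) use matched filter pairs with smooth transitions, so this is a deliberate simplification on toy signals.

```python
import numpy as np

def hybrid_broadband(low, high, dt, f_cross=1.0):
    """Keep spectral content of `low` at or below f_cross (Hz) and of
    `high` above it, then invert back to the time domain."""
    n = len(low)
    freqs = np.fft.rfftfreq(n, dt)
    lo_spec = np.fft.rfft(low)
    hi_spec = np.fft.rfft(high)
    combined = np.where(freqs <= f_cross, lo_spec, hi_spec)
    return np.fft.irfft(combined, n)

# Toy example: a 0.5 Hz signal survives from `low`, a 3 Hz signal from `high`.
dt, n = 0.01, 2000
t = np.arange(n) * dt
low = np.sin(2 * np.pi * 0.5 * t)
high = np.sin(2 * np.pi * 3.0 * t)
bb = hybrid_broadband(low, high, dt)
```

Because each toy signal sits entirely on one side of the crossover, the result is simply their sum; with real synthetics the transition band and phase matching require the more careful filter design of the published method.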

  10. Qualitative analysis of ionospheric disorders in Solok earthquake (March 6, 2007) viewed from anomalous critical frequency of layer F (f0F2) and genesis spread F

    NASA Astrophysics Data System (ADS)

    Pujiastuti, D.; Daniati, S.; Taufiqurrahman, E.; Mustafa, B.; Ednofri

    2018-03-01

    A qualitative analysis has been conducted by comparing critical frequency anomalies of layer F (f0F2) and Spread F events to look for a correlation with seismic activity before the Solok earthquake (March 6, 2007) in West Sumatra. The ionospheric data used were recorded with the FMCW ionosonde at LAPAN SPD Kototabang, Palupuah, West Sumatra. Ionogram scaling was performed first to obtain the daily value of f0F2. The value of f0F2 was then compared with its monthly median to identify daily variations. Anomalies of f0F2 and Spread F events were observed from February 20, 2007 to March 6, 2007. The f0F2 anomalies appeared as negative deviations, and together with the presence of Spread F before the earthquake they are proposed as Solok earthquake precursors, as they occurred while geomagnetic and solar activities were normal.
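    Comparing daily f0F2 against its monthly median, as described above, amounts to flagging large fractional deviations; the values and the 15% threshold below are hypothetical, chosen only to illustrate the bookkeeping.

```python
import statistics

def f0f2_anomalies(daily, threshold=0.15):
    """Return (day, fractional deviation) pairs whose f0F2 departs from the
    median of the series by more than `threshold`; negative deviations are
    the ones of interest in the study above."""
    med = statistics.median(daily)
    return [(day, (v - med) / med)
            for day, v in enumerate(daily, 1)
            if abs(v - med) / med > threshold]

# Hypothetical daily f0F2 values (MHz); day 5 shows a negative anomaly.
daily = [8.0, 8.1, 7.9, 8.2, 6.5, 8.0, 8.1]
print(f0f2_anomalies(daily))  # [(5, -0.1875)]
```

A real analysis would use the monthly median (not the median of a short window) and would also screen out geomagnetically disturbed days, as the abstract notes.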

  11. Earthquake Loss Scenarios: Warnings about the Extent of Disasters

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Tolis, S.; Rosset, P.

    2016-12-01

    It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates, however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus on the range of the potential disasters and persuade officials and residents of the reality of the earthquake threat. Modeling a scenario and estimating earthquake losses requires sufficiently accurate data sets on the number of people present, the built environment, and, if possible, the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between -464 and 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with roughly four times as many injured. Defining the area of influence of these earthquakes as that with shaking intensities greater than or equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece using the M6 1999 Athens earthquake and by matching the isoseismal information for six earthquakes that occurred in Greece during the last 140 years. Comparing fatality numbers that would theoretically occur today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rate in Greek

  12. Tremor, remote triggering and earthquake cycle

    NASA Astrophysics Data System (ADS)

    Peng, Z.

    2012-12-01

    Deep tectonic tremor and episodic slow-slip events have been observed at major plate-boundary faults around the Pacific Rim. These events have much longer source durations than regular earthquakes, and are generally located near or below the seismogenic zone where regular earthquakes occur. Tremor and slow-slip events appear to be extremely stress sensitive, and could be instantaneously triggered by distant earthquakes and solid-earth tides. However, many important questions remain open. For example, it is still not clear what the necessary conditions for tremor generation are, or how remote triggering could affect the large-earthquake cycle. Here I report a global search for tremor triggered by recent large teleseismic earthquakes. We mainly focus on major subduction zones around the Pacific Rim: the southwest and northeast Japan subduction zones, the Hikurangi subduction zone in New Zealand, the Cascadia subduction zone, and the major subduction zones in Central and South America. In addition, we examine major strike-slip faults around the Caribbean plate, the Queen Charlotte fault along the northern Pacific Northwest coast, and the San Andreas fault system in California. In each place, we first identify triggered tremor as a high-frequency, non-impulsive signal that is in phase with the large-amplitude teleseismic waves. We also calculate the dynamic stress and check the triggering relationship with the Love and Rayleigh waves. Finally, we calculate the triggering potential from the local fault orientation and surface-wave incidence angles. Our results suggest that tremor exists at many plate-boundary faults in different tectonic environments, and can be triggered by dynamic stresses as low as a few kPa. In addition, we summarize recent observations of slow-slip events and earthquake swarms triggered by large distant earthquakes. Finally, we propose several mechanisms that could explain the apparent clustering of large earthquakes around the world.
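
The identification step above, tremor as high-frequency energy in phase with teleseismic surface waves, can be sketched as an envelope-correlation statistic. This is a hypothetical toy with synthetic signals, not the study's actual workflow; the 2 Hz corner and the correlation form are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def tremor_phase_correlation(local, surface_wave, dt, f_hp=2.0):
    """Correlate the high-frequency envelope of a local record with a
    long-period teleseismic surface wave (toy screening statistic)."""
    nyq = 0.5 / dt
    b, a = butter(4, f_hp / nyq, btype="high")
    env = np.abs(hilbert(filtfilt(b, a, local)))  # HF envelope
    return np.corrcoef(env, surface_wave)[0, 1]

dt = 0.05
t = np.arange(0.0, 200.0, dt)
sw = np.sin(2 * np.pi * t / 20.0)                  # 20 s surface wave
bursts = (sw > 0.8) * np.sin(2 * np.pi * 5.0 * t)  # tremor at wave peaks
r = tremor_phase_correlation(bursts, sw, dt)       # strongly positive here
```

A high correlation between the high-frequency envelope and the surface-wave swings is the signature of dynamically triggered tremor that the abstract describes.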

  13. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  14. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions on long and short time scales. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings.
The approach described is different from the usual

  15. Intensity, magnitude, location and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, Walter; Hough, Susan; Martin, Stacey; Bilham, Roger

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly
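
The Bakun and Wentworth (1997) procedure used above can be caricatured as a grid search: each trial epicenter implies a best-fitting magnitude through an intensity-attenuation relation, and the trial point minimizing the intensity misfit is the intensity-derived location. The attenuation coefficients below are invented for illustration and are not the Indian relations derived in the paper.

```python
import numpy as np

def toy_mmi(mag, d_km):
    """Invented intensity-attenuation relation (illustration only)."""
    return 1.5 * mag - 3.0 * np.log10(d_km + 10.0) - 0.002 * d_km

def locate(obs_xy, obs_mmi, grid):
    """Grid search over trial epicenters; returns (misfit, x, y, M)."""
    best = None
    for gx, gy in grid:
        d = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy)
        # least-squares magnitude at this trial point
        m = np.mean((obs_mmi + 3.0 * np.log10(d + 10.0) + 0.002 * d) / 1.5)
        rms = np.sqrt(np.mean((obs_mmi - toy_mmi(m, d)) ** 2))
        if best is None or rms < best[0]:
            best = (rms, gx, gy, m)
    return best

# Noise-free synthetic: an M 6.0 event at the origin, four felt reports.
sites = np.array([[30.0, 0.0], [0.0, 50.0], [-40.0, 10.0], [20.0, -30.0]])
mmi_obs = toy_mmi(6.0, np.hypot(sites[:, 0], sites[:, 1]))
grid = [(x, y) for x in range(-60, 61, 10) for y in range(-60, 61, 10)]
rms, x, y, m = locate(sites, mmi_obs, grid)
```

With real, sparse, and azimuthally limited intensity data, the misfit surface is flat in some directions, which is why the abstract warns that formal error bounds can understate the true uncertainties.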

  16. Intensity, magnitude, location, and attenuation in India for felt earthquakes since 1762

    USGS Publications Warehouse

    Szeliga, W.; Hough, S.; Martin, S.; Bilham, R.

    2010-01-01

    A comprehensive, consistently interpreted new catalog of felt intensities for India (Martin and Szeliga, 2010, this issue) includes intensities for 570 earthquakes; instrumental magnitudes and locations are available for 100 of these events. We use the intensity values for 29 of the instrumentally recorded events to develop new intensity versus attenuation relations for the Indian subcontinent and the Himalayan region. We then use these relations to determine the locations and magnitudes of 234 historical events, using the method of Bakun and Wentworth (1997). For the remaining 336 events, intensity distributions are too sparse to determine magnitude or location. We evaluate magnitude and location accuracy of newly located events by comparing the instrumental- with the intensity-derived location for 29 calibration events, for which more than 15 intensity observations are available. With few exceptions, most intensity-derived locations lie within a fault length of the instrumentally determined location. For events in which the azimuthal distribution of intensities is limited, we conclude that the formal error bounds from the regression of Bakun and Wentworth (1997) do not reflect the true uncertainties. We also find that the regression underestimates the uncertainties of the location and magnitude of the 1819 Allah Bund earthquake, for which a location has been inferred from mapped surface deformation. Comparing our inferred attenuation relations to those developed for other regions, we find that attenuation for Himalayan events is comparable to intensity attenuation in California (Bakun and Wentworth, 1997), while intensity attenuation for cratonic events is higher than intensity attenuation reported for central/eastern North America (Bakun et al., 2003). Further, we present evidence that intensities of intraplate earthquakes have a nonlinear dependence on magnitude such that attenuation relations based largely on small-to-moderate earthquakes may significantly

  17. On the dynamical behaviour of low-frequency earthquake swarms prior to a dome collapse of Soufrière Hill volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Neuberg, J. W.

    2009-03-01

    A series of low-frequency earthquake swarms prior to a dome collapse on Soufrière Hills volcano, Montserrat, is investigated with emphasis on event-rate and amplitude behaviour. In a single swarm, the amplitudes of consecutive events tend to increase with time, while the rate of event occurrence accelerates initially and then decelerates toward the end of the swarm. However, when consecutive swarms are considered, the average event rates seem to follow the material failure law, and the time of the dome collapse can be successfully estimated using the inverse event rate. These patterns in amplitude and event rate are interpreted as fluctuations in magma ascent velocity, which result in both the generation of low-frequency events and the cyclic ground deformation accompanying the swarm activity.

  18. Measuring earthquakes from optical satellite images.

    PubMed

    Van Puymbroeck, N; Michel, R; Binet, R; Avouac, J P; Taboury, J

    2000-07-10

    Système pour l'Observation de la Terre (SPOT) images are used to map ground displacements induced by earthquakes. Deformations (offsets) induced by the stereoscopic effect, by roll, pitch, and yaw of the satellite, and by detector artifacts are estimated and compensated. Images are then resampled in a cartographic projection with a low-bias interpolator. A subpixel correlator in the Fourier domain provides two-dimensional offset maps with independent measurements approximately every 160 m. Biases on offsets are compensated from calibration. High-frequency noise (0.125 m⁻¹) is approximately 0.01 pixels. Low-frequency noise (below 0.001 m⁻¹) exceeds 0.2 pixels and is partially compensated from modeling. Applied to the Landers earthquake, the measurements delineate the fault with an accuracy of a few tens of meters and yield displacement on the fault with an accuracy better than 20 cm. Comparison with a model derived from geodetic data shows that the offsets bring new insights into the faulting process.
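
The Fourier-domain correlator described above builds on phase correlation. A minimal integer-pixel sketch (the core idea only, without the subpixel refinement, bias calibration, or resampling steps of the actual processing chain) might look like:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer-pixel (row, col) offset of image b relative to image a."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                  # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past N/2 correspond to negative offsets (circular FFT)
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))   # known circular shift
offset = phase_correlation_shift(img, shifted)
```

Normalizing the cross-power spectrum to unit magnitude whitens the images, so the inverse FFT collapses to a sharp peak at the relative offset; subpixel precision is then obtained by interpolating around that peak.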

  19. Premonitory slip and tidal triggering of earthquakes

    USGS Publications Warehouse

    Lockner, D.A.; Beeler, N.M.

    1999-01-01

    Earth tides. Triggered seismicity has been reported resulting from the passage of surface waves excited by the Landers earthquake. These transient waves had measured amplitudes in excess of 0.1 MPa at frequencies of 0.05 to 0.2 Hz in regions of notable seismicity increase. Similar stress oscillations in our laboratory experiments produced strongly correlated stick-slip events. We suggest that seemingly inconsistent natural observations of triggered seismicity and absence of tidal triggering indicate that failure is amplitude and frequency dependent. This is the expected result if, as in our laboratory experiments, the rheology of the Earth's crust permits delayed failure.

  20. Plenty of Deep Long-Period Earthquakes Beneath Cascade Volcanoes

    NASA Astrophysics Data System (ADS)

    Nichols, M. L.; Malone, S. D.; Moran, S. C.; Thelen, W. A.; Vidale, J. E.

    2009-12-01

    The Pacific Northwest Seismic Network (PNSN) records and locates earthquakes within Washington and Oregon, including those occurring at 10 Cascade volcanic centers. In an earlier study (Malone and Moran, EOS 1997), a total of 11 deep long-period (DLP) earthquakes were reported beneath 3 Washington volcanoes. They are characterized by emergent P- and S- arrivals, long and ringing codas, and contain most of their energy below 5 Hz. DLP earthquakes are significant because they have been observed to occur prior to or in association with eruptions at several volcanoes, and as a result are inferred to represent movement of deep-seated magma and associated fluids in the mid-to-lower crust. To more thoroughly characterize DLP occurrence in Washington and Oregon, we employed a two-step algorithm to systematically search the PNSN’s earthquake catalogue for DLP events occurring between 1980 and 2008. In the first step we applied a spectral ratio test to the demeaned and tapered triggered event waveforms to distinguish long-period events from the more common higher frequency volcano-tectonic and regional tectonic earthquakes. In the second step we visually analyzed waveforms of the flagged long-period events to distinguish DLP earthquakes from long-period rockfalls, explosions, shallow low-frequency events, and glacier quakes. We identified 56 DLP earthquakes beneath 7 Cascade volcanic centers. Of these, 31 occurred at Mount Baker, where the background flux of magmatic gases is greater than at the other volcanoes in our study. The other 6 volcanoes with DLPs (counts in parentheses) are Glacier Peak (5), Mount Rainier (9), Mount St. Helens (9), Mount Hood (1), Three Sisters (1), and Crater Lake (1). No DLP events were identified beneath Mount Adams, Mount Jefferson, or Newberry Volcano. The events are 10-40 km deep and have an average magnitude of around 1.5 (Mc), with both the largest and deepest DLPs occurring beneath Mount Baker. Cascade DLP earthquakes occur mostly as
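
The first-pass spectral-ratio screen described above (flagging events whose energy is concentrated below 5 Hz) can be sketched as follows. The ratio threshold and toy waveforms are assumptions, not the study's calibrated values.

```python
import numpy as np

def is_long_period(waveform, dt, f_split=5.0, ratio_min=3.0):
    """Spectral-ratio screen: True when energy below f_split Hz
    dominates the demeaned, tapered spectrum (threshold illustrative)."""
    tapered = (waveform - waveform.mean()) * np.hanning(len(waveform))
    spec = np.abs(np.fft.rfft(tapered)) ** 2
    freqs = np.fft.rfftfreq(len(waveform), dt)
    low = spec[freqs < f_split].sum()
    high = spec[freqs >= f_split].sum()
    return bool(low / (high + 1e-20) > ratio_min)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
lp_event = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t / 3.0)   # 2 Hz, ringing
vt_event = np.sin(2 * np.pi * 12.0 * t) * np.exp(-t / 1.0)  # 12 Hz VT-like
```

The flagged candidates would still require the second, visual step described in the abstract to separate true DLP earthquakes from rockfalls, explosions, and glacier quakes.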

  1. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D(c) apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  2. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  3. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  4. Sensitivity to Regional Earthquake Triggering and Magnitude-Frequency Characteristics of Microseismicity Detected via Matched-Filter Analysis, Central Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Boese, C. M.; Townend, J.; Chamberlain, C. J.; Warren-Smith, E.

    2016-12-01

    Microseismicity recorded since 2008 by the Southern Alps Microseismicity Borehole Array (SAMBA) and other predominantly short-period seismic networks deployed in the central Southern Alps, New Zealand, reveals distinctive patterns of triggering in response to regional seismicity (magnitudes larger than 5, epicentral distances of 100-500 km). Using matched-filter detection methods implemented in the EQcorrscan package (Chamberlain et al., in prep.), we analyze microseismicity occurring in several geographically distinct swarms in order to examine the responses of specific microearthquake sources to earthquakes of different sizes occurring at different distances and azimuths. The swarms exhibit complex responses to regional seismicity which reveal that microearthquake triggering in these cases involves a combination of extrinsic factors (related to the dynamic stresses produced by the regional earthquake) and intrinsic factors (controlled by the local state of stress and possibly by hydrogeological processes). We also find that the microearthquakes detected by individual templates have Gutenberg-Richter magnitude-frequency characteristics. Since the detected events, by design, have very similar hypocentres and focal mechanisms, the observed scaling pertains to a restricted set of fault planes.
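
The core of matched-filter detection as used above is a normalized cross-correlation of a template waveform against continuous data. This sketch is a naive illustration with synthetic data, not EQcorrscan's multi-channel implementation; the 0.8 threshold is an assumption.

```python
import numpy as np

def matched_filter_detect(template, stream, threshold=0.8):
    """Slide a normalized cross-correlation of the template along the
    stream; return indices whose correlation exceeds the threshold."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(stream) - n + 1)
    for i in range(len(cc)):
        win = stream[i:i + n]
        cc[i] = np.sum(tpl * (win - win.mean())) / (win.std() + 1e-20)
    return np.where(cc > threshold)[0]

rng = np.random.default_rng(1)
template = rng.standard_normal(100)            # a "known" event waveform
stream = 0.1 * rng.standard_normal(2000)       # continuous noisy data
stream[500:600] += 2.0 * template              # hidden repeat at i = 500
detections = matched_filter_detect(template, stream)
```

Because detections share the template's waveform, they share its hypocentre and mechanism to first order, which is why the abstract can interpret the magnitude-frequency scaling as pertaining to a restricted set of fault planes.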

  5. Nonlinear site response in medium magnitude earthquakes near Parkfield, California

    USGS Publications Warehouse

    Rubinstein, Justin L.

    2011-01-01

    Careful analysis of strong-motion recordings of 13 medium magnitude earthquakes (3.7 ≤ M ≤ 6.5) in the Parkfield, California, area shows that very modest levels of shaking (approximately 3.5% of the acceleration of gravity) can produce observable changes in site response. Specifically, I observe a drop and subsequent recovery of the resonant frequency at sites that are part of the USGS Parkfield dense seismograph array (UPSAR) and the Turkey Flat array. While further work is necessary to fully eliminate other models, given that these frequency shifts correlate with the strength of shaking at the Turkey Flat array and only appear for the strongest shaking levels at UPSAR, the most plausible explanation for them is that they are a result of nonlinear site response. Assuming this to be true, the observation of nonlinear site response at these modest shaking levels implies that it should also be expected in the region's larger events (the 2003 M 6.5 San Simeon earthquake and the 2004 M 6 Parkfield earthquake).

  6. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, using past damage observations in the country. The Benouar (1994) ground-motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, the M7.3 1980 El-Asnam and the M7.3 1856 Djidjelli events. The calculated return periods of the losses for client market portfolio align with the
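
The damage-grade cost factors quoted above translate into a mean damage ratio by weighting each grade's probability by its rebuilding-cost factor. Only the cost factors come from the text; the damage-grade probability distribution below is invented for illustration.

```python
# EMS-98 damage grades and the rebuilding-cost factors quoted above.
cost_factors = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def mean_damage_ratio(grade_probs):
    """Expected fraction of rebuilding cost, given the probability of a
    building reaching each damage grade (probabilities hypothetical)."""
    return sum(p * cost_factors[g] for g, p in grade_probs.items())

# Toy distribution for one building class at a given shaking intensity;
# the remaining 10% probability corresponds to no damage (grade 0).
probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.10, 5: 0.05}
loss_ratio = mean_damage_ratio(probs)   # fraction of rebuilding cost
```

Multiplying this ratio by the exposed replacement value of a portfolio gives the ground-up loss that validation scenarios like the "as-if" simulations compare against observations.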

  7. Spatiotemporal distribution of low-frequency earthquakes in Southwest Japan: Evidence for fluid migration and magmatic activity

    NASA Astrophysics Data System (ADS)

    Yu, Zhiteng; Zhao, Dapeng; Niu, Xiongwei; Li, Jiabiao

    2018-01-01

    Low-frequency earthquakes (LFEs) in the lower crust and uppermost mantle are widely observed in Southwest Japan, and they occur not only along the subducting Philippine Sea (PHS) slab interface but also beneath active arc volcanoes. The volcanic LFEs are still not well understood because of their limited numbers and less reliable hypocenter locations. In this work, seismic tomography is used to determine detailed three-dimensional (3-D) P- and S-wave velocity (Vp and Vs) models of the crust and upper mantle beneath Southwest Japan, and the obtained 3-D Vp and Vs models are then used to relocate the volcanic LFEs precisely. The results show that the volcanic LFEs can be classified into two types, pipe-like and swarm-like, and both are located in or around zones of low-velocity and high-Poisson's-ratio anomalies in the crust and uppermost mantle beneath the active volcanoes. The pipe-like LFEs may be related to fluid migration from the lower crust or the uppermost mantle, whereas the swarm-like LFEs may be related to local magmatic activities or small magma chambers. The number of LFEs sometimes increases sharply before or after a nearby large crustal earthquake, which may cause cracks and fluid migration. The spatiotemporal distribution of the LFEs may indicate the track of migrating fluids. Compared with the tectonic LFEs along the PHS slab interface, the volcanic LFEs are more sensitive to fluid migration and local magmatic activities. High pore pressures play an important role in triggering both types of LFEs in Southwest Japan.

  8. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolation of a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. The implications of this understanding of the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  9. Source processes of strong earthquakes in the North Tien-Shan region

    NASA Astrophysics Data System (ADS)

    Kulikova, G.; Krueger, F.

    2013-12-01

    The Tien-Shan region attracts the attention of scientists worldwide due to its complexity and tectonic uniqueness. A series of very strong destructive earthquakes occurred in Tien-Shan at the turn of the 19th and 20th centuries. Such large intraplate earthquakes are rare in seismology, which increases the interest in the Tien-Shan region. The presented study focuses on the source processes of large earthquakes in Tien-Shan. The amount of seismic data is limited for those early times. In 1889, when a major earthquake occurred in Tien-Shan, seismic instruments were installed in very few locations in the world, and these analog records have not survived. Although around a hundred seismic stations were operating worldwide at the beginning of the 20th century, it is not always possible to get high-quality analog seismograms. Digitizing seismograms is a very important step in the work with analog seismic records. While working with historical seismic records one has to take into account all the aspects and uncertainties of manual digitizing and the lack of accurate timing and instrument characteristics. In this study, we develop an easy-to-handle and fast digitization program on the basis of already existing software, which allows us to speed up the digitizing process and to account for all the recording-system uncertainties. Owing to the lack of absolute timing for the historical earthquakes (due to the absence of a universal clock at that time), we used time differences between P and S phases to relocate the earthquakes in North Tien-Shan and the body-wave amplitudes to estimate their magnitudes. Combining our results with geological data, five earthquakes in North Tien-Shan were precisely relocated. The digitizing of records can introduce steps into the seismograms, which makes restitution (removal of instrument response) undesirable.
To avoid the restitution, we simulated historic seismograph recordings with given values for damping and free period of the respective instrument and
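The P-S relocation step rests on a simple relation: assuming homogeneous crustal velocities, the S-P arrival-time difference at a station fixes its distance to the event. A minimal sketch (the velocity values below are illustrative assumptions, not the study's model):

```python
# Sketch: epicentral distance from the S-P arrival-time difference,
# assuming homogeneous crustal velocities (values are illustrative).
VP = 6.0           # P-wave speed, km/s (assumed)
VS = VP / 1.73     # S-wave speed from an assumed Vp/Vs ratio

def distance_from_sp(t_sp: float) -> float:
    """Distance (km) implied by an S-P time difference t_sp (s).

    From d/VS - d/VP = t_sp  =>  d = t_sp * VP*VS / (VP - VS).
    """
    return t_sp * VP * VS / (VP - VS)

print(f"{distance_from_sp(10.0):.1f} km")  # roughly 8 km per second of S-P time
```

With S-P distances from three or more stations, the epicenter follows by triangulation, which is essentially what the relocation exploits when absolute timing is unavailable.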

  10. Composite Earthquake Catalog of the Yellow Sea for Seismic Hazard Studies

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.; LI, Z.; Hao, T.

    2017-12-01

information in the Yellow Sea composite earthquake catalog (YComCat). Since an earthquake catalog plays a critical role in seismic hazard assessment, YComCat provides improved input for reducing uncertainties in seismic hazard estimation.

  11. Uncertainty estimation of Intensity-Duration-Frequency relationships: A regional analysis

    NASA Astrophysics Data System (ADS)

    Mélèse, Victor; Blanchet, Juliette; Molinié, Gilles

    2018-03-01

We propose in this article a regional study of uncertainties in IDF curves derived from point-rainfall maxima. We develop two generalized extreme value models based on the simple scaling assumption, first in the frequentist framework and second in the Bayesian framework. Within the frequentist framework, uncertainties are obtained (i) from the Gaussian density stemming from the asymptotic normality of the maximum likelihood estimator and (ii) from a bootstrap procedure. Within the Bayesian framework, uncertainties are obtained from the posterior densities. We compare these two frameworks on the same database, covering a large region of 100,000 km2 in southern France with contrasting rainfall regimes, in order to draw conclusions that are not specific to the data. The two frameworks are applied to 405 hourly stations with data back to the 1980s, accumulated over durations of 3 h to 120 h. We show that (i) the Bayesian framework is more robust than the frequentist one to the starting point of the estimation procedure, (ii) the posterior and bootstrap densities adjust uncertainty estimates to the data better than the Gaussian density does, and (iii) the bootstrap density gives unreasonable confidence intervals, in particular for return levels associated with large return periods. We therefore recommend the Bayesian framework for computing uncertainty.
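The bootstrap branch of the frequentist framework can be sketched on a synthetic station record; the GEV parameters, sample size, and bootstrap count below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic annual rainfall maxima (mm) standing in for one station's record.
maxima = genextreme.rvs(c=-0.1, loc=50, scale=10, size=60, random_state=rng)

def return_level(sample, T=100):
    """T-year return level from a maximum-likelihood GEV fit."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

# Bootstrap the fit to get an uncertainty band on the return level.
boot = [return_level(rng.choice(maxima, size=maxima.size, replace=True))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr level: {return_level(maxima):.1f} mm, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The Bayesian alternative would replace the bootstrap loop with posterior sampling (e.g. MCMC) over the same GEV likelihood, yielding credible intervals directly from the posterior density.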

  12. Loss estimates for a Puente Hills blind-thrust earthquake in Los Angeles, California

    USGS Publications Warehouse

    Field, E.H.; Seligson, H.A.; Gupta, N.; Gupta, V.; Jordan, T.H.; Campbell, K.W.

    2005-01-01

Based on OpenSHA and HAZUS-MH, we present loss estimates for an earthquake rupture on the recently identified Puente Hills blind-thrust fault beneath Los Angeles. Given a range of possible magnitudes and ground motion models, and presuming a full fault rupture, we estimate the total economic loss to be between $82 and $252 billion. This range is not only considerably higher than a previous estimate of $69 billion, but also implies the event would be the costliest disaster in U.S. history. The analysis has also provided the following predictions: 3,000-18,000 fatalities, 142,000-735,000 displaced households, 42,000-211,000 in need of short-term public shelter, and 30,000-99,000 tons of debris generated. Finally, we show that the choice of ground motion model can be more influential than the earthquake magnitude, and that reducing this epistemic uncertainty (e.g., via model improvement and/or rejection) could reduce the uncertainty of the loss estimates by up to a factor of two. We note that a full Puente Hills fault rupture is a rare event (once every ~3,000 years), and that other seismic sources pose significant risk as well. © 2005, Earthquake Engineering Research Institute.

  13. A seismic hazard uncertainty analysis for the New Madrid seismic zone

    USGS Publications Warehouse

    Cramer, C.H.

    2001-01-01

A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid-sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
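The Monte Carlo sampling of a logic tree can be illustrated with a toy two-branch tree; the branch values, weights, and hazard function below are placeholders, not the New Madrid consensus tree:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logic-tree branches (weights and values are illustrative): a
# characteristic magnitude and a ground-motion-model scale factor.
magnitudes = [7.5] * 3 + [7.8] * 5 + [8.1] * 2        # weights 0.3 / 0.5 / 0.2
gmm_scale  = [0.8] * 25 + [1.0] * 50 + [1.25] * 25    # weights 0.25 / 0.5 / 0.25

def toy_hazard(m, scale, dist_km=50.0):
    """Toy median ground motion: exponential magnitude scaling, 1/R decay."""
    return scale * np.exp(m - 6.0) / dist_km

# Each Monte Carlo realization draws one branch per node of the tree.
samples = [toy_hazard(rng.choice(magnitudes), rng.choice(gmm_scale))
           for _ in range(5000)]
cov = np.std(samples) / np.mean(samples)
print(f"epistemic COV of toy hazard: {cov:.2f}")
```

The COV of the sampled hazard values is the same knowledge-based-uncertainty measure the study maps across the region.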

  14. Intraplate Earthquakes and Deformation within the East Antarctic Craton

    NASA Astrophysics Data System (ADS)

    Lough, A. C.; Wiens, D.; Nyblade, A.

    2017-12-01

The apparent lack of tectonic seismicity within Antarctica has long been discussed. Explanations have ranged from a lack of intraplate stress due to the surrounding spreading ridges and low absolute plate velocity (Sykes, 1978), to the weight of ice sheets increasing the vertical normal stress (Johnston, 1987). The 26-station GAMSEIS/AGAP array deployed in East Antarctica from late 2008 to early 2010 provides the first opportunity to study the intraplate seismicity of the Antarctic interior using regional data. Here we report 27 intraplate tectonic earthquakes that occurred during 2009. Depth determinations, together with their corresponding uncertainty estimates, show that most events originate in the shallow to middle crust, indicating a tectonic and not a cryoseismic origin. The earthquakes are primarily located beneath linear alignments of basins adjacent to the Gamburtsev Subglacial Mountains (GSM) that have been denoted as the East Antarctic rift system (Ferraccioli et al, 2011). The geophysical properties of the `rift' system contrast sharply with those of the GSM and Vostok Subglacial Highlands on either side. Crustal thickness, seismic velocity, and gravity anomalies all indicate large lateral variation in lithospheric properties. We propose the events outline an ancient continental rift, a terrane-boundary feature, or a combination of the two where rifting exploited pre-existing weakness. It is natural to draw parallels between East Antarctica and the St. Lawrence depression where rifting and a collisional suture focus intraplate earthquakes within a craton (Schulte and Mooney, 2005). We quantify the East Antarctic seismicity by developing a frequency-magnitude relation, constraining the lower magnitudes with the 2009 results and the larger magnitudes with 1982-2012 teleseismic seismicity.
East Antarctica and the Canadian Shield show statistically indistinguishable b-values (near 1) and seismicity rates, expressed as the number of events with mb > 4 per
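The frequency-magnitude relation referred to above is the Gutenberg-Richter law, and a b-value near 1 can be estimated from a catalog with Aki's (1965) maximum-likelihood formula. A minimal sketch on a synthetic catalog (the completeness magnitude and catalog are illustrative):

```python
import numpy as np

def aki_b_value(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with a magnitude-binning
    correction dm; use dm=0 for continuous (unbinned) magnitudes."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2))

# Synthetic catalog drawn from a Gutenberg-Richter law with b = 1:
# magnitudes above completeness are exponential with mean log10(e)/b.
rng = np.random.default_rng(2)
mags = 4.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=20000)
print(f"estimated b: {aki_b_value(mags, m_c=4.0, dm=0.0):.2f}")
```

Comparing two regions then reduces to comparing their estimated b-values against each estimate's uncertainty (roughly b/sqrt(N) for N events above completeness).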

  15. Signatures of the seismic source in EMD-based characterization of the 1994 Northridge, California, earthquake recordings

    USGS Publications Warehouse

    Zhang, R.R.; Ma, S.; Hartzell, S.

    2003-01-01

In this article we use empirical mode decomposition (EMD) to characterize the 1994 Northridge, California, earthquake records and investigate the signatures carried over from the source rupture process. Comparison of the current study results with existing source inverse solutions that use traditional data processing suggests that the EMD-based characterization contains information that sheds light on aspects of the earthquake rupture process. We first summarize the fundamentals of the EMD and illustrate its features through the analysis of a hypothetical and a real record. Typically, the Northridge strong-motion records are decomposed into eight or nine intrinsic mode functions (IMFs), each of which emphasizes a different oscillation mode with different amplitude and frequency content. The first IMF has the highest-frequency content; frequency content decreases with an increase in IMF component. With the aid of a finite-fault inversion method, we then examine aspects of the source of the 1994 Northridge earthquake that are reflected in the second to fifth IMF components. This study shows that the second IMF is predominantly wave motion generated near the hypocenter, with high-frequency content that might be related to a large stress drop associated with the initiation of the earthquake. As one progresses from the second to the fifth IMF component, there is a general migration of the source region away from the hypocenter with associated longer-period signals as the rupture propagates. This study suggests that the different IMF components carry information on the earthquake rupture process that is expressed in their different frequency bands.
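The sifting idea behind EMD (subtract the mean of the spline envelopes through the extrema until an oscillatory mode remains) can be sketched on a synthetic two-tone signal. This is a bare-bones illustration without the stopping criteria or edge handling of production EMD codes:

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One EMD sifting pass: subtract the mean of cubic-spline envelopes
    through the local maxima and minima (minimal sketch, no edge handling)."""
    hi = argrelextrema(x, np.greater)[0]
    lo = argrelextrema(x, np.less)[0]
    if len(hi) < 4 or len(lo) < 4:
        return None                      # too few extrema to build envelopes
    upper = CubicSpline(t[hi], x[hi])(t)
    lower = CubicSpline(t[lo], x[lo])(t)
    return x - (upper + lower) / 2

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)  # fast + slow tone
imf1 = signal.copy()
for _ in range(8):                       # a few sifting passes
    nxt = sift_once(imf1, t)
    if nxt is None:
        break
    imf1 = nxt
residual = signal - imf1                 # what remains after the first IMF
```

Here the first IMF recovers (approximately) the 40 Hz tone and the residual the 4 Hz tone; repeating the procedure on the residual would yield the further, lower-frequency IMFs the abstract describes.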

  16. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better

  17. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
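The regularization step can be illustrated on a toy over-parametrized linear system; the matrix sizes, noise level, and "slip patch" below are illustrative stand-ins, not W-phase Green's functions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy over-parametrized linear problem d = G m + noise, standing in for
# the W-phase finite-fault system (G, m_true, noise level are illustrative).
n_data, n_par = 80, 120
G = rng.normal(size=(n_data, n_par))
m_true = np.zeros(n_par)
m_true[30:40] = 1.0                       # a compact "slip patch"
sigma = 0.5                               # known data noise level
d = G @ m_true + rng.normal(scale=sigma, size=n_data)

def tikhonov(G, d, lam):
    """Damped least squares: minimize ||G m - d||^2 + lam^2 ||m||^2."""
    A = G.T @ G + lam**2 * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

# Discrepancy principle by grid search: take the largest damping whose
# residual norm has come down to the expected noise level.
target = sigma * np.sqrt(n_data)
for lam in np.logspace(2, -3, 60):
    m_est = tikhonov(G, d, lam)
    if np.linalg.norm(G @ m_est - d) <= target:
        break
print(f"chosen damping: {lam:.3g}")
```

Choosing the damping this way avoids both over-fitting the noise (too little damping) and smearing out the slip pattern (too much), which is exactly the trade-off an automated real-time inversion has to settle without an analyst.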

  18. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.
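The step from reported intensities to magnitude can be sketched for a single event: given an intensity prediction equation, each observation yields a magnitude estimate, and least squares reduces to their average. The coefficients below are hypothetical placeholders, not the equation derived in the study:

```python
import numpy as np

# Hypothetical intensity-prediction-equation coefficients (illustrative only):
# MMI = c0 + c1*M - c2*log10(R_km)
c0, c1, c2 = 1.5, 1.3, 2.5

def magnitude_from_intensities(mmi, r_km):
    """Least-squares magnitude from intensity observations at distances r_km.

    For this one-parameter linear model, each observation gives
    M_i = (MMI_i - c0 + c2*log10(R_i)) / c1, and the LS solution is their mean.
    """
    mmi, r_km = np.asarray(mmi, float), np.asarray(r_km, float)
    return float(np.mean((mmi - c0 + c2 * np.log10(r_km)) / c1))

m = magnitude_from_intensities([7.0, 6.0, 5.0], [50, 150, 400])
print(f"estimated M: {m:.1f}")
```

The spread of the per-observation estimates (and of estimates across alternative prediction equations) is what drives the magnitude uncertainty of 0.3-0.4 units the abstract reports.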

  19. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006] as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate sized earthquakes of the Gole Larghe fault zone in the Italian Alps where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. 
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow

  20. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    models. We envisage that the ensemble of contributed models will be useful both as a research resource and in the classroom. Locations of earthquakes derived from InSAR data have already been demonstrated to differ significantly from those obtained from global seismic networks (Weston et al., 2011), and the locations obtained by our users will enable us to identify systematic mislocations that are likely due to errors in Earth velocity models used to locate earthquakes. If the tool is incorporated into geophysics, tectonics and/or structural geology classes, in addition to familiarizing students with InSAR and elastic deformation modeling, the spread of different results for each individual earthquake will allow the teaching of concepts such as model uncertainty and non-uniqueness when modeling real scientific data. Additionally, the process students go through to optimize their estimates of fault parameters can easily be tied into teaching about the concepts of forward and inverse problems, which are common in geophysics.

  1. Large Occurrence Patterns of New Zealand Deep Earthquakes: Characterization by Use of a Switching Poisson Model

    NASA Astrophysics Data System (ADS)

    Shaochuan, Lu; Vere-Jones, David

    2011-10-01

    The paper studies the statistical properties of deep earthquakes around North Island, New Zealand. We first evaluate the catalogue coverage and completeness of deep events according to cusum (cumulative sum) statistics and earlier literature. The epicentral, depth, and magnitude distributions of deep earthquakes are then discussed. It is worth noting that strong grouping effects are observed in the epicentral distribution of these deep earthquakes. Also, although the spatial distribution of deep earthquakes does not change, their occurrence frequencies vary from time to time, active in one period, relatively quiescent in another. The depth distribution of deep earthquakes also hardly changes except for events with focal depth less than 100 km. On the basis of spatial concentration we partition deep earthquakes into several groups—the Taupo-Bay of Plenty group, the Taranaki group, and the Cook Strait group. Second-order moment analysis via the two-point correlation function reveals only very small-scale clustering of deep earthquakes, presumably limited to some hot spots only. We also suggest that some models usually used for shallow earthquakes fit deep earthquakes unsatisfactorily. Instead, we propose a switching Poisson model for the occurrence patterns of deep earthquakes. The goodness-of-fit test suggests that the time-varying activity is well characterized by a switching Poisson model. Furthermore, detailed analysis carried out on each deep group by use of switching Poisson models reveals similar time-varying behavior in occurrence frequencies in each group.
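A switching Poisson model alternates between Poisson regimes with different rates. A minimal simulate-and-recover sketch (the rates, dwell probability, and the crude two-rate fit are illustrative, not the fitted New Zealand values):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate monthly counts from a two-state switching Poisson process:
# an "active" regime and a "quiet" regime with occasional switches.
rates = {"active": 8.0, "quiet": 2.0}
state, counts = "active", []
for _ in range(600):
    counts.append(rng.poisson(rates[state]))
    if rng.random() < 0.02:               # occasional regime switch
        state = "quiet" if state == "active" else "active"
counts = np.array(counts)

# Crude two-rate recovery: split months at the overall mean count and
# take the mean on each side as the regime-rate estimate.
split = counts.mean()
lam_hi = counts[counts > split].mean()
lam_lo = counts[counts <= split].mean()
print(f"estimated rates: {lam_hi:.1f} (active), {lam_lo:.1f} (quiet)")
```

A proper fit would estimate the switch times and rates jointly (e.g. by maximum likelihood over a hidden two-state chain), but even this crude split shows the time-varying activity a single-rate Poisson model cannot capture.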

  2. Broadband characteristics of earthquakes recorded during a dome-building eruption at Mount St. Helens, Washington, between October 2004 and May 2005: Chapter 5 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Horton, Stephen P.; Norris, Robert D.; Moran, Seth C.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    From October 2004 to May 2005, the Center for Earthquake Research and Information of the University of Memphis operated two to six broadband seismometers within 5 to 20 km of Mount St. Helens to help monitor recent seismic and volcanic activity. Approximately 57,000 earthquakes identified during the 7-month deployment had a normal magnitude distribution with a mean magnitude of 1.78 and a standard deviation of 0.24 magnitude units. Both the mode and range of earthquake magnitude and the rate of activity varied during the deployment. We examined the time domain and spectral characteristics of two classes of events seen during dome building. These include volcano-tectonic earthquakes and lower-frequency events. Lower-frequency events are further classified into hybrid earthquakes, low-frequency earthquakes, and long-duration volcanic tremor. Hybrid and low-frequency earthquakes showed a continuum of characteristics that varied systematically with time. A progressive loss of high-frequency seismic energy occurred in earthquakes as magma approached and eventually reached the surface. The spectral shape of large and small earthquakes occurring within days of each other did not vary with magnitude. Volcanic tremor events and lower-frequency earthquakes displayed consistent spectral peaks, although higher frequencies were more favorably excited during tremor than earthquakes.

  3. Primary Atomic Frequency Standards at NIST

    PubMed Central

    Sullivan, D. B.; Bergquist, J. C.; Bollinger, J. J.; Drullinger, R. E.; Itano, W. M.; Jefferts, S. R.; Lee, W. D.; Meekhof, D.; Parker, T. E.; Walls, F. L.; Wineland, D. J.

    2001-01-01

    The development of atomic frequency standards at NIST is discussed and three of the key frequency-standard technologies of the current era are described. For each of these technologies, the most recent NIST implementation of the particular type of standard is described in greater detail. The best relative standard uncertainty achieved to date for a NIST frequency standard is 1.5×10−15. The uncertainties of the most recent NIST standards are displayed relative to the uncertainties of atomic frequency standards of several other countries. PMID:27500017

  4. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes within 5 km uncertainty without relying on high-amplitude P-wave arrivals and that produces similar hypocentral locations to methods that locate events by stacking hundreds of LFEs without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families.We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  5. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  6. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
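The Bayesian combination of reports can be sketched in its simplest form: with Gaussian assumptions, independent magnitude estimates fuse by precision weighting. The algorithm names are reused from the abstract, but the numbers and the Gaussian model are invented for illustration, not the actual CDM design:

```python
import numpy as np

# Hypothetical magnitude reports: algorithm -> (estimate, 1-sigma uncertainty).
estimates = {"ElarmS": (6.2, 0.4), "On-site": (5.8, 0.5), "Virtual Seismologist": (6.0, 0.3)}

# Precision-weighted (inverse-variance) Gaussian fusion.
weights = np.array([1 / s**2 for _, s in estimates.values()])
means = np.array([m for m, _ in estimates.values()])
fused_mean = np.sum(weights * means) / np.sum(weights)
fused_std = np.sqrt(1 / np.sum(weights))
print(f"fused M = {fused_mean:.2f} +/- {fused_std:.2f}")
```

Note the fused uncertainty is smaller than any single report's, which is the payoff of combining independent algorithms; the real CDM additionally weighs each report's plausibility against observed shaking.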

  7. Lifestyle factors and social ties associated with the frequency of laughter after the Great East Japan Earthquake: Fukushima Health Management Survey.

    PubMed

    Hirosaki, Mayumi; Ohira, Tetsuya; Yasumura, Seiji; Maeda, Masaharu; Yabe, Hirooki; Harigane, Mayumi; Takahashi, Hideto; Murakami, Michio; Suzuki, Yuriko; Nakano, Hironori; Zhang, Wen; Uemura, Mayu; Abe, Masafumi; Kamiya, Kenji

    2018-03-01

    Although mental health problems such as depression after disasters have been reported, positive psychological factors after disasters have not been examined. Recently, the importance of positive affect to our health has been recognised. We therefore investigated the frequency of laughter and its related factors among residents of evacuation zones after the Great East Japan Earthquake of 2011. In a cross-sectional study on 52,320 participants aged 20 years and older who were included in the Fukushima Health Management Survey in Japan's fiscal year 2012, associations of the frequency of laughter with changes in lifestyle after the disaster, such as a changed work situation, the number of family members, and the number of address changes, and other sociodemographic, psychological, and lifestyle factors were examined using logistic regression analysis. The frequency of laughter was assessed using a single-item question: "How often do you laugh out loud?" The proportion of those who laugh almost every day was 27.1%. Multivariable models adjusted for sociodemographic, psychological, and lifestyle factors demonstrated that an increase in the number of family members and fewer changes of address were significantly associated with a high frequency of laughter. Mental health, regular exercise, and participation in recreational activities were also associated with a high frequency of laughter. Changes in lifestyle factors after the disaster were associated with the frequency of laughter in the evacuation zone. Future longitudinal studies are needed to examine what factors can increase the frequency of laughter.

  8. How did the Canterbury earthquakes affect physiotherapists and physiotherapy services? A qualitative study.

    PubMed

    Mulligan, Hilda; Smith, Catherine M; Ferdinand, Sandy

    2015-03-01

The recent earthquakes in Canterbury, New Zealand ended lives and resulted in disruption to many aspects of life for survivors, including physiotherapists. Physiotherapists often volunteer vital rehabilitation services in the wake of global disasters; however, little is known about how physiotherapists cope with disasters that affect their own communities. The purpose of this study was to investigate how the Canterbury earthquakes affected physiotherapists and physiotherapy services. We used a General Inductive Approach to analyse data obtained from purposively sampled physiotherapists or physiotherapy managers in the Canterbury region. Interviews were audio-recorded and transcribed verbatim. We analysed data from interviews with 27 female and six male participants. We identified four themes: 'A life-changing earthquake' that described how both immediate and on-going events led to our second theme 'Uncertainty'. Uncertainty eroded feelings of resilience, but this was tempered by our third theme 'Giving and receiving support'. Throughout these three themes, we identified a further theme 'Being a physiotherapist'. This theme explains how physiotherapists and physiotherapy services were and still are affected by the Canterbury earthquakes. We recommend that disaster planning occurs at individual, departmental, practice and professional levels. This planning will enable physiotherapists to better cope in the event of a disaster and would help to provide professional bodies with a cohesive set of skills that can be shared with health agencies and rescue organizations. We recommend that the apparently vital skill of listening is explored through further research in order for it to be better accepted as a core physiotherapy skill. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play

    NASA Astrophysics Data System (ADS)

    Schultz, R.; Atkinson, G.; Eaton, D. W.; Gu, Y. J.; Kao, H.

    2018-01-01

    A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (10^4 to 10^5 cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index–modified frequency-magnitude distribution, providing a framework to forecast induced seismicity.
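
    The linear volume scaling and seismogenic-index framework mentioned above can be sketched as follows. The parameter values are illustrative assumptions, not values fitted to the Duvernay data.

```python
def expected_event_count(volume_m3, sigma_index, b_value, m_min):
    """Expected number of induced events with magnitude >= m_min under the
    seismogenic-index model: log10 N = log10(V) + Sigma - b*m.
    Doubling the injected volume V doubles the expected count (linear scaling)."""
    return volume_m3 * 10.0 ** (sigma_index - b_value * m_min)

# Illustrative (not fitted) values: a 5e4 m^3 completion, Sigma = -2, b = 1:
n_m2 = expected_event_count(5e4, sigma_index=-2.0, b_value=1.0, m_min=2.0)
```

    The seismogenic index Sigma absorbs the geological factors the abstract describes: two completions with identical volumes but different Sigma can differ in productivity by orders of magnitude.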

  10. Understanding Earthquake Hazard & Disaster in Himalaya - A Perspective on Earthquake Forecast in Himalayan Region of South Central Tibet

    NASA Astrophysics Data System (ADS)

    Shanker, D.; Paudyal; Singh, H.

    2010-12-01

    characterized by an extremely high annual earthquake frequency as compared to the preceding normal and the following gap episodes; the characteristics of the events in such an episode are causally related with the magnitude and the time of occurrence of the forthcoming earthquake. It is observed here that the shorter the duration of the preparatory time period, the smaller the mainshock, and vice versa. Western Nepal and the adjoining Tibet region have potential for future medium-size earthquakes. Accordingly, it has been estimated here that an earthquake with M 6.5 ± 0.5 may occur at any time from now until December 2011 in Western Nepal, within an area bounded by 29.3°-30.5° N and 81.2°-81.9° E, in the focal depth range 10-30 km.

  11. Integration of paleoseismic data from multiple sites to develop an objective earthquake chronology: Application to the Weber segment of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    DuRoss, Christopher B.; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Lund, William R.

    2011-01-01

    We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7–1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.
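
    The product method described above can be sketched numerically: site PDFs sampled on common time bins are multiplied bin-wise and renormalized, so the narrowest, best-defined PDFs dominate the combined earthquake-time estimate. The Gaussian site PDFs below are hypothetical stand-ins, not the Weber-segment OxCal outputs.

```python
import numpy as np

def combine_site_pdfs(site_pdfs, bin_width):
    """Combine per-site earthquake-time PDFs (sampled on common time bins)
    by taking their bin-wise product and renormalizing to unit area.
    If the site PDFs do not overlap, the product vanishes everywhere."""
    product = np.prod(np.asarray(site_pdfs), axis=0)
    total = product.sum() * bin_width
    if total == 0:
        raise ValueError("site PDFs do not overlap in any time bin")
    return product / total

# Two hypothetical site PDFs on 10-year bins (means 500 and 520 yr BP):
bins = np.arange(0.0, 1000.0, 10.0)
pdf_a = np.exp(-0.5 * ((bins - 500.0) / 50.0) ** 2)
pdf_b = np.exp(-0.5 * ((bins - 520.0) / 80.0) ** 2)
pdf_a /= pdf_a.sum() * 10.0
pdf_b /= pdf_b.sum() * 10.0
combined = combine_site_pdfs([pdf_a, pdf_b], bin_width=10.0)
```

    The combined PDF peaks between the two site means but closer to the narrower PDF, which is exactly the weighting behavior the abstract describes.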

  13. The history of late holocene surface-faulting earthquakes on the central segments of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Duross, Christopher; Personius, Stephen; Olig, Susan S; Crone, Anthony J.; Hylland, Michael D.; Lund, William R; Schwartz, David P.

    2017-01-01

    The Wasatch fault zone (WFZ)—Utah's longest and most active normal fault—forms a prominent eastern boundary to the Basin and Range Province in northern Utah. To provide paleoseismic data for a Wasatch Front regional earthquake forecast, we synthesized paleoseismic data to define the timing and displacements of late Holocene surface-faulting earthquakes on the central five segments of the WFZ. Our analysis yields revised histories of large (M ~7) surface-faulting earthquakes on the segments, as well as estimates of earthquake recurrence and vertical slip rate. We constrain the timing of four to six earthquakes on each of the central segments, which together yield a history of at least 24 surface-faulting earthquakes since ~6 ka. Using earthquake data for each segment, inter-event recurrence intervals range from about 0.6 to 2.5 kyr, with a mean of 1.2 kyr. Mean recurrence, based on closed seismic intervals, is ~1.1–1.3 kyr per segment, and when combined with mean vertical displacements per segment of 1.7–2.6 m, yields mean vertical slip rates of 1.3–2.0 mm/yr per segment. These data refine the late Holocene behavior of the central WFZ; however, a significant source of uncertainty is whether structural complexities that define the segments of the WFZ act as hard barriers to ruptures propagating along the fault. Thus, we evaluate fault rupture models including both single-segment and multi-segment ruptures, and define 3–17-km-wide spatial uncertainties in the segment boundaries. These alternative rupture models and segment-boundary zones honor the WFZ paleoseismic data, take into account the spatial and temporal limitations of paleoseismic data, and allow for complex ruptures such as partial-segment and spillover ruptures. Our data and analyses improve our understanding of the complexities in normal-faulting earthquake behavior and provide geological inputs for regional earthquake-probability and seismic hazard assessments.

  14. Hydraulic Fracturing Completion Volume is Associated with Induced Earthquake Productivity in the Duvernay Play

    NASA Astrophysics Data System (ADS)

    Schultz, R.; Atkinson, G. M.; Eaton, D. W. S.; Gu, Y. J.; Kao, H.

    2017-12-01

    A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 as a result of hydraulic fracturing completions in the Duvernay Formation. Using a newly compiled hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We find that induced earthquakes are associated with pad completions that used larger injection volumes (10^4 to 10^5 m^3) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have limited or insignificant correlation with the seismic response. Further findings suggest that geological susceptibilities play a prominent role in seismic productivity, as evidenced by spatial correlations in the seismicity patterns. Together, volume and geological susceptibilities account for 96% of the variability in the induced earthquake rate near Fox Creek. We suggest this result is fit by a modified Gutenberg-Richter earthquake frequency-magnitude distribution which provides a conceptual framework with which to forecast induced seismicity hazard.

  15. Low-frequency earthquakes reveal punctuated slow slip on the deep extent of the Alpine Fault, New Zealand

    USGS Publications Warehouse

    Chamberlain, Calum J.; Shelly, David R.; Townend, John; Stern, T.A.

    2014-01-01

    We present the first evidence of low-frequency earthquakes (LFEs) associated with the deep extension of the transpressional Alpine Fault beneath the central Southern Alps of New Zealand. Our database comprises a temporally continuous 36-month-long catalog of 8760 LFEs within 14 families. To generate this catalog, we first identify 14 primary template LFEs within known periods of seismic tremor and use these templates to detect similar events in an iterative stacking and cross-correlation routine. The hypocentres of 12 of the 14 LFE families lie within 10 km of the inferred location of the Alpine Fault at depths of approximately 20–30 km, in a zone of high P-wave attenuation, low P-wave speeds, and high seismic reflectivity. The LFE catalog consists of persistent, discrete events punctuated by swarm-like bursts of activity associated with previously and newly identified tremor periods. The magnitudes of the LFEs range between ML −0.8 and ML 1.8, with an average of ML 0.5. We find that the frequency-magnitude distribution of the LFE catalog both as a whole and within individual families is not consistent with a power law, but that individual families' frequency-amplitude distributions approximate an exponential relationship, suggestive of a characteristic length-scale of failure. We interpret this LFE activity to represent quasi-continuous slip on the deep extent of the Alpine Fault, with LFEs highlighting asperities within an otherwise steadily creeping region of the fault.
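
    The detection step described above, scanning continuous data with template waveforms, can be sketched as a single-channel matched filter. Real LFE catalogs stack correlations over many stations and channels and iterate the templates, so this is a simplified illustration with synthetic data.

```python
import numpy as np

def matched_filter_detect(template, data, threshold):
    """Slide a waveform template along continuous data and return sample
    indices (and correlation values) where the normalized cross-correlation
    exceeds `threshold`. Toy single-channel version of template matching."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    detections, cc_values = [], []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        w = w - w.mean()
        denom = t_norm * np.linalg.norm(w)
        cc = float(t @ w / denom) if denom > 0 else 0.0
        if cc >= threshold:
            detections.append(i)
            cc_values.append(cc)
    return detections, cc_values

# Synthetic demo: bury one copy of the template in low-level noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, 8.0 * np.pi, 64))
data = 0.05 * rng.standard_normal(1000)
data[100:164] += template
dets, ccs = matched_filter_detect(template, data, threshold=0.8)
```

    The embedded event is recovered near sample 100 with a correlation close to 1; production codes compute the correlation in the frequency domain for speed.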

  16. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    NASA Astrophysics Data System (ADS)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many parts of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single faults or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north
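
    The slip-rate constraint described above amounts to balancing a fault's moment budget against an a priori MFD shape. A minimal single-fault version, ignoring fault-to-fault ruptures and using an assumed shear modulus and geometry, might look like:

```python
import numpy as np

MU = 3.0e10  # shear modulus, Pa (common crustal assumption)

def gr_rates_from_slip_rate(area_m2, slip_rate_m_yr, b_value, mag_bins):
    """Distribute a fault's geologic moment-rate budget over a
    Gutenberg-Richter magnitude-frequency distribution, returning annual
    rates per magnitude bin. Simplified single-fault sketch: fault-network
    inversions also allocate moment across fault-to-fault ruptures."""
    moment_rate = MU * area_m2 * slip_rate_m_yr            # N*m per year
    m0 = 10.0 ** (1.5 * np.asarray(mag_bins) + 9.05)       # Hanks-Kanamori moment, N*m
    rel = 10.0 ** (-b_value * np.asarray(mag_bins))        # GR relative rates
    scale = moment_rate / np.sum(rel * m0)                 # enforce the moment budget
    return scale * rel

# Hypothetical normal-fault segment: 50 km x 15 km, 3 mm/yr slip rate, b = 1.
mag_bins = np.arange(6.0, 7.6, 0.1)
rates = gr_rates_from_slip_rate(50e3 * 15e3, 0.003, 1.0, mag_bins)
```

    By construction the summed moment of the modeled rates reproduces the geologic moment rate, which is the sense in which the slip rate "determines" the earthquake rates.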

  17. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    NASA Astrophysics Data System (ADS)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such a close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity

  18. Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.

    2015-12-01

    Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Typically, because stress drop increases with depth, ground motion prediction equations model shallower events as having smaller ground motions, when considering the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths, with respect to natural eastern US earthquakes, and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes simply be understood by scaling our known source-ground motion relations to account for the shallow depth or potentially smaller stress drops of these induced earthquakes, or is there an inherently different mechanism in play for these induced earthquakes? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in the NGA-West2 database (primarily from California) as well as the NGA-East database, which covers the central and eastern United States and Canada.

  19. Source parameters of the 1999 Osa peninsula (Costa Rica) earthquake sequence from spectral ratios analysis

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Harrington, R. M.; Kirkpatrick, J. D.

    2017-12-01

    Many observations suggest that duration and size scale in a self-similar way for most earthquakes. Deviations from the expected scaling would suggest that some physical feature on the fault surface influences the speed of rupture differently at different length scales. Determining whether differences in scaling exist between small and large earthquakes is complicated by the fact that duration estimates of small earthquakes are often distorted by travel-path and site effects. However, when carefully estimated, scaling relationships between earthquakes may provide important clues about fault geometry and the spatial scales over which it affects fault rupture speed. The Mw 6.9, 20 August 1999, Quepos earthquake occurred on the plate boundary thrust fault along southern Costa Rica margin where the subducting seafloor is cut by numerous normal faults. The mainshock and aftershock sequence were recorded by land and (partially by) ocean bottom (OBS) seismic arrays deployed as part of the CRSEIZE experiment. Here we investigate the size-duration scaling of the mainshock and relocated aftershocks on the plate boundary to determine if a change in scaling exists that is consistent with a change in fault surface geometry at a specific length scale. We use waveforms from 5 short-period land stations and 12 broadband OBS stations to estimate corner frequencies (the inverse of duration) and seismic moment for several aftershocks on the plate interface. We first use spectral amplitudes of single events to estimate corner frequencies and seismic moments. We then adopt a spectral ratio method to correct for non-source-related effects and refine the corner frequency estimation. For the spectral ratio approach, we use pairs of earthquakes with similar waveforms (correlation coefficient > 0.7), with waveform similarity implying event co-location. Preliminary results from single spectra show similar corner frequency values among events of 0.5 ≤ M ≤ 3.6, suggesting a decrease in
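
    The spectral-ratio approach described above can be sketched for the Brune omega-squared source model: for co-located events, travel-path and site terms cancel in the ratio, leaving only the two corner frequencies and the moment ratio. The grid-search fit below on noise-free synthetic data is a simplified illustration, not the authors' exact procedure.

```python
import numpy as np

def brune(f, omega0, fc):
    """Brune omega-squared source spectrum: plateau omega0, corner fc."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_spectral_ratio(freqs, ratio, fc_grid):
    """Fit the ratio of two Brune spectra to an observed spectral ratio of
    co-located events. The moment ratio is taken from the low-frequency
    plateau; the two corner frequencies come from a log-domain grid search."""
    omega_ratio = ratio[0]  # plateau level at the lowest frequency
    best = None
    for fc1 in fc_grid:          # corner frequency of the larger event
        for fc2 in fc_grid:      # corner frequency of the smaller event
            model = omega_ratio * (1 + (freqs / fc2) ** 2) / (1 + (freqs / fc1) ** 2)
            misfit = np.sum((np.log(model) - np.log(ratio)) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, fc1, fc2)
    return best[1], best[2]

# Synthetic ratio: large event fc = 2 Hz over small event fc = 8 Hz.
freqs = np.logspace(-1, 1.7, 200)  # ~0.1 to ~50 Hz
true_ratio = brune(freqs, 1000.0, 2.0) / brune(freqs, 10.0, 8.0)
fc_big, fc_small = fit_spectral_ratio(freqs, true_ratio, np.array([1.0, 2.0, 4.0, 8.0, 16.0]))
```

    Corner frequency is then the inverse of duration, so departures of fc-versus-moment scaling from self-similarity are what the study looks for.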

  20. Evidences of landslide earthquake triggering due to self-excitation process

    NASA Astrophysics Data System (ADS)

    Bozzano, F.; Lenti, L.; Martino, Salvatore; Paciello, A.; Scarascia Mugnozza, G.

    2011-06-01

    The basin-like setting of stiff bedrock combined with pre-existing landslide masses can contribute to seismic amplifications in a wide frequency range (0-10 Hz) and induce a self-excitation process responsible for earthquake-triggered landsliding. Here, the self-excitation process is proposed to justify the far-field seismic trigger of the Cerda landslide (Sicily, Italy), which was reactivated by the 6th September 2002 Palermo earthquake (Ms = 5.4), about 50 km from the epicentre. The landslide caused damage to farm houses, roads and aqueducts close to the village of Cerda, and involved about 40 × 10^6 m^3 of clay shales; the first ground cracks due to the landslide movement formed about 30 min after the main shock. A stress-strain dynamic numerical modelling, performed with the FDM code FLAC 5.0, supports the notion that the combination of local geological setting and earthquake frequency content played a fundamental role in the landslide reactivation. Since accelerometric records of the triggering event are not available, dynamic equivalent inputs have been used for the numerical modelling. These inputs can be regarded as representative of the local ground shaking, having a PGA value up to 0.2 m/s^2, which is the maximum expected in 475 years according to the Italian seismic hazard maps. A 2D numerical modelling of the seismic wave propagation in the Cerda landslide area was also performed; it pointed out amplification effects due to both the structural setting of the stiff bedrock (at about 1 Hz) and the pre-existing landslide mass (in the range 3-6 Hz). The frequency peaks of the resulting amplification functions (A(f)) fit well the H/V spectral ratios from ambient noise and the H/H spectral ratios to a reference station from earthquake records, obtained by in situ velocimetric measurements. Moreover, the Fourier spectra of earthquake accelerometric records, whose source and magnitude are consistent with the triggering event, show a main peak at about 1 Hz.

  1. The free oscillations of the earth excited by three strongest earthquakes of the past decade according to deformation observations

    NASA Astrophysics Data System (ADS)

    Milyukov, V. K.; Vinogradov, M. P.; Mironov, A. P.; Myasnikov, A. V.; Perelygin, N. A.

    2015-03-01

    Based on the deformation data provided by the Baksan laser interferometer-strainmeter measurements, the free oscillations of the Earth (FOE) excited by the three strongest earthquakes of the past decade are analyzed. These seismic events include the Great Sumatra-Andaman earthquake that occurred in 2004 in the Indian Ocean, the Maule earthquake of 2010 in Chile, and the Great Tohoku earthquake of March 2011 in Japan. The frequency-time structure of the free oscillations is studied, and the pattern of interaction between the modes with close frequencies (cross-coupling effect) is explored. For each earthquake, the correspondence of the observed FOE modes to the predictions of the PREM model is investigated. A reliable, consistent shift towards higher frequencies of the toroidal modes with angular degree l = 12-19 is revealed. The maximal energy density of the toroidal oscillations is concentrated in the upper mantle of the Earth. Therefore, the established effect corresponds to shear-wave velocities in the upper mantle that are higher than those predicted by the PREM model.

  2. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises not only to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  3. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation in t* with frequency most limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation effects is the limiting factor in the resolution of source parameters. -from Author

  4. Central US earthquake catalog for hazard maps of Memphis, Tennessee

    USGS Publications Warehouse

    Wheeler, R.L.; Mueller, C.S.

    2001-01-01

    An updated version of the catalog that was used for the current national probabilistic seismic-hazard maps would suffice for production of large-scale hazard maps of the Memphis urban area. Deaggregation maps provide guidance as to the area that a catalog for calculating Memphis hazard should cover. For the future, the Nuttli and local network catalogs could be examined for earthquakes not presently included in the catalog. Additional work on aftershock removal might reduce hazard uncertainty. Graphs of decadal and annual earthquake rates suggest completeness at and above magnitude 3 for the last three or four decades. Any additional work on completeness should consider the effects of rapid, local population changes during the Nation's westward expansion. © 2001 Elsevier Science B.V. All rights reserved.
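
    Completeness-dependent rate estimates of the kind discussed above pair a completeness magnitude with a maximum-likelihood b-value. A minimal sketch using Aki's estimator with Utsu's bin-width correction follows; the demonstration magnitudes are synthetic, not the central US catalog.

```python
import numpy as np

def aki_b_value(mags, m_c, dm=0.1):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator) for
    events at or above the completeness magnitude m_c. The (m_c - dm/2)
    term is Utsu's correction for magnitudes binned at width dm; pass
    dm=0.0 for continuous (unbinned) magnitudes."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    mean_excess = m.mean() - (m_c - dm / 2.0)
    return np.log10(np.e) / mean_excess

# Synthetic check: continuous magnitudes drawn from a GR distribution with b = 1.
rng = np.random.default_rng(1)
synthetic = 3.0 + rng.exponential(np.log10(np.e), 200000)
b_est = aki_b_value(synthetic, m_c=3.0, dm=0.0)
```

    An underestimated completeness magnitude biases b low, which is why the abstract's decadal-rate completeness check matters before any rate or b-value is carried into a hazard map.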

  5. Geomorphic legacy of medieval Himalayan earthquakes in the Pokhara Valley

    NASA Astrophysics Data System (ADS)

    Schwanghart, Wolfgang; Bernhardt, Anne; Stolle, Amelie; Hoelzmann, Philipp; Adhikari, Basanta R.; Andermann, Christoff; Tofelde, Stefanie; Merchel, Silke; Rugel, Georg; Fort, Monique; Korup, Oliver

    2016-04-01

    The Himalayas and their foreland belong to the world's most earthquake-prone regions. With millions of people at risk from severe ground shaking and associated damages, reliable data on the spatial and temporal occurrence of past major earthquakes is urgently needed to inform seismic risk analysis. Beyond the instrumental record such information has been largely based on historical accounts and trench studies. Written records provide evidence for damages and fatalities, yet are difficult to interpret when derived from the far-field. Trench studies, in turn, offer information on rupture histories, lengths and displacements along faults but involve high chronological uncertainties and fail to record earthquakes that do not rupture the surface. Thus, additional and independent information is required for developing reliable earthquake histories. Here, we present exceptionally well-dated evidence of catastrophic valley infill in the Pokhara Valley, Nepal. Bayesian calibration of radiocarbon dates from peat beds, plant macrofossils, and humic silts in fine-grained tributary sediments yields a robust age distribution that matches the timing of nearby M>8 earthquakes in ~1100, 1255, and 1344 AD. The upstream dip of tributary valley fills and X-ray fluorescence spectrometry of their provenance rule out local sediment sources. Instead, geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation and debris flows that had plugged several tributaries with tens of meters of calcareous sediment from the Annapurna Massif >60 km away. The landscape-changing consequences of past large Himalayan earthquakes have so far been elusive. Catastrophic aggradation in the wake of two historically documented medieval earthquakes and one inferred from trench studies underscores that Himalayan valley fills should be considered as potential archives of past earthquakes. Such valley fills are pervasive in the Lesser Himalaya though high erosion rates reduce

  6. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account probabilistic seismic hazard for Tehran city as the main constraints. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could enhance run time for full probabilistic earthquake studies like seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
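
    The catalogue-simulation step described above can be sketched as Monte Carlo draws of occurrence times, epicentres, and truncated Gutenberg-Richter magnitudes. All parameter values below are hypothetical placeholders, not the northern Iran source model.

```python
import numpy as np

def simulate_catalog(years, annual_rate, b_value, m_min, m_max, region_bounds, seed=0):
    """Draw a synthetic earthquake catalogue: Poisson occurrence in time,
    uniform epicentres in a lon/lat box, and truncated Gutenberg-Richter
    magnitudes via inverse-transform sampling. A toy stand-in for the
    seismogenic-source simulation (no focal depths or fault geometry)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(annual_rate * years)
    times = rng.uniform(0.0, years, n)
    lon = rng.uniform(region_bounds[0], region_bounds[1], n)
    lat = rng.uniform(region_bounds[2], region_bounds[3], n)
    # Truncated GR CDF: F(m) = (1 - 10^(-b(m - m_min))) / (1 - 10^(-b(m_max - m_min)))
    u = rng.uniform(0.0, 1.0, n)
    c = 1.0 - 10.0 ** (-b_value * (m_max - m_min))
    mags = m_min - np.log10(1.0 - u * c) / b_value
    return times, lon, lat, mags

# Hypothetical region and rate chosen to mimic the scale in the abstract
# (~84,000 events over 10,000 years):
times, lon, lat, mags = simulate_catalog(10000, 8.4, 1.0, 4.5, 7.5, (50.0, 53.0, 34.0, 37.0))
```

    The mixed-integer reduction step would then select a weighted subset of these scenarios whose hazard curves match those of the full catalogue.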

  7. Seismic density and its relationship with strong historical earthquakes around Beijing, China

    NASA Astrophysics Data System (ADS)

    WANG, J.

    2012-12-01

    Beijing is the capital of China. Regional earthquake observation networks have been operating around Beijing (115.0°-119.3°E, 38.5°-41.0°N) since 1966. From 1970 to 2009, a total of 20,281 earthquakes were recorded. The accumulation of these data raised a fundamental question: what are the characteristics and the physical nature of small earthquakes? To answer this question, we must use a quantitative method to characterize the seismic pattern. Here we introduce a new concept, seismic density. The method emphasizes the accuracy of the epicentre location, but no correction is made for the focal depth, because its uncertainty is in any case greater than that of the epicentre. On the basis of these instrumental data, seismic patterns were calculated. The results show that seismic density is the main character of the seismic pattern. The temporal distribution of small earthquakes in each seismic density zone is analyzed quantitatively, and two main types of seismic density are distinguished statistically. Besides the instrumental data, abundant information on historical earthquakes around Beijing is found in the archives: a total of 15 strong historical earthquakes (M >= 6), the earliest of which occurred in September 294. A comparison revealed an interesting phenomenon: the epicentres of accurately located strong historical earthquakes correspond to one of the seismic density types, whose temporal distribution is almost stationary. This correspondence means that small earthquakes still cluster near the epicentres of historical earthquakes, even those that occurred several hundred years ago. The mechanics of this relationship is analyzed: strong historical earthquakes and the seismic density of small earthquakes together reveal the persistent weakness of the local crustal medium. We used this relationship to improve the locations of strong historical earthquakes.

  8. Continuous estimates on the earthquake early warning magnitude by use of the near-field acceleration records

    NASA Astrophysics Data System (ADS)

    Li, Jun; Jin, Xing; Wei, Yongxiang; Zhang, Hongcai

    2013-10-01

    In this article, seismic records from Japan's KiK-net are selected to measure the acceleration, displacement, and effective peak acceleration of each record within a certain time after the P-wave arrival; a continuous estimate of the earthquake early warning magnitude is then obtained through statistical analysis, and records of the Wenchuan earthquake are used to check the method. The results show that the reliability of the early warning magnitude increases continuously as seismic information accumulates. The largest residual occurs when acceleration is used to fit magnitude, which may be caused by the rich high-frequency components and large scatter of peak values in acceleration records. The influence of the high-frequency components can be effectively reduced by using the effective peak acceleration and peak displacement, which markedly reduces the dispersion of the magnitude estimates, although peak displacement is easily affected by long-period drift. Among the components, the residual-enlargement phenomenon is least evident in the vertical direction, so this article recommends the vertical effective peak acceleration as the preferred quantity for estimating early warning magnitude. Applying the method to strong-motion records of the Wenchuan earthquake shows that it quickly, stably, and accurately estimates the early warning magnitude of that earthquake, demonstrating that the method is fully applicable to earthquake early warning.

  9. Probabilities of Earthquake Occurrences along the Sumatra-Andaman Subduction Zone

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi

    2017-03-01

    Earthquake activity along the Sumatra-Andaman Subduction Zone (SASZ) was characterized using the derived frequency-magnitude distribution in terms of (i) the most probable maximum magnitudes, (ii) return periods and (iii) probabilities of earthquake occurrence. The northern segment of the SASZ, along the western coast of Myanmar to southern Nicobar, was found to be capable of generating an earthquake of magnitude 6.1-6.4 Mw in the next 30-50 years, whilst the southern segment, offshore of the northwestern and western parts of Sumatra (defined as a high-hazard region), had short recurrence intervals of 6-12 and 10-30 years for 6.0 and 7.0 Mw earthquakes, respectively, compared with the other regions. Throughout the area along the SASZ, there is a 70% to almost 100% probability that an earthquake of up to Mw 6.0 will be generated in the next 50 years, whilst the northern segment has less than a 50% chance of producing a 7.0 Mw earthquake in the next 50 years. Although Rangoon was found to have the lowest hazard among the major cities in the vicinity of the SASZ, there is a 90% chance of a 6.0 Mw earthquake there in the next 50 years. Effective seismic-hazard mitigation plans should therefore be developed.
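
    Probability figures of this kind follow from the standard Poisson occurrence model, in which the chance of at least one event in an exposure window is derived from the return period; a minimal generic sketch (not the authors' code):

```python
import math

def occurrence_prob(return_period_yr, window_yr):
    """P(at least one event in the window) for a Poisson process whose
    mean recurrence interval is return_period_yr."""
    return 1.0 - math.exp(-window_yr / return_period_yr)

# e.g. a 30-yr return period gives roughly an 81% chance in 50 years
```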

  10. Spatial variations in the frequency-magnitude distribution of earthquakes at Soufriere Hills Volcano, Montserrat, West Indies

    USGS Publications Warehouse

    Power, J.A.; Wyss, M.; Latchman, J.L.

    1998-01-01

    The frequency-magnitude distribution of earthquakes, measured by the b-value, is determined as a function of space beneath Soufriere Hills Volcano, Montserrat, from data recorded between August 1, 1995 and March 31, 1996. A volume of anomalously high b-values (b > 3.0) with a 1.5 km radius is imaged at depths of 0 and 1.5 km beneath English's Crater and Chance's Peak. This high b-value anomaly extends southwest to Gage's Soufriere. At depths greater than 2.5 km, volumes of comparatively low b-values (b ≈ 1) are found beneath St. George's Hill, Windy Hill, and, below 2.5 km depth, to the south of English's Crater. We speculate that the depth of high b-value anomalies under volcanoes may be a function of silica content, modified by additional factors, with the most siliceous volcanoes having these highly fractured or high-pore-pressure volumes at the shallowest depths. Copyright 1998 by the American Geophysical Union.
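
    b-values such as those mapped here are commonly estimated with the Aki/Utsu maximum-likelihood formula; a minimal sketch (the binning width dm and the sample magnitudes are illustrative assumptions):

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for events at or above the
    completeness magnitude m_c, corrected for magnitude binning width dm."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

    In spatial mapping studies the formula is simply evaluated on the subset of events falling in each sampling volume.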

  11. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos, designed for school classrooms, promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within the areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  12. Debris flow susceptibility assessment after the 2008 Wenchuan earthquake

    NASA Astrophysics Data System (ADS)

    Fan, Xuanmei; van Westen, Cees; Tang, Chenxiao; Tang, Chuan

    2014-05-01

    Due to the tremendous amount of loose material left by landslides that occurred during the Wenchuan earthquake, the frequency and magnitude of debris flows have increased immensely, causing many casualties and economic losses. This study attempts to assess post-earthquake debris flow susceptibility based on catchment units in Wenchuan County, one of the counties most severely damaged by the earthquake. The post-earthquake debris flow inventory was created by remote sensing image interpretation and field survey. According to our knowledge of the field, several relevant factors were selected as indicators of post-earthquake debris flow occurrence, including distance to the fault surface rupture, peak ground acceleration (PGA), coseismic landslide density, rainfall data, internal relief, slope, drainage density, stream steepness index, and existing mitigation works. These indicators were then used as inputs to a heuristic model developed by adapting the Spatial Multi-Criteria Evaluation (SMCE) method. The relative importance of the indicators was evaluated according to their contributions to the debris flow events that have occurred since the earthquake. The ultimate goal of this study is to estimate the relative likelihood of debris flow occurrence in each catchment, and to use this result, together with elements-at-risk and vulnerability information, to assess the changing risk of the most susceptible catchments.
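
    At its simplest, an SMCE-style heuristic score is a weighted linear combination of normalized indicator values per catchment; a generic sketch (the indicator values and weights below are invented for illustration, not the study's calibrated weights):

```python
def smce_score(indicators, weights):
    """Weighted linear combination of normalized (0-1) indicator values,
    as in a simple spatial multi-criteria evaluation. Weights sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(v * w for v, w in zip(indicators, weights))

# e.g. three indicators (landslide density, PGA, internal relief),
# each already rescaled to 0-1 for one catchment:
score = smce_score([0.5, 1.0, 0.0], [0.2, 0.5, 0.3])
```

    Ranking catchments by this score gives the relative susceptibility used downstream with elements-at-risk data.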

  13. Self-organized criticality in complex systems: Applicability to the interoccurrent and recurrent statistical behavior of earthquakes

    NASA Astrophysics Data System (ADS)

    Abaimov, Sergey G.

    The concept of self-organized criticality is associated with scale-invariant, fractal behavior; this concept is also applicable to earthquake systems. It is known that the interoccurrent frequency-size distribution of earthquakes in a region is scale-invariant and obeys the Gutenberg-Richter power-law dependence. Also, the interoccurrent time-interval distribution is known to obey Poissonian statistics, excluding aftershocks. However, to estimate the hazard risk for a region it is necessary to know also the recurrent behavior of earthquakes at a given point on a fault. This behavior has been investigated in the literature; however, major questions remain unresolved. The reason is the small number of earthquakes in observed sequences. To overcome this difficulty, this research utilizes numerical simulations of a slider-block model and a sand-pile model. Also, experimental observations of creep events on the creeping section of the San Andreas fault are processed, and sequences of up to 100 events are studied. Then the recurrent behavior of earthquakes at a given point on a fault, or at a given fault, is investigated. It is shown that both the recurrent frequency-size and the time-interval behaviors of earthquakes obey the Weibull distribution.
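
    The practical consequence of Weibull recurrence, as opposed to the memoryless Poisson case, is that the conditional probability of the next event grows with elapsed quiet time when the shape parameter exceeds 1; a generic sketch (parameter values below are illustrative):

```python
import math

def weibull_sf(t, eta, k):
    """Weibull survival function: P(recurrence interval > t),
    with scale eta and shape k."""
    return math.exp(-((t / eta) ** k))

def conditional_prob(elapsed, window, eta, k):
    """P(event within `window` | no event for `elapsed`), for Weibull
    recurrence intervals."""
    s0 = weibull_sf(elapsed, eta, k)
    return (s0 - weibull_sf(elapsed + window, eta, k)) / s0
```

    For k = 1 this reduces to the memoryless exponential case; for k > 1 (quasi-periodic recurrence) the hazard increases with time since the last event.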

  14. Ground Motion Characteristics of Induced Earthquakes in Central North America

    NASA Astrophysics Data System (ADS)

    Atkinson, G. M.; Assatourians, K.; Novakovic, M.

    2017-12-01

    The ground motion characteristics of induced earthquakes in central North America are investigated based on empirical analysis of a compiled database of 4,000,000 digital ground-motion records from events in induced-seismicity regions (especially Oklahoma). Ground-motion amplitudes are characterized non-parametrically by computing median amplitudes and their variability in magnitude-distance bins. We also use inversion techniques to solve for regional source, attenuation and site response effects. Ground motion models are used to interpret the observations and compare the source and attenuation attributes of induced earthquakes to those of their natural counterparts. A significant conclusion is that the stress parameter controlling the strength of high-frequency radiation is similar for induced earthquakes (focal depths of about 5 km) and shallow natural earthquakes (depths less than about 5 km). By contrast, deeper natural earthquakes (depths of about 10 km or more) have stronger high-frequency ground motions. At distances close to the epicenter, a greater focal depth (which increases distance from the hypocenter) counterbalances the effects of a larger stress parameter, resulting in motions of similar strength close to the epicenter, regardless of event depth. The felt effects of induced versus natural earthquakes are also investigated using USGS "Did You Feel It?" reports; 400,000 reports from natural events and 100,000 reports from induced events are considered. The felt reports confirm the trends that we expect based on ground-motion modeling, considering the offsetting effects of the stress parameter versus focal depth in controlling the strength of motions near the epicenter. Specifically, felt intensity for a given magnitude is similar near the epicenter, on average, for all event types and depths. At distances more than 10 km from the epicenter, deeper events are felt more strongly than shallow events.
These ground-motion attributes imply that the induced-seismicity hazard is most critical for facilities in

  15. 25 April 2015 Gorkha Earthquake in Nepal Himalaya (Part 2)

    NASA Astrophysics Data System (ADS)

    Rao, N. Purnachandra; Burgmann, Roland; Mugnier, Jean-Louis; Gahalaut, Vineet; Pandey, Anand

    2017-06-01

    The response from the geosciences community working on the Himalaya in general, and the 2015 Nepal earthquakes in particular, was overwhelming, and after a rigorous review process thirteen papers were selected and published in Part 1. We were still left with a few good papers, which are being brought out as Part 2 of the special issue. In the opening article, Jean-Louis Mugnier and colleagues provide a structural geological perspective on the 25 April 2015 Gorkha earthquake and highlight the role of segmentation in generating the Himalayan mega-thrusts; they infer segmentation by stable barriers in the HT that define barrier-type earthquake families. In another interesting piece of work, Pandey and colleagues map the crustal structure across the earthquake volume using a receiver-function approach and infer a 5-km-thick low-velocity layer that connects to the MHT ramp; they are also able to correlate the rupture termination with the highest point of coseismic uplift. The last paper, by Shen et al., highlights the usefulness of the InSAR technique in mapping the coseismic slip distribution of the 25 April 2015 Gorkha earthquake; they infer a low stress drop and corner frequency which, coupled with hybrid modeling, explain the low level of slip heterogeneity and the frequency content of ground motion. We compliment the Journal of Asian Earth Sciences for bringing out the two volumes and hope that these efforts have made a distinct impact on furthering our understanding of seismogenesis in the Himalaya using the very latest data sets.

  16. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto was activated by the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ~ 1. There is also good agreement with the OU law with p ~ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
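
    The GR/OU combination used in such calculations amounts to integrating the Omori-Utsu rate over the forecast window and rescaling by the Gutenberg-Richter magnitude term; a minimal sketch with illustrative parameter values (not those fitted in the study):

```python
import math

def expected_count(k, c, p, t1, t2):
    """Expected number of aftershocks between t1 and t2 days after the
    mainshock, integrating the Omori-Utsu rate k / (t + c)**p (p != 1)."""
    return k / (1.0 - p) * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p))

def prob_event(k, c, p, t1, t2, b, m_target, m_ref):
    """P(>= 1 event of M >= m_target), rescaling the M >= m_ref rate with
    the Gutenberg-Richter b-value and assuming Poissonian occurrence."""
    n = expected_count(k, c, p, t1, t2) * 10.0 ** (-b * (m_target - m_ref))
    return 1.0 - math.exp(-n)
```

    A slow decay such as p ~ 0.5 makes the integral grow with the window length far longer than the usual p ~ 1 case, which is why elevated probabilities can persist.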

  17. Assessing Lay Understanding of Common Presentations of Earthquake Hazard Information

    NASA Astrophysics Data System (ADS)

    Thompson, K. J.; Krantz, D. H.

    2010-12-01


  18. Focal mechanisms and inter-event times of low-frequency earthquakes reveal quasi-continuous deformation and triggered slow slip on the deep Alpine Fault

    NASA Astrophysics Data System (ADS)

    Baratin, Laura-May; Chamberlain, Calum J.; Townend, John; Savage, Martha K.

    2018-02-01

    Characterising the seismicity associated with slow deformation in the vicinity of the Alpine Fault may provide constraints on the stresses acting on a major transpressive margin prior to an anticipated great (≥M8) earthquake. Here, we use recently detected tremor and low-frequency earthquakes (LFEs) to examine how slow tectonic deformation is loading the Alpine Fault late in its typical ∼300-yr seismic cycle. We analyse a continuous seismic dataset recorded between 2009 and 2016 using a network of 10-13 short-period seismometers, the Southern Alps Microearthquake Borehole Array. Fourteen primary LFE templates are used in an iterative matched-filter and stacking routine, allowing the detection of similar signals corresponding to LFE families sharing common locations. This yields an 8-yr catalogue containing 10,000 LFEs that are combined for each of the 14 LFE families using phase-weighted stacking to produce signals with the highest possible signal-to-noise ratios. We show that LFEs occur almost continuously during the 8-yr study period and highlight two types of LFE distributions: (1) discrete behaviour with an inter-event time exceeding 2 min; (2) burst-like behaviour with an inter-event time below 2 min. We interpret the discrete events as small-scale frequent deformation on the deep extent of the Alpine Fault and LFE bursts (corresponding in most cases to known episodes of tremor or large regional earthquakes) as brief periods of increased slip activity indicative of slow slip. We compute improved non-linear earthquake locations using a 3-D velocity model. LFEs occur below the seismogenic zone at depths of 17-42 km, on or near the hypothesised deep extent of the Alpine Fault. The first estimates of LFE focal mechanisms associated with continental faulting, in conjunction with recurrence intervals, are consistent with quasi-continuous shear faulting on the deep extent of the Alpine Fault.
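
    The matched-filter step in such studies slides a template waveform along continuous data and keeps windows whose normalized cross-correlation exceeds a threshold; a self-contained sketch with synthetic numbers (all values are invented for illustration):

```python
import math

def normalized_xcorr(template, trace):
    """Normalized cross-correlation of a short template at every offset of a
    longer trace; values near 1 flag template-like signals."""
    n = len(template)
    tm = sum(template) / n
    t0 = [x - tm for x in template]
    tnorm = math.sqrt(sum(x * x for x in t0))
    cc = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        wm = sum(w) / n
        w0 = [x - wm for x in w]
        wnorm = math.sqrt(sum(x * x for x in w0))
        denom = tnorm * wnorm
        cc.append(sum(a * b for a, b in zip(t0, w0)) / denom if denom else 0.0)
    return cc
```

    Because the correlation is normalized, a scaled copy of the template buried in noise still scores near 1, which is what makes the method effective for small, repetitive LFEs.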

  19. Unusual Childhood Waking as a Possible Precursor of the 1995 Kobe Earthquake

    PubMed Central

    Ikeya, Motoji; Whitehead, Neil E.

    2013-01-01

    Simple Summary The paper investigates whether young children may waken before earthquakes through a cause other than foreshocks. It concludes that there is statistical evidence for this, and that the best-supported mechanism is anxiety produced by Ultra-Low-Frequency (ULF) electromagnetic waves. Abstract Nearly 1,100 young students living in Japan at distances of up to 500 km from the 1995 Kobe M7 earthquake were interviewed. A statistically significant abnormal rate of early wakening before the earthquake was found, decreasing exponentially with distance with a half-value distance approaching 100 km, but falling off much more slowly than expected from a point source such as an epicentre, instead originating from an extended area more than 100 km in diameter. Because an improbably high amount of variance is explained, this effect is unlikely to be simply psychological and must reflect another mechanism, perhaps Ultra-Low-Frequency (ULF) electromagnetic waves creating anxiety, but probably not 222Rn excess. Other work reviewed suggests these conclusions may be valid for animals in general, not just children, but would be very difficult to apply to practical earthquake prediction. PMID:26487316

  20. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  1. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M <= ~3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance less than ~1 km) tend to have very similar focal mechanisms, differing by no more than the mechanism uncertainty of ~25°. This observed similarity implies that in small volumes of crust, while faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  2. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
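
    Bayesian source estimation of this kind samples the posterior of fault parameters rather than reporting a single best fit; a deliberately minimal random-walk Metropolis sketch with one slip parameter, fixed synthetic Green's functions, and made-up data (none of it from the study):

```python
import math
import random

def log_likelihood(slip, obs, greens, sigma):
    """Gaussian log-likelihood of surface displacements for a single uniform
    slip value, using fixed unit-slip Green's functions."""
    return -0.5 * sum(((d - slip * g) / sigma) ** 2 for d, g in zip(obs, greens))

def metropolis(obs, greens, sigma, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of slip under a uniform prior [0, 10] m."""
    rng = random.Random(seed)
    slip = 1.0
    ll = log_likelihood(slip, obs, greens, sigma)
    samples = []
    for _ in range(n_iter):
        prop = slip + rng.gauss(0.0, step)
        if 0.0 <= prop <= 10.0:  # reject proposals outside the prior
            ll_prop = log_likelihood(prop, obs, greens, sigma)
            delta = ll_prop - ll
            if delta >= 0.0 or rng.random() < math.exp(delta):
                slip, ll = prop, ll_prop
        samples.append(slip)
    return samples[n_iter // 2:]  # discard burn-in
```

    The retained samples approximate the posterior, so uncertainties, and derived quantities such as Coulomb failure stress changes, can be propagated by evaluating them sample by sample.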

  3. Mechanical and Statistical Evidence of Human-Caused Earthquakes - A Global Data Analysis

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2012-12-01

    The causality of large-scale geoengineering activities and the occurrence of earthquakes with magnitudes of up to M=8 is discussed, and mechanical and statistical evidence is provided. The earthquakes were caused by artificial water reservoir impoundments, underground and open-pit mining, coastal management, hydrocarbon production and fluid injections/extractions. The presented global earthquake catalog has recently been published in the Journal of Seismology and is available to the public at www.cdklose.com. The data show evidence that statistically significant geomechanical relationships exist between a) the seismic moment magnitudes of observed earthquakes, b) anthropogenic mass shifts on the Earth's crust, and c) the lateral distances of the earthquake hypocenters from the locations of the mass shifts. Research findings depend on uncertainties, in particular in the source parameter estimates of seismic events that predate instrumental recording. First analyses, however, indicate that small- to medium-sized earthquakes (<M6) tend to be triggered. The rupture propagation of triggered events might be dominated by pre-existing tectonic stress conditions. Beyond event-specific evidence, large earthquakes such as China's 2008 M7.9 Wenchuan earthquake fall into a global pattern and cannot be considered outliers or simply acts of God. Observations also indicate that every second seismic event tends to occur after a decade, while pore-pressure diffusion seems to play a role only when fluids are injected deep underground. The chance of an earthquake nucleating near an area with a significant mass shift after two or 20 years is 25% or 75%, respectively. Moreover, the causative effects of seismic activities depend strongly on the tectonic stress regime of the Earth's crust in which the geoengineering takes place.

  4. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake.

    PubMed

    Kato, Aitaro; Fukuda, Jun'ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-25

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake, located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  5. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    NASA Astrophysics Data System (ADS)

    Kato, Aitaro; Fukuda, Jun'Ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-01

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake, located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  6. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    PubMed Central

    Kato, Aitaro; Fukuda, Jun’ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-01-01

    The earthquake nucleation process has been vigorously investigated through geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake, located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector applied to continuous seismic data. Accelerations in seismicity, in the amount of aseismic slip inferred from repeating earthquakes, and in the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between its fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase. PMID:27109362

  7. Frequency domain analysis of errors in cross-correlations of ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Ben-Zion, Yehuda; Zigone, Dimitri

    2016-12-01

    We analyse random errors (variances) in cross-correlations of ambient seismic noise in the frequency domain, in contrast to previous time-domain methods. Extending previous theoretical results on the ensemble-averaged cross-spectrum, we estimate the confidence interval of the stacked cross-spectrum of a finite amount of data at each frequency, using non-overlapping windows of fixed length. The extended theory also connects the amplitude and phase variances with the variance of each complex spectral value. Analysis of synthetic stationary ambient noise is used to estimate the confidence interval of the stacked cross-spectrum obtained with different lengths of noise data, corresponding to different numbers of evenly spaced windows of the same duration. This method allows estimating the signal-to-noise ratio (SNR) of a noise cross-correlation in the frequency domain, without specifying the filter bandwidth or the signal/noise windows needed for time-domain SNR estimates. Based on synthetic ambient noise data, we also compare the probability distributions, causal-part amplitude and SNR of the stacked cross-spectrum obtained with one-bit normalization or pre-whitening against those obtained without these pre-processing steps. Natural continuous noise records contain both ambient noise and small earthquakes that are inseparable from the noise with the existing pre-processing steps. Using probability distributions of random cross-spectrum values based on the theoretical results provides an effective way to exclude such small earthquakes, and additional data segments (outliers) contaminated by signals of different statistics (e.g. rain, cultural noise), from continuous noise waveforms. This technique is applied to constrain values and uncertainties of the amplitude and phase velocity of the stacked noise cross-spectrum at different frequencies, using data from southern California at both regional scale (˜35 km) and a dense linear array (˜20 m) across the plate-boundary faults. A block bootstrap resampling method
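
    The per-frequency stacking described above can be sketched as follows (a minimal Python illustration; the function name, the non-overlapping-window handling and the standard-error-based SNR are simplifications for illustration, not the authors' code):

```python
import numpy as np

def stacked_cross_spectrum(u, v, win_len, fs=1.0):
    """Stack single-window cross-spectra of two noise records and
    return the per-frequency mean plus an SNR built from the
    variance of the stack (hypothetical helper, for illustration)."""
    n_win = len(u) // win_len                 # non-overlapping, fixed length
    specs = []
    for k in range(n_win):
        sl = slice(k * win_len, (k + 1) * win_len)
        specs.append(np.fft.rfft(u[sl]) * np.conj(np.fft.rfft(v[sl])))
    specs = np.array(specs)
    mean = specs.mean(axis=0)                 # stacked cross-spectrum
    var = specs.var(axis=0) / n_win           # variance of the stacked estimate
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    snr = np.abs(mean) / np.sqrt(var + 1e-20)  # frequency-domain SNR
    return freqs, mean, snr
```

    More windows (i.e. longer records) shrink the variance of the stack, which is what tightens the confidence interval at each frequency.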

  8. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    USGS Publications Warehouse

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, so there is no direct measure of the intensity of ground shaking or of specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress-drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  9. Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence

    USGS Publications Warehouse

    Mori, J.; Kanamori, H.

    1996-01-01

    Close examination of the P waves from earthquakes ranging in size across several orders of magnitude shows that the shape of the initiation of the velocity waveforms is independent of the magnitude of the earthquake. A model in which earthquakes of all sizes have similar rupture initiation can explain the data. This suggests that it is difficult to estimate the eventual size of an earthquake from the initial portion of the waveform. Previously reported curvature seen in the beginning of some velocity waveforms can be largely explained as the effect of anelastic attenuation; thus there is little evidence for a departure from models of simple rupture initiation that grow dynamically from a small region. The results of this study indicate that any "precursory" radiation at seismic frequencies must emanate from a source region no larger than the equivalent of a M0.5 event (i.e. a characteristic length of ~10 m). The size of the nucleation region for magnitude 0 to 5 earthquakes thus is not resolvable with the standard seismic instrumentation deployed in California. Copyright 1996 by the American Geophysical Union.

  10. Transient triggering of near and distant earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, with faster rates more rapidly hastening instability. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i

  11. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--will now also be generated based on the estimated range of fatalities and economic losses.

  12. Stress Drop and Its Relationship to Radiated Energy, Ground Motion and Uncertainty

    NASA Astrophysics Data System (ADS)

    Baltay, A.

    2014-12-01

    Despite the seemingly diverse circumstances under which crustal earthquakes occur, scale-invariant stress drop and apparent stress (the ratio of radiated seismic energy to seismic moment) are observed. The magnitude independence of these parameters is central to our understanding of both earthquake physics and the genesis of strong ground motion. Estimates of stress drop and radiated energy, however, display large amounts of scatter, potentially masking any secondary trends in the data. We investigate sources of this uncertainty within the framework of constant stress drop and apparent stress. We first revisit estimates of energy and stress drop from a variety of earthquake observations and methods, for events ranging from magnitude ~2 to ~9. Using an empirical Green's function (eGf) deconvolution method, which removes the path and site effects, radiated energy and Brune stress drop are estimated both for regional events in the western US and eastern Honshu, Japan (Hi-net network), and for teleseismically recorded global great earthquakes [Baltay et al., 2010, 2011, 2014]. In addition to eGf methods, ground-motion-based metrics for stress drop are considered, using both KiK-net data from Japan [Baltay et al., 2013] and the NGA-West2 data, a very well curated ground-motion database. Both the eGf-based stress drop estimates and those from the NGA-West2 database show a marked decrease in scatter, allowing us to identify deterministic secondary trends in stress drop. We find both an increasing stress drop with depth, and a stress drop larger by about 30% on average for mainshock events as compared to on-fault aftershocks. While both of these effects are already included in some ground-motion prediction equations (GMPEs), many previous seismological studies have been unable to conclusively uncover these trends because of the considerable scatter. Elucidating these effects in the context of reduced and quantified epistemic uncertainty can help both seismologists and
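
    The path- and site-cancelling idea behind the eGf method can be shown with a bare-bones spectral ratio (an illustrative sketch only; the function name and water-level choice are not from the cited studies, which use more careful deconvolution):

```python
import numpy as np

def egf_spectral_ratio(main, egf, dt):
    """Divide the mainshock amplitude spectrum by that of a colocated
    small event: the shared path and site terms cancel, leaving the
    ratio of source spectra.  A water level guards small denominators."""
    M = np.abs(np.fft.rfft(main))
    G = np.abs(np.fft.rfft(egf))
    water = 0.01 * G.max()                    # illustrative water level
    freqs = np.fft.rfftfreq(len(main), d=dt)
    return freqs, M / np.maximum(G, water)
```

    With a true colocated pair, the low-frequency level of the ratio scales with the moment ratio of the two events, which is what anchors the stress-drop estimate.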

  13. "Storms of crustal stress" and AE earthquake precursors

    NASA Astrophysics Data System (ADS)

    Gregori, G. P.; Poscolieri, M.; Paparo, G.; de Simone, S.; Rafanelli, C.; Ventrice, G.

    2010-02-01

    Acoustic emission (AE) displays violent paroxysms preceding strong earthquakes, observed within some large area (several hundred kilometres wide) around the epicentre. We call them "storms of crustal stress" or, briefly "crustal storms". A few case histories are discussed, all dealing with the Italian peninsula, and with the different behaviour shown by the AE records in the Cephalonia island (Greece), which is characterized by a different tectonic setting. AE is an effective tool for diagnosing the state of some wide slab of the Earth's crust, and for monitoring its evolution, by means of AE of different frequencies. The same effect ought to be detected being time-delayed, when referring to progressively lower frequencies. This results to be an effective check for validating the physical interpretation. Unlike a seismic event, which involves a much limited focal volume and therefore affects a restricted area on the Earth's surface, a "crustal storm" typically involves some large slab of lithosphere and crust. In general, it cannot be easily reckoned to any specific seismic event. An earthquake responds to strictly local rheological features of the crust, which are eventually activated, and become crucial, on the occasion of a "crustal storm". A "crustal storm" lasts typically few years, eventually involving several destructive earthquakes that hit at different times, at different sites, within that given lithospheric slab. Concerning the case histories that are here discussed, the lithospheric slab is identified with the Italian peninsula. During 1996-1997 a "crustal storm" was on, maybe elapsing until 2002 (we lack information for the period 1998-2001). Then, a quiet period occurred from 2002 until 26 May 2008, when a new "crustal storm" started, and by the end of 2009 it is still on. During the 1996-1997 "storm" two strong earthquakes occurred (Potenza and Colfiorito) - and (maybe) in 2002 also the Molise earthquake can be reckoned to this "storm". During the

  14. The 17 July 2006 Tsunami earthquake in West Java, Indonesia

    USGS Publications Warehouse

    Mori, J.; Mooney, W.D.; Afnimar; Kurniawan, S.; Anaya, A.I.; Widiyantoro, S.

    2007-01-01

    A tsunami earthquake (Mw = 7.7) occurred south of Java on 17 July 2006. The event produced relatively low levels of high-frequency radiation, and local felt reports indicated only weak shaking in Java. There was no ground motion damage from the earthquake, but there was extensive damage and loss of life from the tsunami along 250 km of the southern coasts of West Java and Central Java. An inspection of the area a few days after the earthquake showed extensive damage to wooden and unreinforced masonry buildings that were located within several hundred meters of the coast. Since there was no tsunami warning system in place, efforts to escape the large waves depended on how people reacted to the earthquake shaking, which was only weakly felt in the coastal areas. This experience emphasizes the need for adequate tsunami warning systems for the Indian Ocean region.

  15. High-resolution earthquake relocation in the Fort Worth and Permian Basins using regional seismic stations

    NASA Astrophysics Data System (ADS)

    Ogwari, P.; DeShon, H. R.; Hornbach, M.

    2017-12-01

    Post-2008 increases in earthquake rates in the central United States have been associated with large-scale subsurface disposal of waste fluids from oil and gas operations. Various earthquake sequences in the Fort Worth and Permian basins began in the absence of seismic stations at local distances that could record and accurately locate hypocenters. Typically, the initial earthquakes have been located using regional seismic network stations (>100 km epicentral distance) and global 1-D velocity models, which usually results in large location uncertainty, especially in depth, fails to resolve magnitude <2.5 events, and does not constrain the geometry of the activated fault(s). Here, we present a method to better resolve earthquake occurrence and location using matched filters and regional relative location once local data become available. We use the local-distance data for high-resolution earthquake location, identifying earthquake templates and accurate source-station raypath velocities for the Pg and Lg phases at regional stations. A matched-filter analysis is then applied to seismograms recorded at US network stations and at adopted TA stations that recorded the earthquakes before and during the local network deployment period. Positive detections are declared based on manual review of the associated P and S arrivals on local stations. We apply hierarchical clustering to distinguish earthquakes that are spatially clustered from those that are spatially separated. Finally, we conduct relative earthquake and earthquake-cluster location using regional-station differential times. Initial analysis applied to the 2008-2009 DFW airport sequence in north Texas results in time-continuous imaging of epicenters extending into 2014. Seventeen earthquakes in the USGS earthquake catalog scattered across a 10 km² area near DFW airport are relocated onto a single fault using these approaches. These techniques will also be applied toward imaging recent earthquakes in the
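
    The matched-filter step can be sketched as a normalized sliding cross-correlation with a MAD-based detection threshold (an illustrative stand-in; the function name, threshold and normalization details are not from the study):

```python
import numpy as np

def matched_filter_detect(trace, template, threshold=8.0):
    """Slide a waveform template along a continuous trace and flag
    samples where the normalized cross-correlation exceeds
    `threshold` times the median absolute deviation (MAD) of the
    correlation series."""
    nt = len(template)
    t = (template - template.mean()) / (template.std() * nt)
    cc = np.correlate(trace - trace.mean(), t, mode="valid")
    # normalize by the sliding standard deviation of the trace
    csum = np.cumsum(np.insert(trace, 0, 0.0))
    csum2 = np.cumsum(np.insert(trace**2, 0, 0.0))
    win_mean = (csum[nt:] - csum[:-nt]) / nt
    win_var = (csum2[nt:] - csum2[:-nt]) / nt - win_mean**2
    cc /= np.sqrt(np.maximum(win_var, 1e-20))
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.where(cc > threshold * mad)[0], cc
```

    In practice each detection would then be verified against the P and S arrivals on the local stations, as the abstract describes.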

  16. Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference

    USGS Publications Warehouse

    Wesson, R.L.; Bakun, W.H.; Perkins, D.M.

    2003-01-01

    Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
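
    The core of the approach is a direct application of Bayes' theorem; in the sketch below the fault names, priors and likelihoods are invented for illustration and are not values from the study:

```python
import numpy as np

# Hypothetical numbers: prior probability that a historical event
# belongs to each of three candidate sources, and the likelihood of
# its intensity-derived location under each source's location density.
priors = np.array([0.50, 0.30, 0.20])          # P(source)
likelihoods = np.array([0.002, 0.010, 0.001])  # p(location | source)

posterior = priors * likelihoods
posterior /= posterior.sum()                   # Bayes' theorem
for name, p in zip(["Hayward", "Calaveras", "background"], posterior):
    print(f"{name}: {p:.2f}")
```

    Summing such posteriors over many events is what yields the probabilistic frequency-magnitude relationship for a fault segment described above.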

  17. Calculation of Confidence Intervals for the Maximum Magnitude of Earthquakes in Different Seismotectonic Zones of Iran

    NASA Astrophysics Data System (ADS)

    Salamat, Mona; Zare, Mehdi; Holschneider, Matthias; Zöller, Gert

    2017-03-01

    The problem of estimating the maximum possible earthquake magnitude m_max has attracted growing attention in recent years. Because the data are sparse, the role of uncertainties becomes crucial. In this work, we quantify the uncertainties related to the maximum magnitude in terms of confidence intervals. Using an earthquake catalog of Iran, m_max is estimated for different predefined levels of confidence in six seismotectonic zones. Assuming the doubly truncated Gutenberg-Richter distribution as a statistical model for earthquake magnitudes, confidence intervals for the maximum possible magnitude of earthquakes are calculated in each zone. While the lower limit of the confidence interval is the magnitude of the maximum observed event, the upper limit is calculated from the catalog and the statistical model. For this purpose, we use both the original catalog, to which no declustering method was applied, and a declustered version of the catalog. As shown by Holschneider et al. (Bull Seismol Soc Am 101(4):1649-1659, 2011), the confidence interval for m_max is frequently unbounded, especially if high levels of confidence are required; in that case, no information is gained from the data. We therefore elaborate for which settings finite confidence intervals are obtained. In this work, Iran is divided into six seismotectonic zones, namely Alborz, Azerbaijan, Zagros, Makran, Kopet Dagh, and Central Iran. Although the calculated confidence intervals in the Central Iran and Zagros zones are relatively acceptable for meaningful levels of confidence, the results in Kopet Dagh, Alborz, Azerbaijan and Makran are less promising. The results indicate that estimating m_max from an earthquake catalog alone, at reasonable levels of confidence, is almost impossible.
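
    Under a doubly truncated Gutenberg-Richter law with known beta, the upper confidence limit can be sketched by solving P(max of n magnitudes <= m_obs | m_max) = 1 - conf for m_max; the function below is a simplified reading of that construction (names and interface are illustrative), returning infinity when the interval is unbounded:

```python
import math

def mmax_upper_limit(n, m_obs, m_min, beta, conf=0.95):
    """Upper end of a one-sided confidence interval for m_max under a
    doubly truncated Gutenberg-Richter law with known beta.  The
    truncated CDF is F(m) = g(m)/g(m_max) with
    g(m) = 1 - exp(-beta*(m - m_min)), so [F(m_obs)]**n = alpha
    gives g(m_max) = g(m_obs)/alpha**(1/n)."""
    alpha = 1.0 - conf
    g_obs = 1.0 - math.exp(-beta * (m_obs - m_min))
    rhs = g_obs / alpha ** (1.0 / n)
    if rhs >= 1.0:          # no finite m_max satisfies the equation
        return math.inf
    return m_min - math.log(1.0 - rhs) / beta
```

    The unbounded branch is hit easily for small n or high confidence, which is exactly the behaviour the abstract reports for several of the Iranian zones.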

  18. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

    Changes in the stress field required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault stress-loads a nearby fault. Dynamic triggering occurs when an earthquake is induced by passing seismic waves from a large mainshock located two or more fault lengths away. We investigate the details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of short-term to long-term averages (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine whether the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results from this automated scheme applied to data from four large, but characteristically different, earthquakes: Chile (Mw 8.8, 2010), Tohoku-Oki (Mw 9.0, 2011), Baja California (Mw 7.2, 2010) and Wells, Nevada (Mw 6.0, 2008). For each of our four mainshocks, the number of detections within the 10-hour time windows spans a large range (1 to over 200), and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will help provide a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to it.
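
    The detector itself is part of the proprietary Antelope package, but the underlying short-term-average / long-term-average energy ratio is generic and can be sketched as follows (window lengths and names are illustrative defaults, not the study's settings):

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Classic short-term-average / long-term-average energy ratio;
    samples where the ratio is high are candidate local events."""
    nsta = int(sta_win * fs)
    nlta = int(lta_win * fs)
    e = np.asarray(trace, dtype=float) ** 2        # signal energy
    csum = np.cumsum(np.insert(e, 0, 0.0))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta      # short-term mean energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta      # long-term mean energy
    # align both windows to end at the same sample
    return sta[nlta - nsta:] / np.maximum(lta, 1e-20)
```

    A threshold on this ratio flags candidate events; the flagged windows are then reviewed manually, as the abstract describes.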

  19. Volcanotectonic earthquakes induced by propagating dikes

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2016-04-01

    Volcanotectonic earthquakes are of high frequency and mostly generated by slip on faults. During chamber expansion or contraction, earthquakes are distributed through the chamber roof. Following magma-chamber rupture and dike injection, however, earthquakes tend to concentrate around the dike and follow its propagation path, resulting in an earthquake swarm characterised by a number of earthquakes of similar magnitudes. I distinguish between two basic processes by which propagating dikes induce earthquakes. One is due to stress concentration in the process zone at the tip of the dike; the other relates to stresses induced in the walls and surrounding rocks on either side of the dike. As to the first process, some earthquakes generated at the dike tip are related to pure extension fracturing as the tip advances and the dike path forms. Formation of pure extension fractures normally induces non-double-couple earthquakes. There is also shear fracturing in the process zone, however, particularly normal faulting, which produces double-couple earthquakes. The second process relates primarily to slip on existing fractures in the host rock induced by the driving pressure of the propagating dike. Such pressures easily reach 5-20 MPa and induce compressive and shear stresses in the adjacent host rock, which already contains numerous fractures (mainly joints) of different attitudes. In piles of lava flows or sedimentary beds the original joints are primarily vertical and horizontal, and the contacts between the layers/beds are originally horizontal. As the layers/beds become buried, the joints and contacts become gradually tilted, so that they end up oblique to the horizontal compressive stress induced by the driving pressure of the (vertical) dike. Also, most of the hexagonal (or pentagonal) columnar joints in the lava flows are, from the beginning, oblique to an intrusive sheet of any attitude. Consequently, the joints and contacts function as potential shear

  20. A versatile microprocessor-controlled hybrid receiver. [with firmware control for operation over large frequency uncertainty]

    NASA Technical Reports Server (NTRS)

    Grant, T. L.

    1978-01-01

    A hybrid receiver has been designed for the Galileo Project. The receiver, located on the Galileo Orbiter, will autonomously acquire and track signals from the first atmospheric probe of Jupiter as well as demodulate, bit-synchronize, and buffer the telemetry data. The receiver has a conventional RF and LF front end but performs multiple functions digitally under firmware control. It will be a self-acquiring receiver that operates under a large frequency uncertainty; it can accommodate different modulation types, bit rates, and other parameter changes via reprogramming. A breadboard receiver and test set demonstrate a preliminary version of the sequential detection process and verify the hypothesis that a fading channel does not reduce the probability of detection.

  1. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compressive sensing methods

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On 24 May 2013 an Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia; mega earthquakes at such great depths are rare. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improvement of the traditional backprojection method that more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s, with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear-wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source-inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from both methods are generally similar. As a next step, we will use data from dense arrays in southwest China and from global stations for further analysis, in order to study the rupture process of this deep mega earthquake more comprehensively. Reference [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for
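
    The delay-and-stack operation at the heart of backprojection (which IBP refines iteratively) can be sketched as follows; the function, integer-sample delays and peak-power measure are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def beam_power(traces, delay_table):
    """For each candidate grid node, shift every station trace by its
    predicted delay, stack, and return the peak beam power (the core
    of a time-domain backprojection)."""
    n_sta, n_samp = traces.shape
    powers = np.empty(len(delay_table))
    for k, delays in enumerate(delay_table):   # one row per grid node
        beam = np.zeros(n_samp)
        for tr, d in zip(traces, delays):      # align and stack
            beam += np.roll(tr, -int(d))
        powers[k] = np.max(beam ** 2)
    return powers
```

    The grid node whose predicted delays best align the array records wins the stack; tracking that winner through time yields the subevent locations and rupture speeds.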

  2. A Moment Rate Function Deduced from Peak Ground Motions from M 3.3-5.3 Earthquakes: Implications for Scaling, Corner Frequency and Stress Drop

    NASA Astrophysics Data System (ADS)

    Archuleta, R. J.; Ji, C.

    2016-12-01

    Based on 3827 records of peak horizontal ground motion in the NGA-West2 database, we computed linear regressions for Log PGA, Log PGV and the ratio PGA/2πPGV (which we call the dominant frequency, DomF) versus moment magnitude for M 3.3-5.3 earthquakes. The slopes are nearly one for Log PGA and Log PGV and negative one for the ratio. For magnitudes 5.3 and smaller the source can be treated as a point source. Using these regressions and an expression relating the half peak-to-peak amplitude of Wood-Anderson records (PWA) to moment magnitude, we have deduced an `apparent' moment rate function (aMRF) that increases quadratically in time until it reaches its maximum at time tp, after which it decays linearly until a final duration td. For t* = 0.054 s, and with tp and td scaling with seismic moment as tp(M0) = 0.03[M0/M0(M 3.3)]^(1/7.0) and td(M0) = 0.31[M0/M0(M 3.3)]^(1/3.3), all the magnitude dependence within M 3.3-5.3 can be explained. The Fourier amplitude spectrum (FAS) of the aMRF has two corner frequencies connected by an intermediate slope of f^-1. The smaller corner frequency fC ∝ 1/td, i.e., a corner frequency related to the full duration; the stress drop associated with the average over the fault scales weakly with seismic moment, Δσ ∝ M0^0.09. The larger corner frequency is proportional to 1/tp. We also find that DomF ≈ 1/[2.2(tp(M0) + t*)]; thus there is a strong tradeoff between tp and t*. The higher corner frequency and the intermediate slope in the spectrum could be completely obscured by t* for t* ≈ 0.04-0.06 s, producing a Brune-type spectrum. If so, it will be practically impossible to retrieve the true spectrum. Because the fC derived from the spectrum is controlled by td, while PGA and PGV are controlled mostly by the time scale tp, this aMRF could explain the difference in uncertainty between the mean stress drop inferred from peak ground-motion data and that inferred from displacement amplitude spectra. This aMRF is consistent with a rupture that initiates from
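
    The aMRF described above can be constructed directly; the sketch below uses the tp and td scaling constants quoted in the abstract and the standard Mw-M0 relation, with everything else (names, sampling, normalization) an illustrative assumption:

```python
import numpy as np

def apparent_mrf(m0, dt=1e-3):
    """Piecewise 'apparent' moment rate function: quadratic growth
    until t_p, then linear decay to zero at t_d, with t_p and t_d
    scaled from an M 3.3 reference event and the area normalized to
    the seismic moment."""
    m0_ref = 10 ** (1.5 * 3.3 + 9.1)              # M0 of an M 3.3 event, N m
    tp = 0.03 * (m0 / m0_ref) ** (1.0 / 7.0)      # time of peak moment rate
    td = 0.31 * (m0 / m0_ref) ** (1.0 / 3.3)      # total duration
    t = np.arange(0.0, td, dt)
    mrf = np.where(t < tp, (t / tp) ** 2,
                   np.clip((td - t) / (td - tp), 0.0, None))
    mrf *= m0 / (mrf.sum() * dt)                  # area equals seismic moment
    return t, mrf
```

    Because tp grows much more slowly with moment than td, the peak sits early in the rupture, which is why PGA and PGV track tp while the spectral corner tracks td.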

  3. Distribution of very low frequency earthquakes in the Nankai accretionary prism influenced by a subducting ridge

    NASA Astrophysics Data System (ADS)

    Toh, Akiko; Obana, Koichiro; Araki, Eiichiro

    2018-01-01

    We investigated the distribution of very low frequency earthquakes (VLFEs) that occurred in the shallow accretionary prism of the eastern Nankai trough during one week of VLFE activity in October 2015. They were recorded very close to the sources by an array of broadband ocean bottom seismometers (BBOBSs) deployed in the Dense Oceanfloor Network system for Earthquakes and Tsunamis 1 (DONET1). The locations of VLFEs estimated using a conventional envelope correlation method showed large scatter, likely due to effects of 3-D structure near the seafloor and/or to sources that the method could not handle properly. Therefore, we assessed their relative locations by introducing a hierarchical clustering analysis based on the patterns of relative envelope peak times within the array, measured for each VLFE. The results suggest that, on the northeastern side of the network, all the detected VLFEs occur 30-40 km landward of the trench axis, near the intersection of a splay fault with the seafloor. Some likely occurred along the splay fault. On the other hand, many VLFEs occur closer to the trench axis on the southwestern side, likely along the plate boundary, and VLFE activity on the shallow splay fault appears less intense than on the northeastern side. Although this could be a snapshot of activity that becomes more uniform over the longer term, the obtained distribution can be reasonably explained by the change in shear stresses and pore pressures caused by a subducting ridge below the northeastern side of DONET1. The change in stress state along the strike of the plate boundary, inferred from the obtained VLFE distribution, should be an important indicator of the strain-release pattern and of localised variations in the tsunamigenic potential of this region.
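
    The pattern-matching idea (group events whose relative envelope peak times across the array agree) can be sketched with a crude greedy stand-in for the hierarchical clustering actually used; the function name and tolerance are illustrative:

```python
import numpy as np

def cluster_by_pattern(peak_times, tol=0.5):
    """Group events whose inter-station relative peak-time patterns
    match within `tol` seconds (RMS).

    peak_times : (n_events, n_stations) envelope peak times
    """
    # remove each event's mean time so only the relative pattern remains
    rel = peak_times - peak_times.mean(axis=1, keepdims=True)
    labels = -np.ones(len(rel), dtype=int)
    next_label = 0
    for i in range(len(rel)):
        if labels[i] >= 0:
            continue
        labels[i] = next_label
        for j in range(i + 1, len(rel)):
            rms = np.sqrt(np.mean((rel[i] - rel[j]) ** 2))
            if labels[j] < 0 and rms < tol:
                labels[j] = next_label
        next_label += 1
    return labels
```

    Demeaning removes the unknown origin time, so two events at nearly the same location fall in the same group even if they occurred hours apart.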

  4. Modelling Psychological Responses to the Great East Japan Earthquake and Nuclear Incident

    PubMed Central

    Goodwin, Robin; Takahashi, Masahito; Sun, Shaojing; Gaines, Stanley O.

    2012-01-01

The Great East Japan (Tōhoku/Kanto) earthquake of March 2011 was followed by a major tsunami and nuclear incident. Several previous studies have suggested a number of psychological responses to such disasters. However, few have modelled individual differences in the risk perceptions of major events, or the implications of these perceptions for relevant behaviours. We conducted a survey specifically examining responses to the Great Japan earthquake and nuclear incident, with data collected 11–13 weeks following these events. A total of 844 young respondents completed a questionnaire in three regions of Japan: Miyagi (close to the earthquake and leaking nuclear plants), Tokyo/Chiba (approximately 220 km from the nuclear plants), and Western Japan (Yamaguchi and Nagasaki, some 1000 km from the plants). Results indicated significant regional differences in risk perception, with greater concern over earthquake risks in Tokyo than in Miyagi or Western Japan. Structural equation analyses showed that shared normative concerns about earthquake and nuclear risks, conservation values, lack of trust in governmental advice about the nuclear hazard, and poor personal control over the nuclear incident were positively correlated with perceived earthquake and nuclear risks. These risk perceptions further predicted specific outcomes (e.g. modifying homes, avoiding going outside, contemplating leaving Japan). The strength and significance of these pathways varied by region. Mental health and practical implications of these findings are discussed in the light of the continuing uncertainties in Japan following the March 2011 events. PMID:22666380

  5. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  6. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  7. Jumping over the hurdles to effectively communicate the Operational Earthquake Forecast

    NASA Astrophysics Data System (ADS)

    McBride, S.; Wein, A. M.; Becker, J.; Potter, S.; Tilley, E. N.; Gerstenberger, M.; Orchiston, C.; Johnston, D. M.

    2016-12-01

Probabilities, uncertainties, statistics, science, and threats are notoriously difficult topics to communicate to members of the public. The Operational Earthquake Forecast (OEF) is designed to provide an understanding of the potential numbers and sizes of earthquakes, and its communication must address all of these challenges. Furthermore, there are other barriers to effective communication of the OEF, including the erosion of trust in scientists and experts, oversaturation of messages, fear and threat messages magnified by media sensationalism, fractured media environments, and online echo chambers. Given the complexities and challenges of the OEF, how can we overcome barriers to effective communication? Crisis and risk communication research can inform the development of communication strategies to increase public understanding and use of the OEF, when applied to the opportunities and challenges of practice. We explore ongoing research regarding how the OEF can be more effectively communicated, including the channels, tools, and message composition needed to engage a variety of publics. We also draw on past experience and a study of OEF communication during the Canterbury Earthquake Sequence (CES). We demonstrate how research and experience have guided OEF communications during subsequent events in New Zealand, including the M5.7 Valentine's Day earthquake in 2016 (CES), the M6.0 Wilberforce earthquake in 2015, and the Cook Strait/Lake Grassmere earthquakes in 2013. We identify the successes and lessons learned of practical OEF communication. Finally, we present future projects and directions in OEF communication, informed by both practice and research.

  8. Moment magnitude, local magnitude and corner frequency of small earthquakes nucleating along a low angle normal fault in the Upper Tiber valley (Italy)

    NASA Astrophysics Data System (ADS)

    Munafo, I.; Malagnini, L.; Chiaraluce, L.; Valoroso, L.

    2015-12-01

The relation between moment magnitude (MW) and local magnitude (ML) is still a debated issue (Bath, 1966, 1981; Ristau et al., 2003, 2005). Theoretical considerations and empirical observations show that, in the magnitude range between 3 and 5, MW and ML scale 1:1, whilst for smaller magnitudes this 1:1 scaling breaks down (Bethmann et al. 2011). To address this issue we analyzed the source parameters of about 1500 well-located small earthquakes (30,000 waveforms) that occurred in the Upper Tiber Valley (Northern Apennines) in the range -1.5≤ML≤3.8. Among these earthquakes are 300 events that repeatedly ruptured the same fault patch, generally twice within a short time interval (less than 24 hours; Chiaraluce et al., 2007). We use high-resolution short-period and broadband recordings acquired between 2010 and 2014 by 50 permanent seismic stations deployed to monitor the activity of a regional low angle normal fault (named the Alto Tiberina fault, ATF) in the framework of the Alto Tiberina Near Fault Observatory project (TABOO; Chiaraluce et al., 2014). For this study the direct determination of MW for small earthquakes is essential, but unfortunately the computation of MW for small earthquakes (MW < 3) is not a routine procedure in seismology. We apply the contributions of source, site, and crustal attenuation computed for this area in order to obtain precise spectral corrections to be used in the calculation of small-earthquake spectral plateaus. The aim of this analysis is to obtain moment magnitudes of small events through a procedure that uses our previously calibrated crustal attenuation parameters (geometrical spreading g(r), quality factor Q(f), and the residual parameter k) to correct for path effects. We determine the MW-ML relationships in two selected fault zones (on-fault and fault-hanging-wall) of the ATF by an orthogonal regression analysis, providing a semi-automatic and robust procedure for moment magnitude determination within a
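The orthogonal regression used here (unlike ordinary least squares, which assumes error-free ML) minimises perpendicular distances and can be computed from the principal axis of the ML-MW scatter. The following is a minimal sketch on synthetic magnitudes; the scaling coefficients (0.75, 0.6) are hypothetical illustration values, not the paper's results.

```python
import numpy as np

def orthogonal_regression(ml, mw):
    """Fit mw = a*ml + b by total least squares (perpendicular
    distances), via the principal axis of the centred data cloud."""
    x, y = np.asarray(ml), np.asarray(mw)
    xm, ym = x.mean(), y.mean()
    cov = np.cov(x - xm, y - ym)
    # Eigenvector of the largest eigenvalue gives the principal axis
    evals, evecs = np.linalg.eigh(cov)
    vx, vy = evecs[:, np.argmax(evals)]
    a = vy / vx
    b = ym - a * xm
    return a, b

# Synthetic catalog: below ~MW 3 the ML-MW scaling flattens, so an
# assumed slope of 0.75 stands in for the small-magnitude regime
rng = np.random.default_rng(0)
ml = rng.uniform(-1.5, 3.8, 500)
mw = 0.75 * ml + 0.6 + rng.normal(0, 0.1, 500)   # hypothetical scaling
a, b = orthogonal_regression(ml, mw)
print(f"MW = {a:.2f} * ML + {b:.2f}")
```

Because both magnitudes carry measurement error, the principal-axis slope is preferred over an ordinary regression of MW on ML, which would bias the slope downward.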

  9. Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.

    2017-09-01

    We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and model near-source ground motion correctly; (4) wave propagations and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures has unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a "physical-based" approach is ground-motion syntheses derived from physics and an understanding of the earthquake process. This is an overview paper and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. 
Predicting future earthquake scenarios is necessary; models that have little or no physical basis, but that have been tested and adjusted to fit available observations, can only "predict" what happened in the past, which should be considered description rather than prediction.

  10. Relationship between large slip area and static stress drop of aftershocks of inland earthquake :Example of the 2007 Noto Hanto earthquake

    NASA Astrophysics Data System (ADS)

    Urano, S.; Hiramatsu, Y.; Yamada, T.

    2013-12-01

The 2007 Noto Hanto earthquake (MJMA 6.9; hereafter referred to as the main shock) occurred at 0:41 (UTC) on March 25, 2007, at a depth of 11 km beneath the west coast of the Noto Peninsula, central Japan. The dominant slip of the main shock was on a reverse fault with a right-lateral component, and the large slip area extended from the hypocenter to the shallow part of the fault plane (Horikawa, 2008). The aftershocks are distributed not only in the small slip area but also in the large slip area (Hiramatsu et al., 2011). In this study, we estimate the static stress drops of aftershocks on the fault plane of the main shock, and discuss their relationship to the large slip area of the main shock by investigating the spatial pattern of the static stress drop values. We use waveform data obtained by the group for the joint aftershock observations of the 2007 Noto Hanto Earthquake (Sakai et al., 2007); the sampling frequency of the waveform data is 100 Hz or 200 Hz. Focusing on the similar aftershocks reported by Hiramatsu et al. (2011), we analyze static stress drops using the empirical Green's function (EGF) method (Hough, 1997) as follows. The smallest earthquake (MJMA≥2.0) of each group of similar earthquakes is taken as the EGF earthquake, and the largest earthquake (MJMA≥2.5) as the target earthquake. We then deconvolve the waveform of the target earthquake with that of the EGF earthquake at each station and obtain the spectral ratio of the sources, which cancels the propagation effects (path and site effects). Following the procedure of Yamada et al. (2010), we finally estimate static stress drops for P- and S-waves from the corner frequencies of the spectral ratio, using the model of Madariaga (1976). The estimated average static stress drop is 8.2±1.3 MPa (8.6±2.2 MPa for P-waves and 7.8±1.3 MPa for S-waves). These values are approximately consistent with the static stress drops of aftershocks of other
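The final step of the procedure, from corner frequency to static stress drop via Madariaga's (1976) circular-crack model, reduces to two short formulas. The sketch below assumes a shear-wave speed of 3.5 km/s and illustrative values of seismic moment and corner frequency; it is not the paper's calculation.

```python
def source_radius(fc, beta=3500.0, k=0.21):
    """Madariaga (1976) source radius: r = k * beta / fc.
    k ~ 0.21 for S waves and ~ 0.32 for P waves (rupture speed 0.9*beta)."""
    return k * beta / fc

def static_stress_drop(m0, fc, beta=3500.0, k=0.21):
    """Circular-crack (Eshelby) stress drop: 7 * M0 / (16 * r**3), in Pa
    when M0 is in N*m and r in m."""
    r = source_radius(fc, beta, k)
    return 7.0 * m0 / (16.0 * r ** 3)

# Hypothetical small aftershock: M0 ~ 8e12 N*m with an S-wave corner
# frequency of 6 Hz (values chosen for illustration only)
m0, fc = 8.0e12, 6.0
ds = static_stress_drop(m0, fc)
print(f"r = {source_radius(fc):.0f} m, stress drop = {ds / 1e6:.1f} MPa")
```

Because the radius enters cubed, modest uncertainty in fc propagates into large uncertainty in the stress drop, which is why the spectral-ratio (EGF) corner frequencies matter.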

  11. Developing ShakeCast statistical fragility analysis framework for rapid post-earthquake assessment

    USGS Publications Warehouse

    Lin, K.-W.; Wald, D.J.

    2012-01-01

When an earthquake occurs, the U.S. Geological Survey (USGS) ShakeMap estimates the extent of potentially damaging shaking and provides overall information regarding the affected areas. The USGS ShakeCast system is a freely available, post-earthquake situational awareness application that automatically retrieves earthquake shaking data from ShakeMap, compares intensity measures against users' facilities, sends notifications of potential damage to responsible parties, and generates facility damage assessment maps and other web-based products for emergency managers and responders. We describe notable improvements to the ShakeMap and ShakeCast applications. We present a design for a comprehensive fragility implementation, integrating spatially varying ground-motion uncertainties into fragility curves for ShakeCast operations. For each facility, an overall inspection priority (or damage assessment) is assigned on the basis of combined component-based fragility curves using pre-defined logic. While regular ShakeCast users receive overall inspection priority designations for each facility, engineers can access the full fragility analyses for further evaluation.
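Component fragilities of the kind described are commonly modeled as lognormal curves in an intensity measure; one simple way to fold in spatially varying ground-motion uncertainty is to widen the total dispersion. A minimal sketch, with all parameter values hypothetical (this is a common simplification, not necessarily the ShakeCast implementation):

```python
from math import log, sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def damage_probability(im, theta, beta_capacity, beta_gm=0.0):
    """Lognormal fragility P(damage >= state | IM = im), with the
    ground-motion estimation uncertainty beta_gm folded into the
    total dispersion."""
    beta_total = sqrt(beta_capacity ** 2 + beta_gm ** 2)
    return phi(log(im / theta) / beta_total)

# Hypothetical bridge component: median PGA capacity 0.4 g, dispersion
# 0.5; ShakeMap reports 0.5 g with an additional 0.3 dispersion
p_no_gm = damage_probability(0.4, 0.4, 0.5)
p_with_gm = damage_probability(0.5, 0.4, 0.5, beta_gm=0.3)
print(p_no_gm, p_with_gm)
```

An inspection priority could then be assigned by thresholding these probabilities across the pre-defined component logic.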

  12. Possible Short-Term Precursors of Strong Crustal Earthquakes in Japan based on Data from the Ground Stations of Vertical Ionospheric Sounding

    NASA Astrophysics Data System (ADS)

    Korsunova, L. P.; Khegai, V. V.

    2018-01-01

We have studied changes in the ionosphere prior to strong crustal earthquakes with magnitudes M ≥ 6.5, based on data from the ground-based vertical ionospheric sounding stations Kokobunji, Akita, and Wakkanai for the period 1968-2004. The data are analyzed based on hourly measurements of the virtual height and frequency parameters of the sporadic E layer and the critical frequency of the regular F2 layer over the course of three days prior to the earthquakes. In the studied time intervals before all of the earthquakes, anomalous changes were discovered both in the frequency parameters of the Es and F2 ionospheric layers and in the virtual height of the sporadic E layer; the changes were observed on the same day at stations spaced several hundred kilometers apart. A high degree of correlation is found between the lead time of these ionospheric anomalies and the magnitude of the subsequent earthquakes. It is concluded that such ionospheric disturbances can be short-term ionospheric precursors of earthquakes.

  13. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
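The conversion from a forecast rate to the time-dependent probabilities described above can be sketched with a Poisson assumption and a modified Omori aftershock decay. All parameter values below are hypothetical, chosen only to illustrate the arithmetic.

```python
from math import exp

def prob_of_event(rate_per_day, horizon_days):
    """Poisson conversion of a forecast rate into an occurrence
    probability over a horizon: P = 1 - exp(-rate * t)."""
    return 1.0 - exp(-rate_per_day * horizon_days)

def omori_rate(t_days, k=10.0, c=0.1, p=1.1):
    """Modified Omori law for aftershock rate (parameters hypothetical)."""
    return k / (c + t_days) ** p

# One week after a mainshock the rate of felt aftershocks is still
# elevated; suppose 1% of them exceed the magnitude of concern
rate = omori_rate(7.0) * 0.01
print(f"30-day probability: {prob_of_event(rate, 30.0):.1%}")
```

Because the Omori rate decays, the same 30-day window evaluated a month later yields a much smaller probability, which is the time dependence OEF must communicate.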

  14. Hydroacoustic monitoring of seafloor earthquake and cryogenic sounds in the Bransfield Strait, Antarctica

    NASA Astrophysics Data System (ADS)

    Park, M.; Lee, W.; Dziak, R. P.; Matsumoto, H.; Bohnenstiehl, D. R.; Haxel, J. H.

    2008-12-01

To record signals from submarine tectonic activity and ice-generated sound around the Antarctic Peninsula, we operated an Autonomous Underwater Hydrophone (AUH) array from 2005 to 2007. The objectives of this experiment were to improve detection capability in the study area, which is poorly covered by global seismic networks, and to reveal characteristics of cryogenic sound that are hard to detect using low-latitude hydrophone arrays. The NEIC has reported ~10-20 earthquakes per year in this region, while the efficiency of sound propagation in the ocean allows detection of more than two orders of magnitude more earthquakes. A total of 5,160 earthquakes, including 12 earthquake swarms, were located during the deployment period. Six earthquake swarms (3,008 events) occurred in the western part of the Bransfield Strait (WBS); they show epicenter migration of 1-2 km/hr, exhibit a deficiency in high-frequency energy, and occurred near submarine volcanic centers along the back-arc rift axis. Cross-correlation analysis with ocean and solid-earth tides indicates the WBS seismicity is modulated by tidal stress, with volcanic earthquake activity reflecting variations in tidal forcing more strongly than tectonic earthquakes. In contrast, earthquake swarms from the eastern part of the Bransfield Strait (EBS) show features typical of tectonic earthquakes, such as widely distributed epicenters with no clear spatio-temporal pattern and full-spectrum (broadband) signals. These results are consistent with previous crustal models indicating that the WBS is undergoing volcanically dominated rifting, whereas rifting in the EBS is tectonically driven. A total of 5,929 ice-generated signals were also identified in the data, constituting the first detailed observation of various cryogenic phenomena in the region. These cryogenic signals exhibit unusual, tremor-like signals with a high-frequency fundamental (~40 Hz) and 5-6 overtones caused by iceberg resonance, as well as impulsive, short-duration "icequakes" caused by ice

  15. Stress transferred by the 1995 Mw = 6.9 Kobe, Japan, shock: Effect on aftershocks and future earthquake probabilities

    USGS Publications Warehouse

    Toda, S.; Stein, R.S.; Reasenberg, P.A.; Dieterich, J.H.; Yoshida, A.

    1998-01-01

The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that, relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2-1.0 bars (0.02-0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated through the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties in the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes a fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to
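The effect of a permanent stress step on a renewal-model probability can be sketched simply: the step advances the earthquake "clock" by the stress change divided by the tectonic stressing rate, and the conditional probability is then evaluated by Monte Carlo over recurrence times. The sketch below uses a lognormal recurrence model and hypothetical fault parameters; the paper's treatment additionally samples the poorly known fault parameters and includes the transient rate-and-state amplification.

```python
import numpy as np

rng = np.random.default_rng(42)

def conditional_prob(t_elapsed, horizon, mean_rt, cov, n=100_000):
    """P(event in [t, t + horizon] | no event since the last one),
    with lognormally distributed recurrence times of the given mean
    and coefficient of variation."""
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean_rt) - 0.5 * sigma ** 2
    rt = rng.lognormal(mu, sigma, n)
    survived = rt > t_elapsed
    hit = survived & (rt <= t_elapsed + horizon)
    return hit.sum() / max(survived.sum(), 1)

# Hypothetical fault: ~1000-yr mean repeat time, 400 yr elapsed.
# A 0.3 MPa stress increase with a 0.005 MPa/yr stressing rate
# advances the clock by 0.3 / 0.005 = 60 yr.
p0 = conditional_prob(400.0, 30.0, 1000.0, 0.5)
p1 = conditional_prob(400.0 + 0.3 / 0.005, 30.0, 1000.0, 0.5)
print(f"30-yr probability: {p0:.3f} -> {p1:.3f}")
```

Wrapping the call in an outer loop that also samples the mean repeat time, elapsed time, and variability would reproduce the Monte Carlo uncertainty treatment described in the abstract.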

  16. On the reported magnetic precursor of the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Thomas, J.N.; Love, J.J.; Johnston, M.J.S.

    2009-01-01

Among the most frequently cited reports in the science of earthquake prediction are those by Fraser-Smith et al. (1990) and Bernardi et al. (1991). They found anomalous enhancement of magnetic-field noise levels in the ultra-low-frequency range (0.01-10 Hz) prior to the 18 October 1989 Loma Prieta earthquake, from a ground-based sensor at Corralitos, CA, just 7 km from the earthquake epicenter. In this analysis, we re-examine all of the available Corralitos data (21 months from January 1989 to October 1990) and the logbook kept during this extended operational period. We also examine 1.0-Hz (1-s) data collected in Japan, 0.0167-Hz (1-min) data from the Fresno, CA magnetic observatory, and the global Kp magnetic-activity index. The Japanese data are of particular importance since their acquisition rate is sufficient to allow direct comparison with the lower-frequency bands of the Corralitos data. We identify numerous problems in the Corralitos data, evident both from straightforward examination of the Corralitos data on their own and from comparison with the Japanese and Fresno data sets. The most notable problems are changes in the baseline noise levels occurring both during the reported precursory period and at other times long before and after the earthquake. We conclude that the reported anomalous magnetic noise identified by Fraser-Smith et al. and Bernardi et al. is not related to the Loma Prieta earthquake but is an artifact of sensor-system malfunction.

  17. The Implications of Strike-Slip Earthquake Source Properties on the Transform Boundary Development Process

    NASA Astrophysics Data System (ADS)

    Neely, J. S.; Huang, Y.; Furlong, K.

    2017-12-01

Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays, and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again toward the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in

  18. Source parameters and effects of bandwidth and local geology on high- frequency ground motions observed for aftershocks of the northeastern Ohio earthquake of 31 January 1986

    USGS Publications Warehouse

    Glassmoyer, G.; Borcherdt, R.D.

    1990-01-01

A 10-station array (GEOS) yielded recordings of exceptional bandwidth (400 sps) and resolution (up to 96 dB) for the aftershocks of the moderate (mb ≈ 4.9) earthquake that occurred on 31 January 1986 near Painesville, Ohio. Nine aftershocks were recorded, with seismic moments ranging between 9 × 10^16 and 3 × 10^19 dyne-cm (MW 0.6 to 2.3). The aftershock recordings at a site underlain by ~8 m of lakeshore sediments show significant levels of high-frequency soil amplification of vertical motion at frequencies near 8, 20 and 70 Hz. Viscoelastic models for P and SV waves incident at the base of the sediments yield estimates of vertical P-wave response consistent with the observed high-frequency site resonances, but suggest additional detailed shear-wave logs are needed to account for the observed S-wave response.
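For a thin soil layer over stiffer rock, resonance frequencies follow the quarter-wavelength rule f_n = (2n-1)·v/(4h). A sketch, assuming a sediment velocity of ~260 m/s chosen to match the observed ~8 Hz fundamental (the abstract does not report a velocity); the observed higher peaks need not follow these constant-velocity overtones, since the real profile is layered.

```python
def resonant_frequencies(v, h, n_modes=3):
    """Quarter-wavelength resonances of a uniform layer of thickness h
    (m) and wave speed v (m/s): f_n = (2n - 1) * v / (4 * h)."""
    return [(2 * n - 1) * v / (4.0 * h) for n in range(1, n_modes + 1)]

# ~8 m of lakeshore sediments with an assumed velocity of 260 m/s:
# fundamental near 8 Hz, overtones near 24 and 41 Hz
freqs = resonant_frequencies(260.0, 8.0)
print([f"{f:.1f} Hz" for f in freqs])
```

The mismatch between such a constant-velocity prediction and the observed 20 and 70 Hz peaks is one reason the authors call for detailed shear-wave logs.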

  19. Octree-based Global Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.

    2017-12-01

Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, to perform global simulations of wave propagation to validate those models, and to study the interaction of seismic wavefields with 3D structures. However, traditional methods for seismogram computation at global scales are limited by computational resources, relying primarily on normal-mode summation or two-dimensional numerical methods. We present an octree-based finite element implementation for global earthquake simulations with 3D models, using topography and bathymetry represented with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared synthetic seismograms computed for a spherical earth against waveforms calculated by normal-mode summation for the Preliminary Reference Earth Model (PREM), for a point-source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. Agreement between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the computational resources available. We compared simulated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences between observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.

  20. Earthquakes for Kids

    MedlinePlus


  1. Stochastic ground-motion simulation of two Himalayan earthquakes: seismic hazard assessment perspective

    NASA Astrophysics Data System (ADS)

    Harbindu, Ashish; Sharma, Mukat Lal; Kamal

    2012-04-01

The earthquakes in Uttarkashi (October 20, 1991, Mw 6.8) and Chamoli (March 8, 1999, Mw 6.4) are among the recent well-documented earthquakes that occurred in the Garhwal region of India and caused extensive damage as well as loss of life. Using strong-motion data of these two earthquakes, we estimate their source, path, and site parameters. The quality factor (Qβ) as a function of frequency is derived as Qβ(f) = 140f^1.018. The site amplification functions are evaluated using the horizontal-to-vertical spectral ratio technique. The ground motions of the Uttarkashi and Chamoli earthquakes are simulated using the stochastic method of Boore (Bull Seismol Soc Am 73:1865-1894, 1983), with the estimated source, path, and site parameters as input. The simulated time histories are generated for a few stations and compared with the observed data. The simulated response spectra at 5% damping are in fair agreement with the observed response spectra for most of the stations over a wide range of frequencies, and residual analysis shows close agreement between the observed and simulated response spectra. The synthetic data are in rough agreement with the ground-motion attenuation equation available for the Himalayas (Sharma, Bull Seismol Soc Am 98:1063-1069, 1998).
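The stochastic method referred to combines an omega-squared source spectrum with path and site terms. Using the derived Qβ(f) = 140f^1.018, the shape of the Fourier acceleration spectrum can be sketched as below; frequency-independent constants are omitted and the kappa value is illustrative, so this reproduces the shape, not the paper's absolute amplitudes.

```python
import numpy as np

def fourier_amplitude(f, m0, fc, r_km, beta=3.5, q0=140.0, eta=1.018,
                      kappa=0.04):
    """Relative point-source Fourier acceleration spectrum (Boore, 1983
    style): Brune omega-squared source x 1/r geometric spreading x
    anelastic attenuation with Q(f) = q0 * f**eta x site kappa filter.
    Constant scaling factors are omitted; kappa is an assumed value."""
    source = m0 * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    path = (1.0 / r_km) * np.exp(-np.pi * f * r_km / (q0 * f ** eta * beta))
    site = np.exp(-np.pi * kappa * f)
    return source * path * site

# Illustrative Mw ~ 6.8 event (M0 ~ 1.6e19 N*m, fc ~ 0.12 Hz) at 50 km
f = np.logspace(-1, 1.5, 100)
spec = fourier_amplitude(f, m0=1.6e19, fc=0.12, r_km=50.0)
```

Because eta is close to 1, the anelastic term is nearly frequency independent here, and the high-frequency roll-off is controlled mainly by kappa.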

  2. Probabilistic Seismic Hazard Assessment for Himalayan-Tibetan Region from Historical and Instrumental Earthquake Catalogs

    NASA Astrophysics Data System (ADS)

    Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui

    2018-02-01

The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records dating back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using a maximum-likelihood algorithm that allows for the incompleteness of the catalog. To compute the hazard, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years, assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
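The maximum-likelihood b value for a catalog complete above a threshold magnitude mc is given by the Aki/Utsu formula, with a half-bin correction when magnitudes are binned. A sketch on a synthetic Gutenberg-Richter catalog (true b = 1.0, values hypothetical):

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) / Utsu maximum-likelihood b value for a catalog
    complete above mc, with magnitudes binned at width dm:
    b = log10(e) / (mean(M) - (mc - dm/2))."""
    m = np.asarray(mags)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, m.size

# Synthetic catalog: exponential magnitudes above the lower bin edge
# of mc = 4.0, then binned to 0.1 units as in a real catalog
rng = np.random.default_rng(1)
mags = np.round(3.95 + rng.exponential(np.log10(np.e), 20_000), 1)
b, n = b_value_mle(mags, mc=4.0)
print(f"b = {b:.2f} from {n} events")
```

In practice the threshold mc itself varies across the Himalayan-Tibetan catalogs, which is why the completeness analysis precedes the parameter estimation.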

  3. Local Deformation Precursors of Large Earthquakes Derived from GNSS Observation Data

    NASA Astrophysics Data System (ADS)

    Kaftan, Vladimir; Melnikov, Andrey

    2017-12-01

Research on deformation precursors of earthquakes attracted considerable interest from the middle to the end of the previous century. Repeated conventional geodetic measurements, such as precise levelling and linear-angular networks, were used for such studies. Many case studies of strong seismic events investigated with conventional geodetic techniques are presented in [T. Rikitake, 1976]; one of the first studies of geodetic earthquake precursors was done by Yu.A. Meshcheryakov [1968]. Infrequent re-measurement and the insufficient density and coverage of control geodetic networks made it difficult to predict the places and times of future earthquakes. The intensive development of Global Navigation Satellite Systems (GNSS) during recent decades has made such research more effective. The results of GNSS observations in the areas of three large earthquakes (Napa M6.1, USA, 2014; El Mayor Cucapah M7.2, USA, 2010; and Parkfield M6.0, USA, 2004) are processed and presented in this paper. The characteristics of land surface deformation before, during, and after the earthquakes have been obtained. The results confirm the presence of anomalous deformations near the epicentres. The temporal behaviour of dilatation and shear strain indicates spatially heterogeneous deformation of the Earth's surface from months to years before the main shock, both close to it and at some distance from it. The revealed heterogeneities can be considered deformation precursors of strong earthquakes. Based on historical data and the present research, values of critical deformation referenced to the above large earthquakes are obtained; these are proposed for building a seismic danger scale based on continuous GNSS observations. It is shown that the approach has limitations owing to uncertainty in the onset of deformation accumulation and in the expected location of the next seismic event. Verification and refinement of the derived conclusions are proposed.
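The dilatation and shear strain quantities mentioned above can be recovered from GNSS station displacements by fitting a uniform horizontal strain field. The least-squares formulation below is a standard textbook sketch, not the authors' processing chain; station layout and units are assumed for illustration.

```python
import numpy as np

def strain_2d(xy_km, east_mm, north_mm):
    """Fit a uniform horizontal strain field to GNSS displacements by
    solving u = t + G x per component (least squares), then return the
    dilatation (e_xx + e_yy) and the maximum shear strain."""
    xy = np.asarray(xy_km, float)
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    ce, *_ = np.linalg.lstsq(A, np.asarray(east_mm, float), rcond=None)
    cn, *_ = np.linalg.lstsq(A, np.asarray(north_mm, float), rcond=None)
    exx, eyy = ce[1], cn[2]                  # mm/km = 1e-6 strain (microstrain)
    exy = 0.5 * (ce[2] + cn[1])
    dilatation = (exx + eyy) * 1e-6
    max_shear = 2.0 * np.hypot(0.5 * (exx - eyy), exy) * 1e-6
    return dilatation, max_shear

# Five stations stretched uniformly east-west by 2e-6 strain:
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (5.0, 5.0)]
east = [2.0 * x for x, _ in stations]        # 2 mm of eastward motion per km
north = [0.0] * len(stations)
dil, shear = strain_2d(stations, east, north)
```

Tracking `dil` and `shear` through time per sub-network is one way the anomalous pre-seismic deformation described above could be quantified.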

  4. Very low frequency earthquakes (VLFEs) detected during episodic tremor and slip (ETS) events in Cascadia using a match filter method indicate repeating events

    NASA Astrophysics Data System (ADS)

    Hutchison, A. A.; Ghosh, A.

    2016-12-01

Very low frequency earthquakes (VLFEs) occur in transitional zones of faults, releasing seismic energy in the 0.02-0.05 Hz frequency band over a 90 s duration, and typically have magnitudes in the range Mw 3.0-4.0. VLFEs can occur down-dip of the seismogenic zone, where they can transfer stress up-dip, potentially bringing the locked zone closer to a critical failure stress. VLFEs also occur up-dip of the seismogenic zone, in a region along the plate interface that can rupture coseismically during large megathrust events such as the 2011 Tohoku-Oki earthquake [Ide et al., 2011]. VLFEs were first detected in Cascadia during the 2011 episodic tremor and slip (ETS) event, occurring coincident with tremor [Ghosh et al., 2015]. During the 2014 ETS event, however, VLFEs were spatially and temporally asynchronous with tremor activity [Hutchison and Ghosh, 2016]. Such contrasting behaviors remind us that the mechanics behind these events remain elusive, yet VLFEs are responsible for the largest portion of the moment release during an ETS event. Here, we apply a match filter method using known VLFEs as template events to detect additional VLFEs. Using a grid-search centroid moment tensor inversion method, we invert stacks of the resulting match filter detections to ensure that the moment tensor solutions are similar to those of the respective template events. Our ability to successfully employ a match filter method for VLFE detection in Cascadia intrinsically indicates that these events can be repeating, implying that the same asperities are likely responsible for generating multiple VLFEs.
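The core of a match filter is a sliding normalized cross-correlation of a template waveform against continuous data. The single-channel sketch below illustrates the idea; real VLFE detection stacks correlations over many stations and channels, and the threshold here is an assumed value.

```python
import numpy as np

def match_filter(template, stream, threshold=0.7):
    """Sliding normalized cross-correlation of a template against a
    continuous single-channel record; windows whose correlation
    coefficient exceeds the threshold are declared detections."""
    t = (template - template.mean()) / template.std()
    n = len(t)
    detections = []
    for i in range(len(stream) - n + 1):
        w = stream[i:i + n]
        s = w.std()
        if s == 0.0:
            continue
        cc = float(np.dot(t, (w - w.mean()) / s)) / n   # Pearson correlation
        if cc >= threshold:
            detections.append((i, cc))
    return detections

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
record = 0.1 * rng.standard_normal(2000)
record[700:900] += template          # bury a repeat of the template in noise
hits = match_filter(template, record)
```

That repeats of a template correlate this strongly is exactly the "repeating events" argument the abstract makes: high correlation implies a nearly identical source and location.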

  5. Possible Electromagnetic Effects on Abnormal Animal Behavior Before an Earthquake

    PubMed Central

    Hayakawa, Masashi

    2013-01-01

The statistical properties formerly summarized by Rikitake (1998) on unusual animal behavior before an earthquake (EQ) are first presented using two parameters: the epicentral distance (D) of an anomaly and its precursor (or lead) time (T). Three plots are utilized to characterize the unusual animal behavior: (i) EQ magnitude (M) versus D, (ii) log T versus M, and (iii) an occurrence histogram of log T. These plots are compared with the corresponding plots for different seismo-electromagnetic effects (radio emissions in different frequency ranges, and seismo-atmospheric and -ionospheric perturbations) extensively obtained during the last 15-20 years. From these comparisons, it is likely that lower-frequency electromagnetic emissions (ULF (ultra-low-frequency, f ≤ 1 Hz) and ELF (extremely-low-frequency, f ≤ a few hundred Hz)) exhibit a temporal evolution very similar to that of abnormal animal behavior. It is also suggested that the quantity of field intensity multiplied by the persistence time (or duration) of the noise plays the primary role in abnormal animal behavior before an EQ. PMID:26487307

  6. Earthquake focal mechanism forecasting in Italy for PSHA purposes

    NASA Astrophysics Data System (ADS)

    Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola

    2018-01-01

In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming to reduce the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions for events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set includes polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only the Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights information from past focal mechanisms distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy now under development. Finally, to evaluate the reliability of the forecasts, we test them against an independent data set consisting of some of the strongest earthquakes (Mw ≥ 3.9) that occurred during 2016 in different Italian tectonic provinces.
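The distance-weighted idea behind the Total Weighted Moment Tensor can be illustrated with a toy analogue: each past event's faulting style votes for a grid cell with a weight that decays with distance. The exponential weight form, decay length, and style categories below are assumptions for illustration, not the paper's actual weighting scheme.

```python
import math

def mechanism_probabilities(past_mechs, cell_xy, decay_km=50.0):
    """Toy distance-weighted vote over past focal mechanisms: each past
    event's style (normal / reverse / strike-slip) contributes a weight
    exp(-d / decay_km), where d is its distance from the grid cell."""
    weights = {"normal": 0.0, "reverse": 0.0, "strike-slip": 0.0}
    for style, (x, y) in past_mechs:
        d = math.hypot(x - cell_xy[0], y - cell_xy[1])
        weights[style] += math.exp(-d / decay_km)
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Two nearby normal-faulting events outweigh one distant reverse event:
past = [("normal", (10.0, 0.0)), ("normal", (30.0, 5.0)), ("reverse", (200.0, 0.0))]
probs = mechanism_probabilities(past, cell_xy=(0.0, 0.0))
```

The actual model additionally weights by magnitude and averages full moment tensors rather than discrete style labels.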

  7. The 1999 Mw 7.1 Hector Mine, California, earthquake: A test of the stress shadow hypothesis?

    USGS Publications Warehouse

    Harris, R.A.; Simpson, R.W.

    2002-01-01

We test the stress shadow hypothesis for large earthquake interactions by examining the relationship between two large earthquakes that occurred in the Mojave Desert of southern California, the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. We want to determine if the 1999 Hector Mine earthquake occurred at a location where the Coulomb stress was increased (earthquake advance, stress trigger) or decreased (earthquake delay, stress shadow) by the previous large earthquake. Using four models of the Landers rupture and a range of possible hypocentral planes for the Hector Mine earthquake, we discover that most scenarios yield a Landers-induced relaxation (stress shadow) on the Hector Mine hypocentral plane. Although this result would seem to weigh against the stress shadow hypothesis, the results become considerably more uncertain when the effects of a nearby Landers aftershock, the 1992 ML 5.4 Pisgah earthquake, are taken into account. We calculate the combined static Coulomb stress changes due to the Landers and Pisgah earthquakes to range from -0.3 to +0.3 MPa (-3 to +3 bars) at the possible Hector Mine hypocenters, depending on choice of rupture model and hypocenter. These varied results imply that the Hector Mine earthquake does not provide a good test of the stress shadow hypothesis for large earthquake interactions. We use a simple approach, that of static dislocations in an elastic half-space, yet we still obtain a wide range of both negative and positive Coulomb stress changes. Our findings serve as a caution that more complex models purporting to explain the triggering or shadowing relationship between the 1992 Landers and 1999 Hector Mine earthquakes need to also consider the parametric and geometric uncertainties raised here.
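The trigger-versus-shadow distinction above reduces to the sign of the Coulomb failure stress change resolved on the receiver fault. A minimal sketch of that bookkeeping, with an assumed effective friction coefficient:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with unclamping (tension) positive.
    Positive dCFS promotes failure (stress trigger); negative delays it
    (stress shadow). mu_eff is an assumed effective friction coefficient."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# Shear stress drops 0.5 MPa but the fault is unclamped by 0.6 MPa:
dcfs = coulomb_stress_change(-0.5, 0.6)   # = -0.26 MPa -> still a shadow
```

The hard part, as the abstract stresses, is not this formula but the uncertain slip models and receiver geometries that feed Δτ and Δσn.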

  8. Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment

    NASA Astrophysics Data System (ADS)

    Melgar Moctezuma, Diego

This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. The algorithm is demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah and the 2011 Mw 9.0 Tohoku-oki events. This dissertation also shows that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions. Throughout the dissertation, special emphasis is placed on how to compute these source models with minimal interaction from a network operator. Finally, we show that the incorporation of offshore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of the expected tsunami intensity immediately following a large earthquake.

  9. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
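The binned-rate format described above makes the likelihood computation straightforward: under the usual assumption that counts in each bin are independent Poisson variables, a forecast's score is the joint Poisson log-likelihood of the observed catalog. A minimal sketch (bin layout and rates are invented for illustration):

```python
import math

def log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed earthquake counts given
    forecast expected rates per space-magnitude bin, as used in
    RELM-style consistency and comparison tests."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll

# Two competing forecasts over four bins, scored on the same observations:
obs = [2, 0, 1, 0]
model_a = [1.5, 0.2, 0.8, 0.1]
model_b = [0.5, 1.0, 0.2, 1.0]
better = "A" if log_likelihood(model_a, obs) > log_likelihood(model_b, obs) else "B"
```

The consistency test compares a model's observed-catalog likelihood against the distribution of likelihoods from catalogs simulated under that same model; the pairwise comparison uses the likelihood ratio between two models.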

  10. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model, η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity, the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ, the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant Mo fo^3 scaling, where fo is the corner frequency), the required values are η = (Mo/Moe)^(4/3) and κ = (Mo/Moe)^(-1/3), where Mo is the moment of the simulated event and Moe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to
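The summation scheme itself is compact enough to sketch directly: η scaled copies of the subevent waveform are inserted at start times drawn uniformly over the source duration T. The toy subevent waveform and duration below are illustrative; note that the coherent (low-frequency) sum then scales as η·κ = Mo/Moe, while the incoherent (high-frequency) sum scales as √η·κ = (Mo/Moe)^(1/3), which is the similarity scaling described above.

```python
import numpy as np

def sum_subevents(subevent, mo_ratio, duration_s, dt=0.01, seed=0):
    """Random summation of subevent Green's functions with the paper's
    similarity scaling: eta = (Mo/Moe)^(4/3) copies, each scaled by
    kappa = (Mo/Moe)^(-1/3), start times uniform over the duration T."""
    eta = int(round(mo_ratio ** (4.0 / 3.0)))
    kappa = mo_ratio ** (-1.0 / 3.0)
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt) + len(subevent)
    out = np.zeros(n)
    for t0 in rng.uniform(0.0, duration_s, eta):
        i = int(t0 / dt)
        out[i:i + len(subevent)] += kappa * subevent
    return out

sub = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))   # toy subevent waveform
big = sum_subevents(sub, mo_ratio=100.0, duration_s=10.0)
```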

  11. Japan unified hIgh-resolution relocated catalog for earthquakes (JUICE): Crustal seismicity beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Yano, Tomoko E.; Takeda, Tetsuya; Matsubara, Makoto; Shiomi, Katsuhiko

    2017-04-01

We have generated a high-resolution catalog called the "Japan Unified hIgh-resolution relocated Catalog for Earthquakes" (JUICE), which can be used to evaluate the geometry and seismogenic depth of active faults in Japan. We relocated > 1.1 million hypocenters from the NIED Hi-net catalog for events which occurred between January 2001 and December 2012, down to a depth of 40 km. We apply a relative hypocenter determination method to the data in each grid square, in which all of Japan is divided into 1257 grid squares to parallelize the relocation procedure. We used a double-difference method, incorporating cross-correlation differential times as well as catalog differential times. This allows us to resolve, in detail, the seismicity distribution for the entire Japanese Islands. We estimated location uncertainty by a statistical resampling method using jackknife samples, and show that the uncertainty is within 0.37 km in the horizontal and 0.85 km in the vertical direction at a 90% confidence interval for areas with good station coverage. Our seismogenic depth estimate agrees with the lower limit of the hypocenter distribution for a recent earthquake on the Kamishiro fault (2014, Mj 6.7), which suggests that the new catalog should be useful for estimating the size of future earthquakes on inland active faults.
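Jackknife resampling of the kind used for the location uncertainties can be illustrated on the simplest statistic, the mean: delete one observation at a time, recompute, and combine the spread of the leave-one-out estimates. This is a generic delete-one sketch, not the catalog's actual per-hypocenter procedure.

```python
import math
import random

def jackknife_std(values):
    """Delete-one jackknife estimate of the standard error of the mean:
    recompute the mean with each value left out, then scale the spread
    of the leave-one-out means by (n-1)/n."""
    n = len(values)
    full = sum(values) / n
    loo = [(sum(values) - v) / (n - 1) for v in values]   # leave-one-out means
    var = (n - 1) / n * sum((m - full) ** 2 for m in loo)
    return math.sqrt(var)

random.seed(7)
depths_km = [10.0 + random.gauss(0.0, 0.8) for _ in range(50)]
sigma = jackknife_std(depths_km)   # expected near 0.8 / sqrt(50) ≈ 0.11 km
```

In the relocation setting, the "observations" deleted are arrival-time data or stations rather than scalar samples, but the combination rule is the same.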

  12. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    USGS Publications Warehouse

    Hardebeck, Jeanne L.

    2017-01-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  13. Investigation of an earthquake swarm near Trinidad, Colorado, August-October 2001

    USGS Publications Warehouse

Meremonte, Mark E.; Lahr, John C.; Frankel, Arthur D.; Dewey, James W.; Crone, Anthony J.; Overturf, Dee E.; Carver, David L.; Bice, W. Thomas

    2002-01-01

A swarm of 12 widely felt earthquakes occurred between August 28 and September 21, 2001, in the area west of the town of Trinidad, Colorado. The earthquakes ranged in magnitude between 2.8 and 4.6, and the largest event occurred on September 5, eight days after the initial M 3.4 event. The nearest permanent seismograph station to the swarm is about 290 km away, resulting in large uncertainties in the location and depth of these events. To better locate and characterize the earthquakes in this swarm, we deployed a total of 12 portable seismographs in the area of the swarm starting on September 6. Here we report on data from this portable network that were recorded between September 7 and October 15. During this time period, we have high-quality data from 39 earthquakes. The hypocenters of these earthquakes cluster to define a 6 km long northeast-trending fault plane that dips steeply (70-80°) to the southeast. The upper bound of well-constrained hypocenters is near 3 km depth and the lower bound is near 6 km depth. Preliminary fault mechanisms suggest normal faulting with movement down to the southeast. Significant historical earthquakes occurred in the Trinidad region in 1966 and 1973. Reexamination of felt reports from these earthquakes suggests that the 1973 events may have occurred in the same area, and possibly on the same fault, as the 2001 swarm. In recent years, a large volume of excess water that is produced in conjunction with coal-bed methane gas production has been returned to the subsurface in fluid disposal wells in the area of the earthquake swarm. Because of the proximity of these disposal wells to the earthquakes, local residents and officials are concerned that the fluid disposal might have triggered the earthquakes. We have evaluated the characteristics of the seismicity using criteria proposed by Davis and Frohlich (1993) as diagnostic of seismicity induced by fluid injection. We conclude that the characteristics of the seismicity and the fluid

  14. Hydraulic fracturing volume is associated with induced earthquake productivity in the Duvernay play.

    PubMed

    Schultz, R; Atkinson, G; Eaton, D W; Gu, Y J; Kao, H

    2018-01-19

A sharp increase in the frequency of earthquakes near Fox Creek, Alberta, began in December 2013 in response to hydraulic fracturing. Using a hydraulic fracturing database, we explore relationships between injection parameters and seismicity response. We show that induced earthquakes are associated with completions that used larger injection volumes (10^4 to 10^5 cubic meters) and that seismic productivity scales linearly with injection volume. Injection pressure and rate have an insignificant association with seismic response. Further findings suggest that geological factors play a prominent role in seismic productivity, as evidenced by spatial correlations. Together, volume and geological factors account for ~96% of the variability in the induced earthquake rate near Fox Creek. This result is quantified by a seismogenic index-modified frequency-magnitude distribution, providing a framework to forecast induced seismicity. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
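The seismogenic-index framework mentioned above combines the linear volume scaling with a Gutenberg-Richter magnitude distribution, so the expected event count follows N(M ≥ m) = V · 10^(Σ − b·m). The index and b value below are illustrative assumptions, not the fitted Fox Creek values.

```python
def expected_event_count(volume_m3, sigma_index, b_value, mag):
    """Seismogenic-index forecast of induced seismicity: the expected
    number of events with magnitude >= mag scales linearly with injected
    volume, N = V * 10**(sigma - b * mag)."""
    return volume_m3 * 10.0 ** (sigma_index - b_value * mag)

# A 5 x 10^4 m^3 completion with an assumed index of -2.5 and b = 1:
n_m2 = expected_event_count(5e4, sigma_index=-2.5, b_value=1.0, mag=2.0)
# ≈ 1.6 expected events with M >= 2
```

Doubling the injected volume doubles the forecast count at every magnitude, which is the linear productivity scaling reported in the abstract; the geological variability enters through spatial variation of the index Σ.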

  15. Golden Gate Bridge response: a study with low-amplitude data from three earthquakes

    USGS Publications Warehouse

    Çelebi, Mehmet

    2012-01-01

The dynamic response of the Golden Gate Bridge, located north of San Francisco, CA, has been studied previously using ambient vibration data and finite element models. Since permanent seismic instrumentation was installed in 1993, only small earthquakes that originated at distances varying between ~11 to 122 km have been recorded. Nonetheless, these records prompted this study of the response of the bridge to low-amplitude shaking caused by three earthquakes. Compared to previous ambient vibration studies, the earthquake response data reveal a slightly higher fundamental frequency (shorter period) for vertical vibration of the bridge deck center span (~7.7–8.3 s versus 8.2–10.6 s), and a much higher fundamental frequency (shorter period) for the transverse direction of the deck (~11.24–16.3 s versus ~18.2 s). In this study, it is also shown that these two periods are dominant apparent periods representing interaction between tower, cable, and deck.

  16. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95% of the total landslide area. Without any empirical calibration the model explains 56% of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

  17. Hazard Assessment and Early Warning of Tsunamis: Lessons from the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Satake, K.

    2012-12-01

Tsunami hazard assessments and long-term earthquake forecasts have not considered such triggering or the simultaneous occurrence of different types of earthquakes. The large tsunami at the Fukushima nuclear power station was due to the combination of the deep and shallow slip. Disaster prevention for low-frequency but large-scale hazards must be considered. The Japanese government has established a general policy for two levels of tsunami: L1 and L2. The L2 tsunamis are the largest possible tsunamis, with a low frequency of occurrence, but they cause devastating disasters once they occur. For such events, saving people's lives is the first priority, and soft measures such as tsunami hazard maps, evacuation facilities, and disaster education will be prepared. The L1 tsunamis are expected to occur more frequently, typically once in a few decades, and for these, hard countermeasures such as breakwaters must be prepared to protect the lives and property of residents as well as economic and industrial activities.

  18. Antarctic icequakes triggered by the 2010 Maule earthquake in Chile

    NASA Astrophysics Data System (ADS)

    Peng, Zhigang; Walter, Jacob I.; Aster, Richard C.; Nyblade, Andrew; Wiens, Douglas A.; Anandakrishnan, Sridhar

    2014-09-01

    Seismic waves from distant, large earthquakes can almost instantaneously trigger shallow micro-earthquakes and deep tectonic tremor as they pass through Earth's crust. Such remotely triggered seismic activity mostly occurs in tectonically active regions. Triggered seismicity is generally considered to reflect shear failure on critically stressed fault planes and is thought to be driven by dynamic stress perturbations from both Love and Rayleigh types of surface seismic wave. Here we analyse seismic data from Antarctica in the six hours leading up to and following the 2010 Mw 8.8 Maule earthquake in Chile. We identify many high-frequency seismic signals during the passage of the Rayleigh waves generated by the Maule earthquake, and interpret them as small icequakes triggered by the Rayleigh waves. The source locations of these triggered icequakes are difficult to determine owing to sparse seismic network coverage, but the triggered events generate surface waves, so are probably formed by near-surface sources. Our observations are consistent with tensile fracturing of near-surface ice or other brittle fracture events caused by changes in volumetric strain as the high-amplitude Rayleigh waves passed through. We conclude that cryospheric systems can be sensitive to large distant earthquakes.

  19. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    PubMed

    Fischer, Andreas

    2016-11-01

Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are all based on either a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potential regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in practice with either Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical to the conclusions from Heisenberg's uncertainty principle.
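The two proportionalities stated in the abstract can be summarized in one relation; the symbols below (minimum velocity uncertainty and scattered light power) are chosen for illustration, and the omitted prefactor depends on the optical setup:

```latex
\sigma_{v,\min} \;\propto\; \frac{|v|^{3/2}}{\sqrt{P_s}},
```

where $\sigma_{v,\min}$ is the fundamental lower bound on the velocity uncertainty, $|v|$ the magnitude of the flow velocity, and $P_s$ the scattered light power; the same form holds for both the Doppler and time-of-flight principles.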

  20. Fault healing and earthquake spectra from stick slip sequences in the laboratory and on active faults

    NASA Astrophysics Data System (ADS)

    McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.

    2011-12-01

Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick-slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1 kHz - 1 MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While the frictional properties of this glassy polymer can be characterized with rate- and state-dependent friction laws, the rate of healing in PMMA is higher than that of rock at room temperature. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from recorded signals so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high-frequency content compared to one produced by a fault which has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way. The increase in high frequency seismicity shown here