Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Earthquake source inversions are now routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
NASA Astrophysics Data System (ADS)
Yun, S.; Koketsu, K.; Aoki, Y.
2014-12-01
The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw 6.3) was the largest aftershock of the 2010 Canterbury earthquake, located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, with a length of about 30 km, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong-motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple datasets with various sensitivities to the rupture process. While previously published source models are based on one or two datasets, here we infer the rupture process with three datasets: InSAR, strong-motion, and teleseismic data. We first performed point source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip. This reveals that the observed large vertical ground motion around the source region is due to the rupture process, rather than the local subsurface structure. To investigate the effects of the 3-D velocity structure on the characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking the 3-D velocity structure of this region into account.
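The joint use of InSAR, strong-motion, and teleseismic data described above can be illustrated with a minimal sketch of a weighted, regularized least-squares slip inversion. This is a generic toy formulation, not the authors' inversion code: the Green's function matrices, data vectors, weights, and smoothing operator below are all placeholders.

```python
# Minimal sketch of a joint (multi-dataset) linear slip inversion.
# G and d for each dataset are placeholders standing in for Green's-function
# matrices and observations computed elsewhere; weights mimic relative
# confidence in InSAR, strong-motion and teleseismic data.
import numpy as np
from scipy.optimize import nnls

def joint_slip_inversion(datasets, smoothing=1.0):
    """datasets: list of (G, d, weight) tuples, one per data type."""
    n_patches = datasets[0][0].shape[1]
    rows_G = [w * G for G, d, w in datasets]
    rows_d = [w * d for G, d, w in datasets]
    # First-difference regularization to stabilize the ill-posed problem
    D = smoothing * (np.eye(n_patches) - np.eye(n_patches, k=1))
    A = np.vstack(rows_G + [D])
    b = np.concatenate(rows_d + [np.zeros(n_patches)])
    slip, _ = nnls(A, b)               # non-negative slip on each fault patch
    return slip

# Toy example with random matrices in place of real Green's functions
rng = np.random.default_rng(0)
n = 20
toy = [(rng.normal(size=(50, n)), rng.normal(size=50), 1.0),   # "InSAR"
       (rng.normal(size=(80, n)), rng.normal(size=80), 0.5),   # "strong motion"
       (rng.normal(size=(30, n)), rng.normal(size=30), 0.8)]   # "teleseismic"
print(joint_slip_inversion(toy).shape)                          # (20,)
```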
NASA Astrophysics Data System (ADS)
Kun, C.
2015-12-01
Studies have shown that ground motion parameters estimated from attenuation relationships are often greater than observed values, mainly because multiple ruptures in a large earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to apply constraints from the source to improve the accuracy of ShakeMaps. The causative fault of the Yushu Ms 7.1 earthquake is nearly vertical (dip 83°), and the source process was distinctly dispersed in time and space. The mainshock of the Yushu Ms 7.1 earthquake can therefore be divided into several sub-events based on its source process. The magnitude of each sub-event is determined from the area under its pulse in the source time function, and its location is derived from the source process. We used the ShakeMap method, with site effects taken into account, to generate a ShakeMap for each sub-event. Finally, the ShakeMap of the mainshock is acquired by spatially superposing the ShakeMaps of all sub-events. A ShakeMap for the mainshock with a single magnitude, based on the surface rupture of the causative fault from the field survey, can also be derived. We compare the ShakeMaps from both methods with the intensities obtained from field investigation. The comparisons show that the mainshock decomposition method more accurately reflects the shaking in the near field, but in the far field, where shaking is controlled by the weakening influence of the source, the estimated intensity VI area is smaller than that of the actual investigation. Seismic intensity in the far field may be related to the increased shaking duration produced by the two events. In general, the mainshock decomposition method based on the source process, which considers a ShakeMap for each sub-event, is feasible for disaster emergency response, decision-making and rapid disaster assessment after an earthquake.
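A minimal sketch of the superposition idea: compute a ground-motion field for each sub-event with a generic attenuation relation and keep the maximum value at every grid node. The attenuation coefficients, sub-event parameters, and the max-based combination rule below are illustrative assumptions, not the relation or procedure used in the study.

```python
# Toy sketch: build a composite shakemap by superposing sub-event shakemaps.
# The attenuation relation and its coefficients are placeholders.
import numpy as np

def pga_gmpe(mag, rhyp_km):
    """Hypothetical attenuation relation: log10(PGA[g]) = a*M - b*log10(R+10) - c."""
    a, b, c = 0.5, 1.3, 2.0
    return 10.0 ** (a * mag - b * np.log10(rhyp_km + 10.0) - c)

def composite_shakemap(subevents, grid_x, grid_y):
    """subevents: list of (x_km, y_km, magnitude). Returns max PGA per grid node."""
    X, Y = np.meshgrid(grid_x, grid_y)
    pga = np.zeros_like(X)
    for ex, ey, mag in subevents:
        r = np.hypot(X - ex, Y - ey)
        pga = np.maximum(pga, pga_gmpe(mag, r))   # envelope over sub-events
    return pga

grid = np.linspace(-100, 100, 201)
subevents = [(0.0, 0.0, 6.8), (30.0, 10.0, 6.5), (55.0, 15.0, 6.3)]
shakemap = composite_shakemap(subevents, grid, grid)
print(float(shakemap.max()))
```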
NASA Astrophysics Data System (ADS)
Meng, L.; Zhou, L.; Liu, J.
2013-12-01
The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity is up to VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we first analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was further developed to simulate the near-fault strong ground motion with the associated fault rupture properties at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake. The simulated intensity indicates that the maximum intensity is IX and that the region of intensity VII and above covers almost 16,000 km2, which is consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. The numerical modeling developed in this study therefore has broad application to strong ground motion prediction and intensity estimation for earthquake rescue purposes, and the estimation methods based on empirical relationships and numerical modeling also support improved understanding of the earthquake source process. Keywords: Lushan Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
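For readers unfamiliar with Brune's circular source model mentioned above, the sketch below shows the standard relations among seismic moment, stress drop, corner frequency, and the omega-square spectrum. The magnitude, stress drop, and shear-wave speed are illustrative values, not the parameters derived in the study.

```python
# Sketch of Brune's circular-crack source model: corner frequency from
# stress drop and seismic moment, and the resulting omega-square spectrum.
import numpy as np

def moment_from_mw(mw):
    return 10.0 ** (1.5 * mw + 9.1)            # N*m (Hanks & Kanamori)

def brune_corner_freq(m0, stress_drop_pa, beta_ms=3500.0):
    # fc = 0.49 * beta * (stress_drop / M0)^(1/3)   (Brune, 1970/1971)
    return 0.49 * beta_ms * (stress_drop_pa / m0) ** (1.0 / 3.0)

def brune_spectrum(freqs, m0, fc):
    # Displacement source spectrum: flat at M0 below fc, falling as f^-2 above
    return m0 / (1.0 + (freqs / fc) ** 2)

m0 = moment_from_mw(7.0)                       # illustrative Mw 7.0 event
fc = brune_corner_freq(m0, stress_drop_pa=3e6) # assumed 3 MPa stress drop
freqs = np.logspace(-2, 1, 100)
spec = brune_spectrum(freqs, m0, fc)
print(f"corner frequency ~ {fc:.3f} Hz")
```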
Anomalies of rupture velocity in deep earthquakes
NASA Astrophysics Data System (ADS)
Suzuki, M.; Yagi, Y.
2010-12-01
Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or the discarding of parameters. We applied the back projection method to teleseismic P waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
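The back projection referred to here is, at its core, delay-and-sum beamforming of teleseismic P waves over a grid of trial source points. The sketch below is a generic illustration with placeholder travel times; real applications align waveforms with a global velocity model (e.g. ak135 or IASP91) plus cross-correlation-based station corrections.

```python
# Minimal delay-and-sum back-projection sketch: stack station waveforms
# aligned on predicted P travel times for each trial source grid point.
import numpy as np

def back_project(waveforms, station_tt, dt, window_len):
    """
    waveforms   : (n_stations, n_samples) array of P waveforms
    station_tt  : (n_grid, n_stations) predicted travel times in seconds
    dt          : sample interval (s)
    window_len  : stack window length in samples
    Returns     : (n_grid, n_windows) beam power through time
    """
    n_sta, n_samp = waveforms.shape
    n_grid = station_tt.shape[0]
    n_win = (n_samp - window_len) // window_len
    power = np.zeros((n_grid, n_win))
    for g in range(n_grid):
        shifts = np.round((station_tt[g] - station_tt[g].min()) / dt).astype(int)
        for w in range(n_win):
            i0 = w * window_len
            beam = np.zeros(window_len)
            for s in range(n_sta):
                j0 = i0 + shifts[s]
                if j0 + window_len <= n_samp:
                    beam += waveforms[s, j0:j0 + window_len]
            power[g, w] = np.sum(beam ** 2)   # stacked (beam) energy
    return power

# Toy usage with random data standing in for aligned P waveforms
rng = np.random.default_rng(1)
wf = rng.normal(size=(20, 2000))
tt = rng.uniform(0, 3, size=(50, 20))
print(back_project(wf, tt, dt=0.05, window_len=100).shape)   # (50, 19)
```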
On near-source earthquake triggering
Parsons, T.; Velasco, A.A.
2009-01-01
When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ~ 50 km for M ~ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ~ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.
NASA Astrophysics Data System (ADS)
Meng, L.; Ampuero, J. P.; Rendon, H.
2010-12-01
Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an east-west oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to the teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal little detail of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green's functions, which leads to a potential quantification of the bias-related uncertainty of the back projection. Preliminary results from the Venezuela dataset show an east-to-west rupture propagation along the fault at sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet unbroken segments of the Enriquillo fault system.
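As a rough illustration of the MUSIC principle invoked above, the sketch below builds the narrowband sample covariance of array snapshots, extracts the noise subspace by eigendecomposition, and evaluates the pseudospectrum 1 / (a^H En En^H a) over trial steering vectors. It is a generic textbook formulation, not the processing chain used in the study.

```python
# Sketch of the MUSIC pseudospectrum for one frequency bin of array data.
# X holds complex Fourier coefficients; steering vectors encode trial
# slownesses (plane-wave phase delays across the array).
import numpy as np

def music_pseudospectrum(X, steering, n_sources):
    """
    X        : (n_sta, n_snap) complex narrowband snapshots
    steering : (n_grid, n_sta) complex steering vectors for trial source points
    Returns  : (n_grid,) MUSIC pseudospectrum (peaks at source directions)
    """
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvec[:, : X.shape[0] - n_sources]     # noise-subspace eigenvectors
    proj = En @ En.conj().T
    num = np.einsum("gi,ij,gj->g", steering.conj(), proj, steering)
    return 1.0 / np.real(num)                    # large where steering is orthogonal to noise space

# Toy example: one plane wave plus noise recorded on 15 sensors
rng = np.random.default_rng(0)
n_sta, n_snap = 15, 40
phase_true = np.exp(2j * np.pi * rng.uniform(size=n_sta))
X = np.outer(phase_true, rng.normal(size=n_snap)) + 0.1 * (
    rng.normal(size=(n_sta, n_snap)) + 1j * rng.normal(size=(n_sta, n_snap)))
steering = np.exp(2j * np.pi * rng.uniform(size=(200, n_sta)))
steering[0] = phase_true                          # include the true direction
P = music_pseudospectrum(X, steering, n_sources=1)
print(int(np.argmax(P)))                          # expected: 0
```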
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Detailed source process of the 2007 Tocopilla earthquake.
NASA Astrophysics Data System (ADS)
Peyrat, S.; Madariaga, R.; Campos, J.; Asch, G.; Favreau, P.; Bernard, P.; Vilotte, J.
2008-05-01
We investigated the detailed rupture process of the Tocopilla earthquake (Mw 7.7) of 14 November 2007 and of the main aftershocks that occurred in the southern part of the North Chile seismic gap using strong motion data. The earthquake happened in the middle of the permanent broadband and strong-motion IPOC network newly installed by GFZ and IPGP, and of a digital strong-motion network operated by the University of Chile. The Tocopilla earthquake is the latest large thrust subduction earthquake to have occurred in this gap since the major 1877 Iquique earthquake, which produced a destructive tsunami. The Arequipa (2001) and Antofagasta (1995) earthquakes already ruptured the northern and southern parts of the gap, and the intraplate intermediate-depth Tarapaca earthquake (2005) may have changed the tectonic loading of this part of the Peru-Chile subduction zone. For large earthquakes, the depth of the seismic rupture is bounded by the depth of the seismogenic zone. What controls the horizontal extent of the rupture for large earthquakes is less clear. Factors that influence the extent of the rupture include fault geometry, variations of material properties and stress heterogeneities inherited from the previous rupture history. For subduction zones where structures are not well known, what may have stopped the rupture is not obvious. One crucial problem raised by the Tocopilla earthquake is to understand why this earthquake did not extend farther north and, to the south, what role is played by the Mejillones peninsula, which seems to act as a barrier. The focal mechanism was determined using teleseismic waveform inversion and a geodetic analysis (cf. Campos et al.; Bejarpi et al., in the same session). We studied the detailed source process using the available strong motion data. This earthquake ruptured the interplate seismic zone over more than 150 km and generated several large aftershocks, mainly located south of the rupture area. The strong-motion data clearly show two S-wave arrivals, allowing the localization of the two sources. The main shock started north of the segment, close to Tocopilla. The rupture propagated southward. The second source was identified to start about 20 seconds later and was located 50 km south of the hypocenter. The network configuration provides a good resolution for the inverted slip distribution in the north-south direction, but a lower resolution for the east-west extent of the slip. However, this study of the source process shows a complex source with at least two slip asperities of different dynamical behavior.
NASA Astrophysics Data System (ADS)
Ide, Satoshi; Maury, Julie
2018-04-01
Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at seismological frequencies, the seismic moment rates differ significantly in the geodetic observations. This difference is ascribed to differences in the characteristic time of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.
NASA Astrophysics Data System (ADS)
Uchide, T.; Shearer, P. M.
2009-12-01
Introduction: Uchide and Ide [SSA Spring Meeting, 2009] proposed a new framework for studying the scaling and overall nature of earthquake rupture growth in terms of cumulative moment functions. For a better understanding of rupture growth processes, spatiotemporally local processes are also important. The nature of high-frequency (HF) radiation has been investigated for some time, but its role in the earthquake rupture process is still unclear. A wavelet analysis reveals that the HF radiation (e.g., 4 - 32 Hz) of the 2004 Parkfield earthquake is peaky, which implies that the sources of the HF radiation are isolated in space and time. We experiment with applying a matched filter analysis using small template events that occurred near the rupture area of the target event to test whether it can reveal the HF radiation sources of a regular large earthquake.

Method: We design a matched filter for multiple components and stations. Shelly et al. [2007] attempted to identify low-frequency earthquakes (LFEs) in non-volcanic tremor waveforms by stacking the correlation coefficients (CC) between the seismograms of the tremor and the LFE. Differing from their method, our event detection indicator is the CC between the seismograms of the target and template events recorded at the same stations, since the key information for detecting the sources is the arrival-time differences and the amplitude ratios among stations. Data from both the target and template events are normalized by the maximum amplitude of the seismogram of the template event in the cross-correlation time window. This process accounts for the radiation pattern and the distance between the source and the stations. For each small-earthquake template, high values in the CC time series suggest the possibility of HF radiation during the mainshock rupture from a location similar to that of the template event.

Application to the 2004 Parkfield earthquake: We apply the matched filter method to the 2004 Parkfield earthquake (Mw 6.0). We use seismograms recorded at the 13 stations of UPSAR [Fletcher et al., 1992]. At each station, both acceleration and velocity sensors are installed, so both large and small earthquakes are observable. We employ 184 earthquakes (M 2.0 - 3.5) as template events, using 0.5 s of the P waves on the vertical components and of the S waves on all three components. The data are bandpass-filtered between 4 and 16 Hz. One source is detected at 4 s and 12 km northwest of the hypocenter. Although the CC has generally low values, its peak is more than five times its standard deviation and is thus remarkably high. This source is close to the secondary onset revealed by a back-projection analysis of 2 - 8 Hz data from Parkfield strong motion stations [Allmann and Shearer, 2007]. While the back-projection approach images the peak of HF radiation, our method detects the onset time, which is slightly different. Another source is located at 1.2 s and 2 km southeast of the hypocenter, which may correspond to deceleration of the initial rupture. Comparison of the derived HF radiation sources with the whole rupture process will help us reveal general earthquake source dynamics.
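A minimal sketch of the network matched-filter indicator described in the Method paragraph: slide each template channel along the corresponding mainshock channel, cross-correlate, and average over channels. For simplicity this sketch uses a standard normalized cross-correlation; the study instead normalizes both traces by the template's peak amplitude in the correlation window to preserve relative amplitude information.

```python
# Sketch of a network matched filter: sliding cross-correlation per channel,
# averaged over all station/component pairs.
import numpy as np

def sliding_cc(trace, template):
    """Normalized cross-correlation of `template` slid along `trace`."""
    n = len(template)
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-20)
    out = np.empty(len(trace) - n + 1)
    for i in range(len(out)):
        w = trace[i:i + n] - trace[i:i + n].mean()
        out[i] = np.dot(w, t) / (np.linalg.norm(w) + 1e-20)
    return out

def network_cc(traces, templates):
    """Average sliding CC over all channel pairs (same station/component order)."""
    ccs = [sliding_cc(tr, tp) for tr, tp in zip(traces, templates)]
    m = min(len(c) for c in ccs)
    return np.mean([c[:m] for c in ccs], axis=0)

# Toy demo: a scaled copy of the template buried in noise at sample 300
rng = np.random.default_rng(2)
tmpl = rng.normal(size=50)
trace = 0.5 * rng.normal(size=1000)
trace[300:350] += 2.0 * tmpl
cc = network_cc([trace], [tmpl])
print(int(np.argmax(cc)))      # expected: ~300
```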
Choy, G.L.; Boatwright, J.
2007-01-01
The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.
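The window-correction step described above can be sketched as follows: average the spectral ratio (extended window over normal window) across the smaller reference events, then divide the mainshock's extended-window spectrum by that frequency-dependent operator. The helper below is a bare-bones illustration with simple amplitude spectra; it ignores the instrument response, distance corrections, and windowing details of the actual analysis.

```python
# Sketch of the extended-window spectral correction: build a frequency-dependent
# operator from reference events, then apply it to the mainshock spectrum.
import numpy as np

def fixed_length(trace, nfft):
    out = np.zeros(nfft)
    out[:min(len(trace), nfft)] = trace[:nfft]
    return out

def amplitude_spectrum(trace, dt, nfft):
    return np.abs(np.fft.rfft(fixed_length(trace, nfft))) * dt

def window_correction(reference_pairs, dt, nfft):
    """reference_pairs: list of (extended_window, normal_window) traces."""
    ratios = [amplitude_spectrum(ext, dt, nfft) /
              (amplitude_spectrum(nor, dt, nfft) + 1e-20)
              for ext, nor in reference_pairs]
    return np.mean(ratios, axis=0)          # average over reference events

def corrected_spectrum(extended_trace, correction, dt, nfft):
    return amplitude_spectrum(extended_trace, dt, nfft) / (correction + 1e-20)

# Toy usage with synthetic traces standing in for real P-wave windows
rng = np.random.default_rng(4)
dt, nfft = 0.1, 4096
pairs = [(rng.normal(size=5000), rng.normal(size=2500)) for _ in range(4)]
op = window_correction(pairs, dt, nfft)
mainshock = corrected_spectrum(rng.normal(size=6000), op, dt, nfft)
print(op.shape, mainshock.shape)
```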
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the pseudo-dynamic source modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs and the efficiency of 1-point and 2-point statistics for addressing the variability of ground motions.
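The 2-point statistics mentioned above describe the spatial correlation structure of source parameters such as slip. A minimal sketch of this idea generates a random slip field with a prescribed (anisotropic von Karman) correlation by filtering white noise in the wavenumber domain. Correlation lengths, Hurst exponent, and grid geometry are illustrative, not values from the cited studies.

```python
# Sketch: generate a 2-D random slip field with prescribed two-point statistics
# by filtering white noise with a von Karman-type power spectrum.
import numpy as np

def von_karman_slip(nx, nz, dx, ax, az, hurst, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
    KX, KZ = np.meshgrid(kx, kz)
    # von Karman amplitude spectrum ~ (1 + k^2 a^2)^-((H+1)/2), anisotropic
    k2 = (KX * ax) ** 2 + (KZ * az) ** 2
    amp = (1.0 + k2) ** (-(hurst + 1.0) / 2.0)
    phase = np.exp(2j * np.pi * rng.uniform(size=(nz, nx)))
    field = np.real(np.fft.ifft2(amp * phase))
    field -= field.min()                      # make slip non-negative
    return field / field.mean()               # normalize to unit mean slip

slip = von_karman_slip(nx=128, nz=64, dx=0.5, ax=10.0, az=5.0, hurst=0.75)
print(slip.shape, round(float(slip.mean()), 3))
```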
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
Zhang, R.R.; Ma, S.; Hartzell, S.
2003-01-01
In this article we use empirical mode decomposition (EMD) to characterize the 1994 Northridge, California, earthquake records and investigate the signatures carried over from the source rupture process. Comparison of the current study results with existing source inverse solutions that use traditional data processing suggests that the EMD-based characterization contains information that sheds light on aspects of the earthquake rupture process. We first summarize the fundamentals of the EMD and illustrate its features through the analysis of a hypothetical and a real record. Typically, the Northridge strong-motion records are decomposed into eight or nine intrinsic mode functions (IMFs), each of which emphasizes a different oscillation mode with different amplitude and frequency content. The first IMF has the highest-frequency content; frequency content decreases with increasing IMF index. With the aid of a finite-fault inversion method, we then examine aspects of the source of the 1994 Northridge earthquake that are reflected in the second to fifth IMF components. This study shows that the second IMF is predominantly wave motion generated near the hypocenter, with high-frequency content that might be related to a large stress drop associated with the initiation of the earthquake. As one progresses from the second to the fifth IMF component, there is a general migration of the source region away from the hypocenter with associated longer-period signals as the rupture propagates. This study suggests that the different IMF components carry information on the earthquake rupture process that is expressed in their different frequency bands.
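For readers unfamiliar with EMD, the sketch below implements the basic sifting loop: the local mean of cubic-spline envelopes through the maxima and minima is subtracted repeatedly until an intrinsic mode function (IMF) is extracted, and the procedure is repeated on the residual. It is a bare-bones illustration (fixed number of sifting passes, no formal stopping criterion or edge treatment), not the implementation used in the study.

```python
# Minimal empirical mode decomposition (EMD) sketch: sift out IMFs by
# subtracting the mean of upper/lower cubic-spline envelopes, then repeat
# on the residual. Edge effects and stopping criteria are handled crudely.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def envelope_mean(x, t):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 4 or len(imin) < 4:           # too few extrema to sift
        return None
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return 0.5 * (upper + lower)

def emd(x, max_imfs=9, n_sift=10):
    t = np.arange(len(x), dtype=float)
    residual = np.asarray(x, dtype=float).copy()
    imfs = []
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(n_sift):                  # fixed number of sifting passes
            m = envelope_mean(h, t)
            if m is None:
                break
            h = h - m
        if envelope_mean(h, t) is None:          # not enough extrema left: stop
            break
        imfs.append(h)
        residual = residual - h
    imfs.append(residual)                        # final trend / residual
    return imfs

# Toy demo: a high-frequency tone, a low-frequency tone and a trend
t = np.linspace(0, 10, 2000)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * t
print(len(emd(sig)), "components extracted")
```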
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
NASA Technical Reports Server (NTRS)
Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo
2011-01-01
The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites orbiting some hundreds of kilometers above the ground, such as GRACE, due to the coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements the analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7-16° ± 2°) and, simultaneously, decreasing moment magnitude (9.17-9.02 ± 0.04) with increasing source depth (15-24 km). The GRACE solution includes the cumulative moment released over a month and provides a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms, improving our understanding of the physics of megathrusts.
Navigating Earthquake Physics with High-Resolution Array Back-Projection
NASA Astrophysics Data System (ADS)
Meng, Lingsen
Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method has not yet been widely used in seismology because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distances. The high-frequency subevents are located at the edges of geodetic slip regions, which are correlated to the stopping phases associated with rupture speed reduction when the earthquake arrests.
Numerical simulation analysis on Wenchuan seismic strong motion in Hanyuan region
NASA Astrophysics Data System (ADS)
Chen, X.; Gao, M.; Guo, J.; Li, Z.; Li, T.
2015-12-01
The Wenchuan Ms 8.0 earthquake of May 12, 2008 in Sichuan Province caused 69,227 deaths, 374,643 injuries, 17,923 missing persons, direct economic losses of 845.1 billion, and the collapse of a large number of houses; reproducing the characteristics of its strong ground motion and predicting its intensity distribution therefore play an important role in mitigating the disasters of similar giant earthquakes in the future. Taking Yunnan-Sichuan Province, Wenchuan town, Chengdu city, the Chengdu basin and their vicinity as the research area, and on the basis of an available three-dimensional velocity structure model and new topography data from ChinaArray of the Institute of Geophysics, China Earthquake Administration, two types of complex source rupture process models with global and local source parameters were established. We simulated the seismic wave propagation of the Wenchuan Ms 8.0 earthquake throughout the whole three-dimensional region using the GMS discrete-grid finite-difference technique with Cerjan absorbing boundary conditions, and obtained the seismic intensity distribution in this region by analyzing data from a 50 × 50 grid of simulated ground motion output stations. The simulated results indicate that: (1) the simulated Wenchuan earthquake ground motion (PGA) and the main characteristics of the response spectra are very similar to those of the real Wenchuan earthquake records; (2) the ground motion (PGA) and response spectra of the plain are much greater than those of the adjacent mountain area because of the low velocity of the shallow surface media and the basin effect of the Chengdu basin structure; (3) when the source rupture process inverted from far-field P-wave, GPS and InSAR data and the Longmenshan Front Fault are taken into consideration in the GMS numerical simulation, significantly different waveforms and frequency content of the ground motion are obtained, and the strong motion waveform is distinctly asymmetric, which should be more realistic; this indicates that the Longmenshan Front Fault may also have been involved in seismic activity during the long (several minutes) Wenchuan earthquake process. (4) The simulated earthquake records in the Hanyuan region are indeed very strong, which suggests that the source mechanism is one reason for the Hanyuan intensity anomaly.
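The finite-difference/absorbing-boundary combination referred to above can be illustrated with a minimal 2-D acoustic scheme in which a Cerjan-style exponential taper damps the wavefield inside boundary strips at every time step. Grid size, velocity, source wavelet, and damping parameters are toy values; this is not the GMS code or the 3-D Wenchuan model.

```python
# Minimal 2-D acoustic finite-difference sketch with Cerjan-style exponential
# damping in absorbing boundary strips (toy parameters throughout).
import numpy as np

def cerjan_taper(n, width=20, a=0.015):
    """1-D damping profile: 1 in the interior, decaying toward both edges."""
    g = np.ones(n)
    for i in range(width):
        damp = np.exp(-(a * (width - i)) ** 2)
        g[i] *= damp
        g[n - 1 - i] *= damp
    return g

nx = nz = 200
dx, dt, c = 10.0, 0.001, 3000.0              # m, s, m/s (c*dt/dx = 0.3, stable)
taper = np.outer(cerjan_taper(nz), cerjan_taper(nx))
p_old = np.zeros((nz, nx))
p = np.zeros((nz, nx))
src_i, src_j, f0 = nz // 2, nx // 2, 15.0    # source position, Ricker frequency

for it in range(800):
    lap = np.zeros_like(p)
    lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                       - 4.0 * p[1:-1, 1:-1]) / dx ** 2
    p_new = 2.0 * p - p_old + (c * dt) ** 2 * lap
    arg = (np.pi * f0 * (it * dt - 1.2 / f0)) ** 2   # Ricker wavelet source term
    p_new[src_i, src_j] += (1.0 - 2.0 * arg) * np.exp(-arg)
    p_new *= taper                                    # Cerjan damping each step
    p *= taper
    p_old, p = p, p_new

print("peak |p| on the grid after 0.8 s:", float(np.abs(p).max()))
```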
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that our method can capture major features of the rupture process of large earthquakes and provide information for more detailed rupture history analysis.
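To make the sub-event parameterization concrete, the sketch below builds the far-field source time function of a single Haskell sub-event as a trapezoid (a boxcar of rise time convolved with a boxcar of apparent rupture duration, the latter stretched or compressed by a directivity factor) and sums two such sub-events. Onset times, moments, and durations are illustrative, not parameters from the paper.

```python
# Sketch of a composite source built from Haskell sub-events: each sub-event
# contributes a trapezoidal moment-rate pulse whose apparent duration depends
# on directivity; the composite source time function is the sum of the pulses.
import numpy as np

def haskell_stf(t, onset, moment, rise_time, rupture_dur, directivity=1.0):
    """Trapezoidal moment-rate pulse starting at `onset`, scaled to `moment`."""
    dt = t[1] - t[0]
    rel = t - t[0]
    box_rise = (rel < rise_time).astype(float)
    box_rupt = (rel < rupture_dur * directivity).astype(float)
    pulse = np.convolve(box_rise, box_rupt)[: len(t)] * dt   # trapezoid at t[0]
    pulse *= moment / (pulse.sum() * dt)                      # integral equals M0
    shifted = np.zeros_like(pulse)
    i0 = int(round((onset - t[0]) / dt))
    shifted[i0:] = pulse[: len(t) - i0]
    return shifted

t = np.linspace(0.0, 60.0, 6001)
stf = (haskell_stf(t, onset=2.0, moment=1.0e20, rise_time=3.0,
                   rupture_dur=10.0, directivity=0.8)
       + haskell_stf(t, onset=18.0, moment=5.0e19, rise_time=2.0,
                     rupture_dur=6.0, directivity=1.2))
print(f"total moment ~ {stf.sum() * (t[1] - t[0]):.2e} N*m")   # ~1.5e20
```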
Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake
NASA Astrophysics Data System (ADS)
Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten
2014-05-01
In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of P-wave magnitude, which generally contains large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw 9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test for the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful to reduce false and missed warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.
New perspectives on self-similarity for shallow thrust earthquakes
NASA Astrophysics Data System (ADS)
Denolle, Marine A.; Shearer, Peter M.
2016-09-01
Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
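The spectral shape described above can be sketched with one common double-corner parameterization: flat at M0 below f1, an intermediate f^-1 falloff between f1 and f2, and f^-2 above f2. This is a generic illustration of the functional form, not necessarily the exact parameterization fit in the paper, and the corner frequencies below are placeholders.

```python
# Sketch of a double-corner-frequency source spectrum and its asymptotic slopes.
import numpy as np

def double_corner_spectrum(f, m0, f1, f2):
    # Flat below f1, ~f^-1 between f1 and f2, ~f^-2 above f2
    return m0 / np.sqrt((1.0 + (f / f1) ** 2) * (1.0 + (f / f2) ** 2))

f = np.logspace(-3, 1, 200)
spec = double_corner_spectrum(f, m0=1e21, f1=0.01, f2=0.2)
# Local log-log slope: ~0 well below f1, intermediate between f1 and f2, ~-2 above f2
slopes = np.gradient(np.log10(spec), np.log10(f))
print(round(slopes[0], 2), round(slopes[len(f) // 2], 2), round(slopes[-1], 2))
```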
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; McGuire, Jeffrey J.
2018-05-01
An earthquake rupture process can be kinematically described by rupture velocity, duration and spatial extent. These key kinematic source parameters provide important constraints on earthquake physics and rupture dynamics. In particular, core questions in earthquake science can be addressed once these properties of small earthquakes are well resolved. However, these parameters of small earthquakes are poorly understood, often limited by the available datasets and methodologies. The IRIS Community Wavefield Experiment in Oklahoma deployed ~350 three-component nodal stations within 40 km2 for a month, offering an unprecedented opportunity to test new methodologies for resolving small-earthquake finite source properties at high resolution. In this study, we demonstrate the power of the nodal dataset to resolve the variations in the seismic wavefield over the focal sphere due to the finite source attributes of an M2 earthquake within the array. The dense coverage allows us to tightly constrain the rupture area using the second moment method even for such a small earthquake. The M2 earthquake was a strike-slip event that propagated unilaterally towards the surface at 90 per cent of the local S-wave speed (2.93 km/s). The earthquake lasted ~0.019 s and ruptured Lc ~70 m by Wc ~45 m. With the resolved rupture area, the stress drop of the earthquake is estimated as 7.3 MPa for Mw 2.3. We demonstrate that the maximum and minimum bounds on rupture area are within a factor of two, much lower than typical stress drop uncertainty, despite a suboptimal station distribution. The rupture properties suggest that there is little difference between the M2 Oklahoma earthquake and typical large earthquakes. The new three-component nodal systems have great potential for improving the resolution of studies of earthquake source properties.
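The second moment method mentioned above characterizes a finite rupture by the second central moments of its space-time moment-release distribution, from which a characteristic length, width, duration, and apparent rupture velocity follow (e.g., McGuire, 2004). The sketch below applies those definitions to a toy discretized rupture; the geometry and numbers are illustrative and are not meant to reproduce the values reported in the paper.

```python
# Sketch of second-moment source characterization: characteristic length, width,
# duration and apparent rupture velocity from a discretized moment-release model.
import numpy as np

def second_moment_measures(xyz, times, weights):
    """
    xyz     : (n, 3) subevent positions (m); times: (n,) centroid times (s)
    weights : (n,) moment released by each subevent (N*m)
    """
    w = weights / weights.sum()
    x0 = w @ xyz                                  # spatial centroid
    t0 = w @ times                                # centroid time
    dx, dt = xyz - x0, times - t0
    mu_xx = (w[:, None, None] * dx[:, :, None] * dx[:, None, :]).sum(axis=0)
    mu_tt = w @ dt ** 2
    mu_xt = (w[:, None] * dx * dt[:, None]).sum(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(mu_xx))[::-1]
    L_c = 2.0 * np.sqrt(eigvals[0])               # characteristic rupture length
    W_c = 2.0 * np.sqrt(max(eigvals[1], 0.0))     # characteristic width
    tau_c = 2.0 * np.sqrt(mu_tt)                  # characteristic duration
    v0 = mu_xt / mu_tt                            # apparent rupture velocity vector
    return L_c, W_c, tau_c, float(np.linalg.norm(v0))

# Toy unilateral rupture: ~70 m long, lasting ~0.02 s, uniform moment release
n = 50
s = np.linspace(0, 1, n)
xyz = np.column_stack([70.0 * s, 13.5 * np.sin(np.pi * s), np.zeros(n)])
times = 0.019 * s
L_c, W_c, tau_c, v = second_moment_measures(xyz, times, np.ones(n))
print(f"L_c={L_c:.1f} m, W_c={W_c:.1f} m, tau_c={tau_c*1e3:.1f} ms, v={v:.0f} m/s")
```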
USGS GNSS Applications to Earthquake Disaster Response and Hazard Mitigation
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Murray, J. R.; Minson, S. E.
2015-12-01
Rapid characterization of earthquake rupture is important during a disaster because it establishes which fault ruptured and the extent and amount of fault slip. These key parameters, in turn, can augment in situ seismic sensors for identifying disruption to lifelines as well as localized damage along the fault break. Differential GNSS station positioning, along with imagery differencing, are important methods for augmenting seismic sensors. During response to recent earthquakes (the 1989 Loma Prieta, 1992 Landers, 1994 Northridge, 1999 Hector Mine, 2010 El Mayor-Cucapah, 2012 Brawley Swarm and 2014 South Napa earthquakes), GNSS co-seismic and post-seismic observations proved to be essential for rapid earthquake source characterization. Often, we find that GNSS results indicate key aspects of the earthquake source that would not have been known in the absence of GNSS data. Seismic, geologic, and imagery data alone, without GNSS, would miss important details of the earthquake source. That is, GNSS results provide important additional insight into the earthquake source properties, which in turn helps understand the relationship between shaking and damage patterns. GNSS also adds to understanding of the distribution of slip along strike and with depth on a fault, which can help determine possible lifeline damage due to fault offset, as well as the vertical deformation and tilt that are vitally important for gravitationally driven water systems. The GNSS processing work flow that took more than one week 25 years ago now takes less than one second. Formerly, portable receivers needed to be set up at a site and operated for many hours, with the data then retrieved, processed and modeled through a series of manual steps. The establishment of continuously telemetered, continuously operating high-rate GNSS stations and the robust automation of all aspects of data retrieval and processing has led to sub-second overall system latency. Within the past few years, the final challenges of standardization and adaptation to the existing framework of the ShakeAlert earthquake early warning system have been met, such that real-time GNSS processing and input to ShakeAlert is now routine and in use. Ongoing adaptation and testing of algorithms remain the last step towards fully operational incorporation of GNSS into ShakeAlert by USGS and its partners.
Choy, George; Rubinstein, Justin L.; Yeck, William; McNamara, Daniel E.; Mueller, Charles; Boyd, Oliver
2016-01-01
The largest recorded earthquake in Kansas occurred northeast of Milan on 12 November 2014 (Mw 4.9) in a region previously devoid of significant seismic activity. Applying multistation processing to data from local stations, we are able to detail the rupture process and rupture geometry of the mainshock, identify the causative fault plane, and delineate the expansion and extent of the subsequent seismic activity. The earthquake followed rapid increases of fluid injection by multiple wastewater injection wells in the vicinity of the fault. The source parameters and behavior of the Milan earthquake and foreshock–aftershock sequence are similar to characteristics of other earthquakes induced by wastewater injection into permeable formations overlying crystalline basement. This earthquake also provides an opportunity to test the empirical relation that uses felt area to estimate moment magnitude for historical earthquakes for Kansas.
NASA Astrophysics Data System (ADS)
Garagash, I. A.; Lobkovsky, L. I.; Mazova, R. Kh.
2012-04-01
The generation of the strongest earthquakes, with magnitudes near or above 9, and of the catastrophic tsunamis they induce is studied by the authors on the basis of a new approach to the generation process occurring in subduction zones during an earthquake. The need for such studies is connected with the recent catastrophic underwater earthquake of 11 March 2011 close to the northeastern coastline of Japan and the catastrophic tsunami that followed it, which led to vast numbers of victims and colossal damage in Japan. Of essential importance in this study is the strength of the earthquake (magnitude M = 9), unexpected by all specialists, which induced a very strong tsunami with wave runup heights on the beach of up to 10 meters. The model we have elaborated for the interaction of the oceanic lithosphere with island-arc blocks in subduction zones, taking into account incomplete stress discharge during the seismic process and further accumulation of elastic energy, permits us to explain the occurrence of the strongest mega-earthquakes, such as the catastrophic earthquake with its source in the Japan deep-sea trench in March 2011. In our model, wide possibilities for the numerical simulation of the dynamic behavior of an underwater seismic source are provided by a kinematic model of the seismic source as well as by a numerical program, developed by the authors, for calculating tsunami wave generation by dynamic and kinematic seismic sources. The method permits taking into account the contribution of residual tectonic stress in lithospheric plates, which increases the earthquake energy and is usually not taken into account to date.
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.
2017-06-01
Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data is acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data is based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA, earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher resolution over a larger area than either data source alone.
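A purely illustrative sketch of the two-source idea: map a tweet rate to MMI through an assumed log-linear relation, then merge the social estimate with an instrumental estimate by inverse-variance weighting. The coefficients, variances, and the weighting scheme are placeholders, not the empirical relation calibrated on the South Napa earthquake or the streaming implementation described in the study.

```python
# Illustrative sketch only: tweet-rate-based MMI plus a simple merge with
# instrumental MMI on a grid. All coefficients are hypothetical.
import numpy as np

def mmi_from_tweet_rate(tweets_per_min, a=2.0, b=1.5):
    """Hypothetical relation: MMI = a + b * log10(rate + 1)."""
    return a + b * np.log10(np.asarray(tweets_per_min, dtype=float) + 1.0)

def merge_intensity(mmi_seismic, var_seismic, mmi_social, var_social):
    """Inverse-variance weighted combination of the two intensity estimates."""
    w1, w2 = 1.0 / var_seismic, 1.0 / var_social
    return (w1 * mmi_seismic + w2 * mmi_social) / (w1 + w2)

rates = [0, 5, 50, 200]                       # tweets/min in four grid cells
mmi_social = mmi_from_tweet_rate(rates)
mmi_seismic = np.array([2.5, 4.0, 5.5, 6.5])  # e.g. ShakeMap-style estimates
print(merge_intensity(mmi_seismic, 0.3, mmi_social, 1.0).round(2))
```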
Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process
NASA Astrophysics Data System (ADS)
Sheng, Y.; Yin, J.; Yao, H.
2014-12-01
Seismic array methods, in both time- and frequency- domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, the high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the methods of MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use a Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate synthetic sources that we specify, for instance, two sources separated in space but with waveforms that completely overlap in the time domain. We also test the effects of the sliding window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window scheme. Based on our tests, we find that CS, which is developed from the theory of sparsity inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when the data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts along with the sliding of the time window. Furthermore, we propose a new method, which combines both the time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega-earthquake in order to better capture the rupture characteristics (e.g., rupture area and velocity) of this earthquake.
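As a point of reference for the comparison above, the sketch below evaluates the frequency-domain MVDR (Capon) beam power next to conventional beam power for one narrowband frequency bin, with diagonal loading to stabilize the covariance inverse. It is a generic textbook formulation, not the specific implementations tested in the study.

```python
# Sketch of narrowband beam power: conventional P = a^H R a versus
# MVDR (Capon) P = 1 / (a^H R^-1 a), with diagonal loading.
import numpy as np

def beam_powers(X, steering, loading=1e-3):
    """
    X        : (n_sta, n_snap) complex narrowband snapshots
    steering : (n_grid, n_sta) trial steering vectors
    Returns  : (conventional_power, mvdr_power), each of shape (n_grid,)
    """
    n_sta = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                       # sample covariance
    conv = np.real(np.einsum("gi,ij,gj->g", steering.conj(), R, steering))
    Rl = R + loading * np.real(np.trace(R)) / n_sta * np.eye(n_sta)
    Rinv = np.linalg.inv(Rl)
    mvdr = 1.0 / np.real(np.einsum("gi,ij,gj->g", steering.conj(), Rinv, steering))
    return conv, mvdr

# Toy example: two closely spaced plane waves plus noise on a 15-element array
rng = np.random.default_rng(5)
n_sta, n_snap = 15, 60
a1 = np.exp(2j * np.pi * 0.10 * np.arange(n_sta))
a2 = np.exp(2j * np.pi * 0.14 * np.arange(n_sta))
X = (np.outer(a1, rng.normal(size=n_snap)) + np.outer(a2, rng.normal(size=n_snap))
     + 0.2 * (rng.normal(size=(n_sta, n_snap)) + 1j * rng.normal(size=(n_sta, n_snap))))
slowness = np.linspace(0.0, 0.25, 200)
steering = np.exp(2j * np.pi * np.outer(slowness, np.arange(n_sta)))
conv, mvdr = beam_powers(X, steering)
print(slowness[np.argmax(conv)], slowness[np.argmax(mvdr)])
```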
NASA Astrophysics Data System (ADS)
Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.
2017-04-01
We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of rupture parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
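As an illustration of the spatial sub-sampling step mentioned above, the sketch below simply block-averages an unwrapped interferogram and returns a crude per-block error estimate; it is not the kite implementation (which performs adaptive, interactive sub-sampling), and the array shapes and block size are assumptions.

```python
# Illustrative spatial sub-sampling of an unwrapped interferogram by block
# averaging; only a stand-in for an adaptive (quadtree-style) scheme.
import numpy as np

def block_downsample(displacement, block=16):
    """displacement: 2-D array of LOS displacement (m), NaN where unwrapping
    failed. Returns block means and per-block standard deviations, the latter
    serving as a crude data-error estimate."""
    ny, nx = displacement.shape
    ny, nx = ny - ny % block, nx - nx % block
    d = displacement[:ny, :nx].reshape(ny // block, block, nx // block, block)
    mean = np.nanmean(d, axis=(1, 3))
    std = np.nanstd(d, axis=(1, 3))
    return mean, std
```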
Rapid estimate of earthquake source duration: application to tsunami warning.
NASA Astrophysics Data System (ADS)
Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier
2016-04-01
We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of this seismic parameter is to detect abnormally slow ruptures that are characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated against two other independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process that determines the source time function (Vallée et al., 2011). The estimated source duration is also compared with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunamis depend strongly on the source estimation: the better the source estimation, the better the tsunami forecast. The source duration is not directly injected into the numerical simulations of tsunamis, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake that occurs in the shallower part of the subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will be decreased while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Inversely, a rapid 'snappy' earthquake that has poor tsunami excitation power will be characterized by a higher rigidity modulus, and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake. References: Clément, J. and Reymond, D. (2014). New tsunami forecast tools for the French Polynesia tsunami warning system. Pure Appl. Geophys., 171. Duputel, Z., Rivera, L., Kanamori, H. and Hayes, G. (2012). W phase source inversion for moderate to large earthquakes. Geophys. J. Int., 189, 1125-1147. Kanamori, H. (1972). Mechanism of tsunami earthquakes. Phys. Earth Planet. Inter., 6, 246-259. Kanamori, H. and Rivera, L. (2008). Source inversion of W phase: speeding up seismic tsunami warning. Geophys. J. Int., 175, 222-238. Newman, A. and Okal, E. (1998). Teleseismic estimates of radiated seismic energy: the E/M0 discriminant for tsunami earthquakes. J. Geophys. Res., 103, 26885-26898. Ni, S., Kanamori, H. and Helmberger, D. (2005). Energy radiation from the Sumatra earthquake. Nature, 434, 582. Okal, E. A. and Hébert, H. (2007). Far-field modeling of the 1946 Aleutian tsunami. Geophys. J. Int., 169, 1229-1238. Vallée, M., Charléty, J., Ferreira, A. M. G., Delouis, B. and Vergoz, J. (2011). SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body wave deconvolution. Geophys. J. Int., 184, 338-358.
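The duration measurement itself can be sketched as follows: band-pass the teleseismic P wave at high frequency, take its envelope, and measure how long it stays above a fraction of the peak. The pass band and threshold below are assumptions, not the values used by the authors.

```python
# Sketch of an envelope-based duration measurement on a teleseismic P wave.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def source_duration(trace, dt, fmin=1.0, fmax=4.0, threshold=0.2):
    """Return the time (s) the high-frequency envelope exceeds a fraction of
    its peak; band limits and threshold are illustrative choices."""
    b, a = butter(4, [fmin, fmax], btype="bandpass", fs=1.0 / dt)
    hf = filtfilt(b, a, trace)
    envelope = np.abs(hilbert(hf))
    above = envelope > threshold * envelope.max()
    idx = np.flatnonzero(above)
    return (idx[-1] - idx[0]) * dt if idx.size else 0.0
```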
Miller, A.D.; Julian, B.R.; Foulger, G.R.
1998-01-01
The volcanic and geothermal areas of Iceland are rich sources of non-double-couple (non-DC) earthquakes. A state-of-the-art digital seismometer network deployed at the Hengill-Grensdalur volcanic complex in 1991 recorded 4000 small earthquakes. We used the best recorded of these to determine 3-D VP and VP/VS structure tomographically and accurate earthquake moment tensors. The VP field is dominated by high-wave-speed bodies interpreted as solidified intrusions. A widespread negative (-4 per cent) VP/VS anomaly in the upper 4 km correlates with the geothermal field, but is too strong to be caused solely by the effect of temperature upon liquid water or the presence of vapour, and requires in addition mineralogical or lithological differences between the geothermal reservoir and its surroundings. These may be caused by geothermal alteration. Well-constrained moment tensors were obtained for 70 of the best-recorded events by applying linear programming methods to P- and S-wave polarities and amplitude ratios. About 25 per cent of the mechanisms are, within observational error, consistent with DC mechanisms and thus with shear faulting. The other 75 per cent have significantly non-DC mechanisms. Many have substantial explosive components, one has a substantial implosive component, and the deviatoric component of many is strongly non-DC. Many of the non-DC mechanisms are consistent, within observational error, with simultaneous tensile and shear faulting. However, the mechanisms occupy a continuum in source-type parameter space and probably at least one additional source process is occurring. This may be fluid flow into newly formed cracks, causing partial compensation of the volumetric component. Studying non-shear earthquakes such as these has great potential for improving our understanding of geothermal processes and earthquake source processes in general.
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
Source processes of strong earthquakes in the North Tien-Shan region
NASA Astrophysics Data System (ADS)
Kulikova, G.; Krueger, F.
2013-12-01
The Tien-Shan region attracts the attention of scientists worldwide due to its complexity and tectonic uniqueness. A series of very strong, destructive earthquakes occurred in Tien-Shan at the turn of the XIX and XX centuries. Such large intraplate earthquakes are rare, which increases the interest in the Tien-Shan region. The present study focuses on the source processes of large earthquakes in Tien-Shan. The amount of seismic data is limited for those early times. In 1889, when a major earthquake occurred in Tien-Shan, seismic instruments were installed in very few locations in the world, and those analog records have not survived. Although around a hundred seismic stations were operating worldwide at the beginning of the XX century, it is not always possible to obtain high-quality analog seismograms. Digitizing seismograms is a very important step in working with analog seismic records. When working with historical seismic records, one has to take into account all the aspects and uncertainties of manual digitizing and the lack of accurate timing and instrument characteristics. In this study, we develop an easy-to-handle and fast digitization program on the basis of existing software, which speeds up the digitizing process and accounts for all the recording system uncertainties. Owing to the lack of absolute timing for the historical earthquakes (due to the absence of a universal clock at that time), we used time differences between P and S phases to relocate the earthquakes in North Tien-Shan and the body-wave amplitudes to estimate their magnitudes. Combining our results with geological data, five earthquakes in North Tien-Shan were precisely relocated. The digitizing of records can introduce steps into the seismograms, which makes restitution (removal of the instrument response) undesirable. To avoid restitution, we simulated historic seismograph recordings with given values for the damping and free period of the respective instrument and compared the amplitude ratios (between P, PP, S and SS) of the real data and the simulated seismograms. At first, the depth and the focal mechanism of the earthquakes were determined based on the amplitude ratios for a point source. Further, on the basis of the ISOLA software, we developed an application which calculates kinematic source parameters for historical earthquakes without restitution. Based on a sub-events approach, kinematic source parameters could be determined for a subset of the events. We present the results for five major instrumentally recorded earthquakes in North Tien-Shan. The strongest one was the Chon-Kemin earthquake of 3 January 1911. Its relocated epicenter is 42.98N, 77.33E, 80 kilometers south of the catalog location. The depth is determined to be 28 km. The obtained focal mechanism shows strike, dip, and slip angles of 44°, 82°, and 56°, respectively. The moment magnitude is calculated to be Mw 8.1. The source time duration is 45 s, which gives about a 120 km rupture length.
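The relocation constraint provided by the S-P time can be illustrated in a few lines; the crustal velocities below are placeholders, not the values adopted in the study.

```python
# Sketch of the relocation constraint used when absolute timing is missing:
# the S-P time alone fixes the hypocentral distance for assumed velocities.
def distance_from_sp_time(ts_minus_tp, vp=6.0, vs=3.5):
    """Hypocentral distance (km) from the S-P time (s), assuming constant
    crustal velocities vp and vs in km/s (placeholder values)."""
    return ts_minus_tp * vp * vs / (vp - vs)

print(distance_from_sp_time(25.0))  # ~210 km for the assumed velocities
```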
Low-frequency source parameters of twelve large earthquakes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Harabaglia, Paolo
1993-01-01
A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could enhance run times for full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes, yet requires far less computation power. The authors have used this approach for risk assessment towards identifying the effectiveness and profitability of risk mitigation measures, using an optimization model for resource allocation. Based on the error-computation trade-off, 62 earthquake scenarios are chosen for this purpose.
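A toy version of the scenario-reduction idea is sketched below as a greedy selection; the paper solves the problem exactly as a mixed-integer linear program, so this is only an illustrative stand-in with assumed inputs.

```python
# Illustrative greedy stand-in for the mixed-integer optimisation: pick
# scenarios one at a time so that the reduced-set hazard curve stays close
# to the full-catalogue curve.
import numpy as np

def reduce_scenarios(exceed_matrix, full_rates, n_keep):
    """exceed_matrix: (n_events, n_levels) 0/1 exceedance of each ground-
    motion level by each event; full_rates: per-event annual rates.
    Returns indices of the selected scenarios and their rescaled rates."""
    target = full_rates @ exceed_matrix            # full hazard curve
    chosen = []
    for _ in range(n_keep):
        best, best_err = None, np.inf
        for i in range(len(full_rates)):
            if i in chosen:
                continue
            trial = chosen + [i]
            scale = full_rates.sum() / full_rates[trial].sum()
            curve = (scale * full_rates[trial]) @ exceed_matrix[trial]
            err = np.abs(curve - target).sum()
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
    scale = full_rates.sum() / full_rates[chosen].sum()
    return chosen, scale * full_rates[chosen]
```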
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
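The core of such a slip-distribution inversion can be sketched as a non-negative least-squares problem once unit-slip synthetics (e.g. from an SGT database) are assembled into a matrix; the damping scheme below is an assumption, not necessarily the authors' regularisation.

```python
# Minimal slip-inversion sketch: with precomputed unit-slip synthetics
# stacked into a matrix G, subfault slip follows from a damped,
# non-negative least-squares fit to the observed waveforms.
import numpy as np
from scipy.optimize import nnls

def invert_slip(G, d, damping=0.1):
    """G: (n_samples, n_subfaults) unit-slip synthetics; d: (n_samples,)
    observed waveforms. A simple zeroth-order damping term stabilises the
    solution (illustrative choice)."""
    n_sub = G.shape[1]
    G_aug = np.vstack([G, damping * np.eye(n_sub)])
    d_aug = np.concatenate([d, np.zeros(n_sub)])
    slip, _ = nnls(G_aug, d_aug)
    return slip
```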
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com; Afnimar; Puspito, Nanang T.
This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M0), the moment magnitude (MW), the rupture duration (T0) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities by processing teleseismic wave signals, using the initial phase of the P wave with a bandpass filter of 0.001 Hz to 5 Hz. The dataset comprises 84 broadband seismometers at distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with MW=7.8 and the 17 July 2006 Pangandaran earthquake with MW=7.7 meet the criteria for tsunami earthquakes, with ratio Θ=-6.1, long rupture durations T0>100 s and high tsunamis H>7 m. The 2 September 2009 Tasikmalaya earthquake with MW=7.2, Θ=-5.1 and T0=27 s is characterized as a small tsunamigenic earthquake.
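The slowness discriminant Θ used above can be computed directly; the example values below are assumed, not taken from the listed events.

```python
# Sketch of the discriminant Theta = log10(E/M0); values around -6 or lower
# flag slow, tsunami-earthquake-like ruptures (Newman & Okal, 1998).
import numpy as np

def theta(radiated_energy_joule, seismic_moment_nm):
    return np.log10(radiated_energy_joule / seismic_moment_nm)

# Example with assumed numbers for a slow event: E = 5e13 J, M0 = 6.3e20 N.m
print(theta(5e13, 6.3e20))   # about -7.1, well below the ~ -4.9 global average
```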
Exploiting broadband seismograms and the mechanism of deep-focus earthquakes
NASA Astrophysics Data System (ADS)
Jiao, Wenjie
1997-09-01
Modern broadband seismic instrumentation has provided enormous opportunities to retrieve information in almost any frequency band of seismic interest. In this thesis, we have investigated the long-period responses of broadband seismometers and the problem of recovering actual ground motion. For the first time, we recovered the static offset for an earthquake from dynamic seismograms. The very long period waves of the near- and intermediate-field terms from the 1994 large Bolivian deep earthquake (depth = 630 km, MW = 8.2) and the 1997 large Argentina deep earthquake (depth = 285 km, MW = 7.1) are successfully recovered from the portable broadband recordings of the BANJO and APVC networks. These waves provide another dynamic window into the seismic source process and may provide unique information to help constrain the source dynamics of deep earthquakes in the future. We have developed a new method to locate global explosion events based on broadband waveform stacking and simulated annealing. This method utilizes the information provided by the full broadband waveforms. Instead of "picking times", the character of the wavelet is used for locating events. The application of this methodology to a Lop Nor nuclear explosion is very successful, and suggests a procedure for automatic monitoring. We have discussed the problem of deep earthquakes from the viewpoint of rock mechanics and seismology. The rupture propagation of deep earthquakes requires a slip-weakening process unlike that for shallow events. However, this process is not necessarily the same as the process which triggers the rupture. Partial melting due to stress release is proposed to account for the slip-weakening process in deep earthquake rupture. The energy required for partial melting in this model is of the same order as the maximum energy required for the slip-weakening process in shallow earthquake rupture. However, the verification of this model requires experimental work on the thermodynamic properties of rocks under non-hydrostatic stress. The solution of the deep earthquake problem will require an interdisciplinary study of seismology, high-pressure rock mechanics, and mineralogy.
NASA Astrophysics Data System (ADS)
Zhao, Fengfan; Meng, Lingyuan
2016-04-01
The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The maximum intensity is up to VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process with the source mechanism and empirical relationships, and estimated the strong ground motion in the near-fault region based on Brune's circular model. A dynamical composite source model (DCSM) has been developed to simulate the near-fault strong ground motion with associated fault rupture properties at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, which differs from the overshoot behavior of the Wenchuan earthquake. Moreover, we discuss the characteristics of the strong ground motion in the near-fault region: the broadband synthetic ground-motion predictions for Boxing and Lushan city produce larger peak values, shorter durations and higher frequency content. This indicates that the near-fault strong ground motion was influenced by the higher effective stress drop and the asperity slip distribution on the fault plane. This work is financially supported by the Natural Science Foundation of China (Grant No. 41404045) and by Science for Earthquake Resilience of CEA (XH14055Y).
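For reference, the omega-squared (Brune) source spectrum underlying such circular-model ground-motion estimates can be written in a few lines; the moment and corner frequency below are placeholders, not the Lushan values.

```python
# Sketch of the omega-squared (Brune) far-field displacement source spectrum:
# flat at low frequency, falling off as f**-2 above the corner frequency.
import numpy as np

def brune_displacement_spectrum(freq, moment_nm, corner_freq):
    """Spectrum proportional to M0 / (1 + (f/fc)**2); placeholder inputs."""
    return moment_nm / (1.0 + (freq / corner_freq) ** 2)

freqs = np.logspace(-2, 1, 200)
spec = brune_displacement_spectrum(freqs, moment_nm=4e19, corner_freq=0.12)
```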
NASA Astrophysics Data System (ADS)
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damage and had a great impact on local nature and society. Referring to the tectonic environment and defined active faults, the field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated in historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiated in the epicentral area near the Humps fault, and then propagated northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike-slip and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slip of the upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows the processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now, data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1D-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure large flexibility in the parametrization of medium models (e.g. 1D to 3D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
Seismic Sources for the Territory of Georgia
NASA Astrophysics Data System (ADS)
Tsereteli, N. S.; Varazanashvili, O.
2011-12-01
The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformation, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformation in the region is largely determined by the wedge-shaped rigid Arabian block intensively indented into the relatively mobile Middle East-Caucasian region. Georgia is a partner in the ongoing regional project EMME. The main objective of EMME is the calculation of earthquake hazard uniformly and to high standards. One approach used in the project is probabilistic seismic hazard assessment. In this approach the first required parameter is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between neotectonic structures of the region and earthquakes. The requirements of modern PSH software regarding fault geometry are very high. As our knowledge of active fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones that are characterized by more or less uniform seismicity. Our poor knowledge of the processes occurring deep in the Earth is connected with the difficulty of direct measurement. From this point of view, the reliable data obtained from earthquake fault plane solutions are unique for understanding the character of the current tectonic activity of the investigated area. There are two methods of identifying seismic sources. The first is the seismotectonic approach, based on the identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of the maximum earthquake Mmax. In the second method, seismic sources are identified on the basis of structural geology, parameters of seismicity and seismotectonics. This last approach was used by us. To achieve this purpose it was necessary to solve the following problems: to calculate the parameters of seismotectonic deformation; to reveal regularities in the character of earthquake fault plane solutions; and to use the obtained regularities to develop principles for establishing borders between various hierarchical and scale levels of seismic deformation fields and to give their geological interpretation. Three-dimensional matching of active faults with real geometrical dimensions and earthquake sources has been investigated. Finally, each zone has been defined with the following parameters: the geometry, the magnitude-frequency parameters, the maximum magnitude, and the depth distribution, as well as modern dynamical characteristics widely used for complex processes.
Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake
Hayes, Gavin P.
2011-01-01
On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.
2015-07-01
This paper describes the current understanding of the interaction between geospheres resulting from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization involve the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents in nuclear power plants and radioactive waste storage, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.
Generalized interferometry - I: theory for interstation correlations
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian
2017-02-01
We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, and to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, that links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, that should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
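A minimal sketch of the kind of nonlinear processing the theory accounts for, one-bit normalisation followed by spectral whitening before correlation, is given below; the whitening water level and window handling are assumptions.

```python
# Sketch of nonlinear pre-processing before interstation correlation:
# one-bit normalisation and spectral whitening, then frequency-domain
# cross-correlation of two station records of equal length.
import numpy as np

def whiten(trace, eps=1e-10):
    spec = np.fft.rfft(trace)
    return np.fft.irfft(spec / (np.abs(spec) + eps), n=len(trace))

def processed_correlation(u1, u2):
    p1 = whiten(np.sign(u1))          # one-bit normalisation, then whitening
    p2 = whiten(np.sign(u2))
    corr = np.fft.irfft(np.fft.rfft(p1) * np.conj(np.fft.rfft(p2)), n=len(p1))
    return np.fft.fftshift(corr)      # zero lag moved to the centre
```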
Change-point detection of induced and natural seismicity
NASA Astrophysics Data System (ADS)
Fiedler, B.; Holschneider, M.; Zoeller, G.; Hainzl, S.
2016-12-01
Earthquake rates are influenced by tectonic stress buildup, earthquake-induced stress changes, and transient aseismic sources. While the first two sources can be well modeled because the source is known, transient aseismic processes are more difficult to detect. However, the detection of the associated changes in earthquake activity is of great interest, because it might help to identify natural aseismic deformation patterns (such as slow slip events) and the occurrence of induced seismicity related to human activities. We develop a Bayesian approach to detect change-points in seismicity data which are modeled by Poisson processes. By means of a likelihood-ratio test, we assess the significance of the change in intensity. The model is also extended to spatiotemporal data to detect the area of the transient changes. The method is first tested on synthetic data and then applied to observational data from the central US and the Bardarbunga volcano in Iceland.
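A stripped-down, non-Bayesian version of the single change-point test can be written as a Poisson likelihood-ratio scan; the chi-square reference distribution below is only approximate, and the binning is an assumption.

```python
# Minimal single-change-point sketch for a Poisson event-count series:
# compare the one-rate model with the best two-rate model via a
# likelihood-ratio statistic (the paper embeds this in a Bayesian scheme).
import numpy as np
from scipy.stats import chi2

def poisson_loglik(counts, rate):
    rate = max(rate, 1e-12)
    return np.sum(counts * np.log(rate) - rate)   # constants dropped

def change_point_test(counts):
    """counts: events per time bin. Returns best change index and an
    approximate p-value (chi-square with 1 d.o.f. is only a rough reference
    because the change-point location is itself estimated)."""
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    ll0 = poisson_loglik(counts, counts.mean())
    best_k, best_ll = None, -np.inf
    for k in range(1, n):
        ll = (poisson_loglik(counts[:k], counts[:k].mean())
              + poisson_loglik(counts[k:], counts[k:].mean()))
        if ll > best_ll:
            best_k, best_ll = k, ll
    stat = 2.0 * (best_ll - ll0)
    return best_k, chi2.sf(stat, df=1)
```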
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and reliability of current inversion methods and to discuss future developments.
Near-field observations of microearthquake source physics using dense array
NASA Astrophysics Data System (ADS)
Chen, X.; Nakata, N.; Abercrombie, R. E.
2017-12-01
The recorded waveform includes contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. This problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to those of near-site attenuation effects. Fortunately, this problem can be remedied by dense near-field recordings at high frequency, and large databases with a wide magnitude range. The 2016 IRIS wavefield experiment provides high-quality recordings of earthquake sequences in north-central Oklahoma with about 400 sensors in a 15 km area. Preliminary processing of the IRIS wavefield array resulted in about 20,000 microearthquakes ranging from M-1 to M2, while only 2 earthquakes are listed in the catalog during the same time period. A preliminary examination of the catalog reveals three similar-magnitude earthquakes (M 2) that occurred at similar locations within 9 seconds of each other. Utilizing this catalog, we will combine individual empirical Green's function (EGF) analysis and stacking over multiple EGFs to examine whether there are any systematic variations of source time functions and spectral ratios across the array, which will provide constraints on rupture complexity, directivity and earthquake interactions. For example, this would help us to understand whether these three earthquakes ruptured overlapping fault patches in a cascading failure, or repeatedly ruptured the same slip patch due to external stress loading. Deciphering these interactions at smaller scales with near-field observations is important for a controlled earthquake experiment.
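A single-pair spectral ratio of the kind described can be sketched as below; practical studies typically use multitaper spectra, so the simple FFT-plus-smoothing version here is only illustrative, with assumed window and smoothing choices.

```python
# Sketch of a single-pair empirical Green's function spectral ratio: divide
# smoothed amplitude spectra of the target event by those of a smaller,
# co-located event recorded at the same station.
import numpy as np

def spectral_ratio(target, egf, dt, smooth_bins=5):
    """target, egf: windowed waveforms at one station; returns (freqs, ratio)."""
    n = min(len(target), len(egf))
    freqs = np.fft.rfftfreq(n, dt)
    spec_t = np.abs(np.fft.rfft(target[:n]))
    spec_e = np.abs(np.fft.rfft(egf[:n]))
    kernel = np.ones(smooth_bins) / smooth_bins   # crude spectral smoothing
    spec_t = np.convolve(spec_t, kernel, mode="same")
    spec_e = np.convolve(spec_e, kernel, mode="same")
    return freqs, spec_t / (spec_e + 1e-12)
```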
The August 2011 Virginia and Colorado Earthquake Sequences: Does Stress Drop Depend on Strain Rate?
NASA Astrophysics Data System (ADS)
Abercrombie, R. E.; Viegas, G.
2011-12-01
Our preliminary analysis of the August 2011 Virginia earthquake sequence finds the earthquakes to have high stress drops, similar to those of recent earthquakes in NE USA, while those of the August 2011 Trinidad, Colorado, earthquakes are moderate - in between those typical of interplate (California) and the east coast. These earthquakes provide an unprecedented opportunity to study such source differences in detail, and hence improve our estimates of seismic hazard. Previously, the lack of well-recorded earthquakes in the eastern USA severely limited our resolution of the source processes and hence the expected ground accelerations. Our preliminary findings are consistent with the idea that earthquake faults strengthen during longer recurrence times and intraplate faults fail at higher stress (and produce higher ground accelerations) than their interplate counterparts. We use the empirical Green's function (EGF) method to calculate source parameters for the Virginia mainshock and three larger aftershocks, and for the Trinidad mainshock and two larger foreshocks using IRIS-available stations. We select time windows around the direct P and S waves at the closest stations and calculate spectral ratios and source time functions using the multi-taper spectral approach (eg. Viegas et al., JGR 2010). Our preliminary results show that the Virginia sequence has high stress drops (~100-200 MPa, using Madariaga (1976) model), and the Colorado sequence has moderate stress drops (~20 MPa). These numbers are consistent with previous work in the regions, for example the Au Sable Forks (2002) earthquake, and the 2010 Germantown (MD) earthquake. We also calculate the radiated seismic energy and find the energy/moment ratio to be high for the Virginia earthquakes, and moderate for the Colorado sequence. We observe no evidence of a breakdown in constant stress drop scaling in this limited number of earthquakes. We extend our analysis to a larger number of earthquakes and stations. We calculate uncertainties in all our measurements, and also consider carefully the effects of variation in available bandwidth in order to improve our constraints on the source parameters.
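The stress-drop calculation implied above, corner frequency to source radius with the Madariaga (1976) constant and then the circular-crack formula, can be sketched as follows; the shear-wave speed and example values are placeholders, not the measured ones.

```python
# Sketch of a corner-frequency-based stress-drop estimate (circular crack).
def stress_drop(moment_nm, corner_freq_hz, beta_m_s=3500.0, k=0.21):
    """k = 0.21 is the Madariaga (1976) S-wave constant; beta is an assumed
    shear-wave speed. Returns stress drop in Pa."""
    radius = k * beta_m_s / corner_freq_hz           # source radius (m)
    return 7.0 * moment_nm / (16.0 * radius ** 3)    # stress drop (Pa)

# e.g. an Mw ~4.5 event (M0 ~ 6e15 N.m) with a 2 Hz corner frequency
print(stress_drop(6e15, 2.0) / 1e6, "MPa")
```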
Automated Determination of Magnitude and Source Length of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.
2017-12-01
Rapid determination of earthquake magnitude is of importance for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate estimate for large earthquakes. However, calculating Mw requires the whole wave train including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods that involve Green's functions and inversions, there are other approaches that use empirically derived relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach that originated from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We introduced a back-projection technique [Wang et al., 2016] instead to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. Firstly, the source duration can be accurately determined by the seismic array. Secondly, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time) depending on the epicentral distances. It may be a promising aid for disaster mitigation right after a damaging earthquake, especially when dealing with tsunami evacuation and emergency rescue.
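A Hara (2007)-style estimator combines the maximum P-wave displacement, the source duration and the epicentral distance; the sketch below uses placeholder coefficients, not Hara's published values.

```python
# Sketch of a Hara (2007)-type magnitude estimator; coefficients a-d are
# placeholders for illustration only, not the published regression values.
import numpy as np

def magnitude_estimate(p_disp_max_m, duration_s, distance_deg,
                       a=0.8, b=0.8, c=0.7, d=6.0):
    return (a * np.log10(p_disp_max_m)
            + b * np.log10(duration_s)
            + c * np.log10(distance_deg)
            + d)
```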
Automated Determination of Magnitude and Source Extent of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, Dun
2017-04-01
Rapid determination of earthquake magnitude is of importance for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time is still a challenge. Mw is an accurate estimate for large earthquakes. However, calculating Mw requires the whole wave train including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for fast estimation of earthquake size. Besides these methods that involve Green's functions and inversions, there are other approaches that use empirically derived relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach that originated from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We introduced a back-projection technique [Wang et al., 2016] instead to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. Firstly, the source duration can be accurately determined by the seismic array. Secondly, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus the source duration time) depending on the epicentral distances. It may be a promising aid for disaster mitigation right after a damaging earthquake, especially when dealing with tsunami evacuation and emergency rescue.
Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
NASA Astrophysics Data System (ADS)
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to a mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures have unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source- and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is the greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong motion records is insufficient to capture all possible ranges of site and propagation path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description as opposed to prediction. We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and the site.
NASA Astrophysics Data System (ADS)
Neely, J. S.; Huang, Y.; Furlong, K.
2017-12-01
Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating a growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process. The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in determining potential earthquake size.
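b-value estimates of the kind used along the SCT are typically obtained with the Aki (1965) maximum-likelihood formula; a minimal sketch with an assumed magnitude bin width follows.

```python
# Sketch of the Aki (1965) maximum-likelihood b-value estimate, with the
# usual correction for magnitudes reported in 0.1-unit bins.
import numpy as np

def b_value(magnitudes, completeness_mag, bin_width=0.1):
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= completeness_mag]
    mean_excess = m.mean() - (completeness_mag - bin_width / 2.0)
    return np.log10(np.e) / mean_excess
```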
Methods for monitoring hydroacoustic events using direct and reflected T waves in the Indian Ocean
NASA Astrophysics Data System (ADS)
Hanson, Jeffrey A.; Bowman, J. Roger
2006-02-01
The recent installation of permanent, three-element hydrophone arrays in the Indian Ocean offshore Diego Garcia and Cape Leeuwin, Australia, provides an opportunity to study hydroacoustic sources in more detail than previously possible. We developed and applied methods for coherent processing of the array data, for automated association of signals detected at more than one array, and for source location using only direct arrivals and using signals reflected from coastlines and other bathymetric features. During the 286-day study, 4725 hydroacoustic events were defined and located in the Indian and Southern oceans. Events fall into two classes: tectonic earthquakes and ice-related noise. The tectonic earthquakes consist of mid-ocean ridge, trench, and intraplate earthquakes. Mid-ocean ridge earthquakes are the most common tectonic events and often occur in clusters along transform offsets. Hydroacoustic signal levels for earthquakes in a standard catalog suggest that the hydroacoustic processing threshold for ridge events is one magnitude below the seismic network. Fewer earthquakes are observed along the Java Trench than expected because the large bathymetric relief of the source region complicates coupling between seismic and hydroacoustic signals, leading to divergent signal characteristics at different stations. We located 1843 events along the Antarctic coast resulting from various ice noises, most likely thermal fracturing and ice ridge forming events. Reflectors of signals from earthquakes are observed along coastlines, the mid-Indian Ocean and Ninety East ridges, and other bathymetric features. Reflected signals are used as synthetic stations to reduce location uncertainty and to enable event location with a single station.
Estimating Source Duration for Moderate and Large Earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei
2017-04-01
To construct a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used an inversion process based on teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with Mw 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process simultaneously determines the source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and the fault plane solutions were estimated. The results show that the estimated t, ranging from 2.7 to 24.9 sec, varies with the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between them is M0 = 0.76*10**23*(t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived in this study is very close to those determined from global moderate to large earthquakes. To further examine the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake with M0 = 2-5*10**27 dyne-cm (corresponding to Mw = 7.5-7.7) to be approximately 29-40 sec, in agreement with the source durations (28-42 sec) reported by many previous studies.
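For reference, the reported relation M0 = 0.76*10**23*(t)**3 (M0 in dyne-cm, t in seconds) can be inverted for duration. The short sketch below simply applies that arithmetic to the Chi-Chi moment range quoted above; it is not part of the original inversion code.

```python
def duration_from_moment(m0_dyne_cm, coeff=0.76e23):
    """Source duration t (s) from M0 = coeff * t**3 (M0 in dyne-cm)."""
    return (m0_dyne_cm / coeff) ** (1.0 / 3.0)

# 1999 Chi-Chi earthquake: M0 = 2-5 x 10**27 dyne-cm (Mw ~ 7.5-7.7)
for m0 in (2e27, 5e27):
    print(f"M0 = {m0:.1e} dyne-cm -> t ~ {duration_from_moment(m0):.0f} s")
# Prints roughly 30 s and 40 s, consistent with the 29-40 s quoted above.
```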
NASA Astrophysics Data System (ADS)
The past 2 decades have seen substantial progress in our understanding of the nature of the earthquake faulting process, but increasingly, the subject has become an interdisciplinary one. Thus, although the observation of radiated seismic waves remains the primary tool for studying earthquakes (and has been increasingly focused on extracting the physical processes occurring in the “source”), geological studies have also begun to play a more important role in understanding the faulting process. Additionally, defining the physical underpinning for these phenomena has come to be an important subject in experimental and theoretical rock mechanics. In recognition of this, a Maurice Ewing Symposium was held at Arden House, Harriman, N.Y. (the former home of the great American statesman Averill Harriman), May 20-23, 1985. The purpose of the meeting was to bring together the international community of experimentalists, theoreticians, and observationalists who are engaged in the study of various aspects of earthquake source mechanics. The conference was attended by more than 60 scientists from nine countries (France, Italy, Japan, Poland, China, the United Kingdom, United States, Soviet Union, and the Federal Republic of Germany).
Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms
NASA Astrophysics Data System (ADS)
Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng
2013-04-01
Inverting seismic waveforms for finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also important for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using the accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point-source parameters for the mainshock and aftershocks were first obtained by complete waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using a projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial pattern of the apparent source time function durations, which depends on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with theoretical patterns, which are functions of focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalogs to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. The preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed in this study can be applied to strong motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
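A minimal forward model for the azimuthal pattern of ASTF durations is the classical unilateral-rupture (Haskell-type) directivity relation, in which the apparent duration shortens in the rupture direction. The sketch below is a generic illustration with hypothetical parameter values; the study itself compares full theoretical patterns that also depend on focal depth, epicentral distance, and the 1D velocity model.

```python
import numpy as np

def apparent_duration(rupture_len_km, vr_km_s, c_km_s, angle_deg):
    """Apparent source duration for a unilateral rupture.

    angle_deg is the angle between the rupture direction and the ray leaving
    the source (a function of station azimuth and take-off angle).
    """
    theta = np.radians(angle_deg)
    return (rupture_len_km / vr_km_s) * (1.0 - (vr_km_s / c_km_s) * np.cos(theta))

# Hypothetical Mw ~ 5.6 strike-slip rupture: L = 5 km, vr = 2.5 km/s, S-wave speed 3.5 km/s
for ang in (0, 45, 90, 135, 180):
    t = apparent_duration(5.0, 2.5, 3.5, ang)
    print(f"angle {ang:3d} deg -> apparent duration {t:.2f} s")
```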
Multi-Sensor Data Fusion Project
2000-02-28
seismic network by detecting T phases generated by underground events (generally earthquakes) and associating these phases to seismic events. The ... between underwater explosions (H), underground sources, mostly earthquake-generated (7), and noise detections (N). The phases classified as H are the only ... processing for infrasound sensors is most similar to seismic array processing, with the exception that the detections are based on a more sophisticated
Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform
NASA Astrophysics Data System (ADS)
Wang, Y.; Ni, S.; Chen, W.
2012-12-01
Source parameters of earthquakes are an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of the fault planes) is significant for both rupture dynamics and ground-motion prediction or simulation. Detailed kinematic studies of the rupture process, especially for moderate and large earthquakes, have become routine work for seismologists. Among these events, however, some behave very specially and intrigue seismologists. These earthquakes usually consist of two similar-sized sub-events that occur within a very short time interval, such as the mb 4.5 earthquake of December 9, 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, helps in understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which makes the inversion difficult. For ordinary events, the Cut-and-Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and it resolves fault orientation and focal depth with a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, the revised code achieves speedup factors of 40x-90x on the combined GPU-CPU architecture compared to the classical CAP on a traditional CPU architecture. As a benchmark, we took synthetics as observations and inverted for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the December 9, 2003 Virginia event, we re-inverted the source parameters; detailed analysis of the regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic and requires no human intervention.
NASA Astrophysics Data System (ADS)
Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi
2018-03-01
Small earthquakes occur due to natural tectonic motions and are induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.
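Because the forward problem for a point-source moment tensor is linear in the six independent tensor components (d = G m), a Bayesian solution can be sampled with a simple Metropolis random walk. The sketch below uses a synthetic Green's-function matrix and Gaussian noise purely for illustration; it does not reproduce the authors' treatment of location and velocity-model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear forward problem: data = G @ m_true + noise
n_data, n_par = 60, 6                 # 6 independent moment-tensor components
G = rng.normal(size=(n_data, n_par))  # stand-in for Green's-function derivatives
m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, -0.2])  # normalized MT components
sigma = 0.05
d_obs = G @ m_true + rng.normal(scale=sigma, size=n_data)

def log_likelihood(m):
    r = d_obs - G @ m
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk over the six components (broad flat prior assumed)
n_iter, step = 20000, 0.02
m = np.zeros(n_par)
ll = log_likelihood(m)
samples = []
for _ in range(n_iter):
    m_prop = m + rng.normal(scale=step, size=n_par)
    ll_prop = log_likelihood(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
        m, ll = m_prop, ll_prop
    samples.append(m.copy())

post = np.array(samples[n_iter // 2:])         # discard burn-in
print("posterior mean:", np.round(post.mean(axis=0), 2))
print("posterior std :", np.round(post.std(axis=0), 3))
```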
NASA Astrophysics Data System (ADS)
Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.
2017-12-01
Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE are different from those of regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency. The former characterizes the earthquake's size while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M0 ~ 10^13 N.m (Mw ~ 2.6) and fc ~ 2 Hz. The moment-corner frequency distribution, compared to values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ~ M0^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ~ M0^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ~ M0^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities. LFE thus do not exhibit the self-similarity characteristic of regular earthquakes, strongly suggesting that the physical mechanisms at their origin are different. Moreover, the agreement with the size-duration scaling for both tectonic and volcanic LFE might indicate a similarity in their source behavior.
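The contrast between the LFE scaling fc ~ M0^(-1/10) and the regular-earthquake scaling fc ~ M0^(-1/3) can be made concrete with a two-line calculation anchored at the characteristic values quoted above (M0 ~ 10^13 N.m, fc ~ 2 Hz); the anchor point and moment range below are purely illustrative.

```python
m0_ref, fc_ref = 1e13, 2.0          # reference LFE: M0 in N*m, fc in Hz

def fc_scaling(m0, exponent):
    """Corner frequency predicted by fc proportional to M0**exponent through the reference point."""
    return fc_ref * (m0 / m0_ref) ** exponent

for m0 in (1e11, 1e12, 1e13, 1e14):
    print(f"M0 = {m0:.0e} N*m: LFE-like fc ~ {fc_scaling(m0, -1/10):.2f} Hz, "
          f"regular-earthquake fc ~ {fc_scaling(m0, -1/3):.2f} Hz")
```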
A new source process for evolving repetitious earthquakes at Ngauruhoe volcano, New Zealand
NASA Astrophysics Data System (ADS)
Jolly, A. D.; Neuberg, J.; Jousset, P.; Sherburn, S.
2012-02-01
Since early 2005, Ngauruhoe volcano has produced repeating low-frequency earthquakes with evolving waveforms and spectral features which become progressively enriched in higher frequency energy during the period 2005 to 2009, with the trend reversing after that time. The earthquakes also show a seasonal cycle since January 2006, with peak numbers of events occurring in the spring and summer period and lower numbers of events at other times. We explain these patterns by the excitation of a shallow two-phase water/gas or water/steam cavity having temporal variations in volume fraction of bubbles. Such variations in two-phase systems are known to produce a large range of acoustic velocities (2-300 m/s) and corresponding changes in impedance contrast. We suggest that an increasing bubble volume fraction is caused by progressive heating of melt water in the resonant cavity system which, in turn, promotes the scattering excitation of higher frequencies, explaining both spectral shift and seasonal dependence. We have conducted a constrained waveform inversion and grid search for moment, position and source geometry for the onset of two example earthquakes occurring 17 and 19 January 2008, a time when events showed a frequency enrichment episode occurring over a period of a few days. The inversion and associated error analysis, in conjunction with an earthquake phase analysis show that the two earthquakes represent an excitation of a single source position and geometry. The observed spectral changes from a stationary earthquake source and geometry suggest that an evolution in both near source resonance and scattering is occurring over periods from days to months.
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
Rupture, waves and earthquakes.
Uenishi, Koji
2017-01-01
Normally, an earthquake is considered a phenomenon of wave energy radiation by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not been fully understood yet. Instead, much of the former investigation in seismology evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain with this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake, and more detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would be indispensable.
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
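Under the Poisson assumption mentioned above, tsunami probabilities from independent sources aggregate simply: the probability of at least one event in exposure time T is 1 - exp(-sum(lambda_i * T)). The sketch below works through that arithmetic with hypothetical annual rates; the rates and source names are invented for illustration and are not taken from the study.

```python
import math

# Hypothetical mean annual rates of tsunamigenic events exceeding a given
# runup threshold, one entry per independent source zone.
annual_rates = {
    "plate-boundary source A": 1.0e-3,
    "plate-boundary source B": 5.0e-4,
    "local landslide source":  2.0e-4,
}

def poisson_probability(rates, years):
    """P(at least one event in `years`) for independent Poisson sources."""
    total_rate = sum(rates.values())
    return 1.0 - math.exp(-total_rate * years)

for horizon in (50, 100, 500):
    p = poisson_probability(annual_rates, horizon)
    print(f"{horizon:4d}-year exposure: P(>=1 event) = {p:.3f}")
```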
NASA Astrophysics Data System (ADS)
Meng, L.; Shi, B.
2011-12-01
The New Zealand earthquake of February 21, 2011, Mw 6.1, occurred in the South Island, New Zealand, with an epicenter at longitude 172.70°E, latitude 43.58°S, and a depth of 5 km. The earthquake occurred on a previously unknown blind fault involving oblique-thrust faulting, about 9 km south of Christchurch, the third largest city of New Zealand, with an east-west striking direction (United States Geological Survey, USGS, 2011). The earthquake killed at least 163 people and caused extensive damage to structures in Christchurch. The peak ground acceleration (PGA) observed at station Heathcote Valley Primary School (HVSC), 1 km from the epicenter, reached almost 2.0 g. The ground-motion observations suggest that this buried source generated much stronger near-fault ground motion than expected. In this study, we analyzed the earthquake source spectral parameters based on the strong-motion observations and estimated the near-fault ground motion based on Brune's circular fault model. The results indicate that the large ground motion may be caused by a high dynamic stress drop, Δσd (the effective stress drop in Brune's terminology), in the main source rupture region. In addition, a dynamical composite source model (DCSM) has been developed to simulate the near-fault strong ground motion with the associated fault rupture properties from a kinematic point of view. For comparison, we also conducted broadband ground-motion predictions for station HVSC; the synthetic time histories for this station agree well with the observations in waveform, peak values, and frequency content, which indicates that the high dynamic stress drop during the fault rupture may play an important role in the anomalous ground-motion amplification. The synthetic seismograms have a realistic appearance in waveform and duration compared with the observations, especially for the vertical component, and the synthetic Fourier spectra are reasonably similar to the recordings. The simulated PGA values of the vertical and S26W components are consistent with the recordings, whereas for the S64E component the simulated PGA is smaller than observed. The Fourier spectra of the synthetics and observations are similar for all three components of the acceleration time histories, except that above 10 Hz the synthetic spectrum of the vertical component is smaller than observed. Both the theoretical study and the numerical simulation indicate that, for the 2011 Mw 6.1 New Zealand earthquake, the high dynamic stress drop during the source rupture process could play an important role in the anomalous ground-motion amplification, in addition to other site-related seismic effects. Composite source modeling based on the simple Brune pulse model can provide good insight into the source-related rupture processes of a moderate-sized earthquake.
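A compact way to connect corner frequency, source radius, and stress drop in the Brune circular-fault model mentioned above is via fc = 0.37*beta/r and dSigma = 7*M0/(16*r**3). The sketch below applies those standard relations to hypothetical values for an Mw 6.1 event; it is a generic illustration, not the authors' composite-source code, and the chosen corner frequency is an assumption.

```python
def mw_to_m0(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def brune_source(mw, fc_hz, beta_m_s=3500.0):
    """Source radius (m) and stress drop (Pa) from the Brune circular-fault model."""
    m0 = mw_to_m0(mw)
    radius = 0.37 * beta_m_s / fc_hz          # fc ~ 0.37 * beta / r
    stress_drop = 7.0 * m0 / (16.0 * radius ** 3)
    return radius, stress_drop

# Hypothetical numbers for an Mw 6.1 event with an assumed 0.3 Hz corner frequency
r, dsigma = brune_source(6.1, 0.3)
print(f"source radius ~ {r / 1e3:.1f} km, stress drop ~ {dsigma / 1e6:.1f} MPa")
```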
Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise
NASA Astrophysics Data System (ADS)
Scott, Chelsea Phipps
Spatially dense maps of coseismic deformation derived from Interferometric Synthetic Aperture Radar (InSAR) datasets result in valuable constraints on earthquake processes. The recent increase in the quantity of observations of coseismic deformation facilitates the examination of signals in many tectonic environments associated with earthquakes of varying magnitude. Efforts to place robust constraints on the evolution of the crustal stress field following great earthquakes often rely on knowledge of the earthquake location, the fault geometry, and the distribution of slip along the fault plane. Well-characterized uncertainties and biases strengthen the quality of inferred earthquake source parameters, particularly when the associated ground displacement signals are near the detection limit. Well-preserved geomorphic records of earthquakes offer additional insight into the mechanical behavior of the shallow crust and the kinematics of plate boundary systems. Together, geodetic and geologic observations of crustal deformation offer insight into the processes that drive seismic cycle deformation over a range of timescales. In this thesis, I examine several challenges associated with the inversion of earthquake source parameters from SAR data. Variations in atmospheric humidity, temperature, and pressure at the timing of SAR acquisitions result in spatially correlated phase delays that are challenging to distinguish from signals of real ground deformation. I characterize the impact of atmospheric noise on inferred earthquake source parameters following elevation-dependent atmospheric corrections. I analyze the spatial and temporal variations in the statistics of atmospheric noise from both reanalysis weather models and InSAR data itself. Using statistics that reflect the spatial heterogeneity of atmospheric characteristics, I examine parameter errors for several synthetic cases of fault slip on a basin-bounding normal fault. I show a decrease in uncertainty in fault geometry and kinematics following the application of atmospheric corrections to an event spanned by real InSAR data, the 1992 M5.6 Little Skull Mountain, Nevada, earthquake. Finally, I discuss how the derived workflow could be applied to other tectonic problems, such as solving for interseismic strain accumulation rates in a subduction zone environment. I also study the evolution of the crustal stress field in the South American plate following two recent great earthquakes along the Nazca- South America subduction zone. I show that the 2010 Mw 8.8 Maule, Chile, earthquake very likely triggered several moderate magnitude earthquakes in the Andean volcanic arc and backarc. This suggests that great earthquakes modulate the crustal stress field outside of the immediate aftershock zone and that far-field faults may pose a heightened hazard following large subduction earthquakes. The 2014 Mw 8.1 Pisagua, Chile, earthquake reopened ancient surface cracks that have been preserved in the hyperarid forearc setting of northern Chile for thousands of earthquake cycles. The orientation of cracks reopened in this event reflects the static and likely dynamic stresses generated by the recent earthquake. Coseismic cracks serve as a reliable marker of permanent earthquake deformation and plate boundary behavior persistent over the million-year timescale. 
This work on great earthquakes suggests that InSAR observations can play a crucial role in furthering our understanding of the crustal mechanics that drive seismic cycle processes in subduction zones.
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite-fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension in order to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake
NASA Astrophysics Data System (ADS)
Park, S. C.; Lee, J. W.; Park, E.; Kim, S.
2014-12-01
In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the earthquake source process of the 2013 Santa Cruz earthquake and simulated the ensuing tsunami. We first estimated a fault length of about 200 km using the 3-day aftershock distribution and a source duration of about 110 seconds using the duration of high-frequency energy radiation (Hara, 2007). The moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From the results of 200 km fault length and 110 seconds source duration, we used an initial rupture velocity of 1.8 km/s for the teleseismic waveform inversions. The teleseismic body wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005-1 Hz was applied. Our best-fit solution indicates that the earthquake occurred on a northwesterly striking (strike = 305) and shallowly dipping (dip = 13) fault plane. The focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the result obtained above and than that of a previous study (Lay et al., 2013). A large slip area is seen around the hypocenter. Using the slip distribution obtained by the teleseismic waveform inversion, we calculated the surface deformation with the formulas of Okada (1985), taking it as the initial sea-surface displacement of the tsunami. The tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1 arc-minute grid bathymetry data from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most of the tsunami waves propagated to the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss the results of the source process and tsunami simulation in more detail and compare them with the previous study.
Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan
NASA Astrophysics Data System (ADS)
Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.
2017-12-01
An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which we hereafter call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves, with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that it can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached the peak amplitude (1.5-2.0 cm) of the leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. To model its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted with the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we found that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal like that observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms. The results yield two implications for the deformation process that help us understand the physical mechanism of the 2015 Torishima earthquake. First, the estimated large uplift within Smith Caldera implies the earthquake may be related to volcanic activity of the caldera. Second, the modeled ring of subsidence surrounding the caldera suggests that the process may have included notable subsidence, at least on the northeastern side outside the caldera.
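The preferred source geometry described above (a central uplift of about 1 m and 4 km radius ringed by minor subsidence) is straightforward to express as an initial sea-surface displacement grid for a tsunami code. The sketch below builds such a grid with numpy; the grid spacing, ring amplitude, and ring width are hypothetical illustration values, not the study's preferred model parameters.

```python
import numpy as np

def caldera_source(nx=201, ny=201, dx_km=0.5,
                   uplift_m=1.0, uplift_radius_km=4.0,
                   ring_m=-0.1, ring_radius_km=7.0, ring_width_km=2.0):
    """Initial sea-surface displacement: Gaussian uplift plus a subsidence ring."""
    x = (np.arange(nx) - nx // 2) * dx_km
    y = (np.arange(ny) - ny // 2) * dx_km
    xx, yy = np.meshgrid(x, y)
    r = np.sqrt(xx ** 2 + yy ** 2)
    uplift = uplift_m * np.exp(-(r / uplift_radius_km) ** 2)
    ring = ring_m * np.exp(-((r - ring_radius_km) / ring_width_km) ** 2)
    return uplift + ring

eta0 = caldera_source()
print(f"max uplift {eta0.max():.2f} m, max subsidence {eta0.min():.2f} m")
```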
NASA Astrophysics Data System (ADS)
Bydlon, S. A.; Dunham, E. M.
2016-12-01
Recent increases in seismic activity in historically quiescent areas such as Oklahoma, Texas, and Arkansas, including large, potentially induced events such as the 2011 Mw 5.6 Prague, OK, earthquake, have spurred the need to investigate the expected ground motions associated with these seismic sources. The recent nature of this seismicity increase corresponds to a scarcity of ground motion recordings within 50 km of earthquakes Mw 3.0 and greater, with increasing scarcity at larger magnitudes. Gathering additional near-source ground motion data will help better constrain regional ground motion prediction equations (GMPEs) and will happen over time, but this leaves open the possibility of damaging earthquakes occurring before the potential ground shaking and seismic hazard in these areas are properly understood. To aid the effort of constraining near-source GMPEs associated with induced seismicity, we integrate synthetic ground motion data from simulated earthquakes into the process. Using the dynamic rupture and seismic wave propagation code waveqlab3d, we perform verification and validation exercises intended to establish confidence in simulated ground motions for use in constraining GMPEs. We verify the accuracy of our ground motion simulator with the PEER/SCEC layer-over-halfspace comparison problem LOH.1. Validation exercises to ensure that we are synthesizing realistic ground motion data include comparisons to recorded ground motions for specific earthquakes between Mw 3.0 and 4.0 in target areas of Oklahoma. Using a 3D velocity structure that consists of a 1D structure with additional small-scale heterogeneity, the properties of which are based on well-log data from Oklahoma, we perform ground motion simulations of small (Mw 3.0-4.0) earthquakes using point moment tensor sources. We use the resulting synthetic ground motion data to develop GMPEs for small earthquakes in Oklahoma. Preliminary results indicate that ground motions can be amplified if the source is located in the shallow sedimentary sequence rather than the basement. Source depth could therefore be an important variable to define explicitly in GMPEs instead of being incorporated into traditional distance metrics. Future work will include the addition of dynamic sources to develop GMPEs for large earthquakes.
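A minimal version of the final step, fitting a ground-motion prediction equation to synthetic (or recorded) peak amplitudes, is an ordinary least-squares problem of the form ln(PGA) = a + b*M + c*ln(R). The sketch below fits that simple functional form to fabricated data; the functional form, coefficient values, and data are illustrative assumptions, not the study's GMPE.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated "observations": magnitudes, hypocentral distances (km), PGA (g)
n = 400
mag = rng.uniform(3.0, 4.0, n)
dist = rng.uniform(2.0, 50.0, n)
true_a, true_b, true_c = -6.0, 1.6, -1.3
ln_pga = true_a + true_b * mag + true_c * np.log(dist) + rng.normal(0, 0.5, n)

# Least-squares fit of ln(PGA) = a + b*M + c*ln(R)
A = np.column_stack([np.ones(n), mag, np.log(dist)])
coeffs, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
a, b, c = coeffs
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")

# Predicted median PGA for an Mw 3.5 event at 10 km hypocentral distance
print(f"median PGA ~ {np.exp(a + b * 3.5 + c * np.log(10.0)):.4f} g")
```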
NASA Astrophysics Data System (ADS)
Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.
Investigations of the interdependence of different source parameters are an important task for gaining more insight into the mechanics and dynamics of earthquake rupture, for modelling source processes, and for making predictions of ground motion at the surface. The interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia region, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters, such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time r, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also observations in the near field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine by the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters, and statistical interpretation of the results will be presented by example. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.
NASA Astrophysics Data System (ADS)
Meng, L.; Zang, Y.; Zhou, L.; Han, Y.
2017-12-01
The Mw 7.8 New Zealand earthquake of 2016 occurred near the Kaikoura area in the South Island, New Zealand, with an epicenter at 173.13°E, 42.78°S. The Kaikoura earthquake occurred on transform boundary faults between the Pacific and Australian plates and had a thrust focal mechanism solution. It is a complex event because of the striking disparity between its magnitude, seismic moment, and radiated energy on the one hand and the casualties on the other: only two people were killed, about twenty were injured, and fewer than twenty buildings were destroyed, a damage level that is not severe considering the large magnitude. We analyzed the rupture process using the source parameters and confirmed that the radiated energy and apparent stress of the Kaikoura earthquake are small. The results indicate frictional overshoot behavior in the dynamic source process, consistent with thorough rupture and abundant moderate aftershocks. The observed horizontal peak ground accelerations (PGAs) of the strong ground motion are also generally small compared with the Next Generation Attenuation relationships. We further studied the horizontal PGAs at the six near-fault stations located less than 10 km from the main fault. The relatively high ground motions at these stations may be produced by the larger slip around the asperity area rather than by the initial rupture position on the main fault plane. The large surface displacement on the northern part of the rupture plane explains why aftershocks are concentrated in the north and why there was more damage in Wellington than in Christchurch, even though Christchurch is closer, just south of the epicenter. In conclusion, the modest damage caused by the Kaikoura earthquake probably reflects the relatively small strong ground motions and the sparse population in the near-fault area where the surface rupture was most severe. This work is supported by the Natural Science Foundation of China (No. 41404045).
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
NASA Astrophysics Data System (ADS)
Munafo, I.; Malagnini, L.; Tinti, E.; Chiaraluce, L.; Di Stefano, R.; Valoroso, L.
2014-12-01
The Alto Tiberina Fault (ATF) is a 60 km long east-dipping low-angle normal fault located in a sector of the Northern Apennines (Italy) undergoing active extension since the Quaternary. The ATF has been imaged by analyzing active-source seismic reflection profiles and the instrumentally recorded persistent background seismicity. The present study is an attempt to separate the contributions of source, site, and crustal attenuation, in order to focus on the mechanics of the seismic sources on the ATF, as well as on the synthetic and antithetic structures within the ATF hanging wall (i.e. the Colfiorito, Gubbio and Umbria Valley faults). In order to compute source spectra, we perform a set of regressions over the seismograms of 2000 small earthquakes (-0.8 < ML < 4) recorded between 2010 and 2014 at 50 permanent seismic stations deployed in the framework of the Alto Tiberina Near Fault Observatory project (TABOO) and equipped with three-component seismometers, three of which are located in shallow boreholes. Because we deal with some very small earthquakes, we maximize the signal-to-noise ratio (SNR) with a technique based on the analysis of peak values of bandpass-filtered time histories, in addition to the same processing performed on Fourier amplitudes. We rely on Random Vibration Theory (RVT) to switch from peak values in the time domain to Fourier spectral amplitudes. The low-frequency spectral plateaus of the source terms are used to compute moment magnitudes (Mw) of all the events, whereas a source spectral ratio technique is used to estimate the corner frequencies (Brune spectral model) of a subset of events chosen through analysis of the noise affecting the spectral ratios. So far, the described approach provides high accuracy for the spectral parameters of the localized seismicity and may be used to gain insight into the underlying mechanics of faulting and the earthquake processes.
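The source spectral ratio step can be illustrated with the standard Brune-model ratio of two co-located events, R(f) = (M01/M02) * (1 + (f/fc2)**2) / (1 + (f/fc1)**2), fitted by nonlinear least squares. The sketch below fits a synthetic ratio with scipy; the moments, corner frequencies, and noise level are invented for illustration, and the real analysis additionally involves the RVT-based peak-amplitude processing described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Spectral ratio of two Brune sources sharing path and site terms."""
    return moment_ratio * (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)

# Synthetic ratio: target event (fc1 = 2 Hz) over a smaller reference event (fc2 = 10 Hz)
f = np.logspace(-0.5, 1.5, 80)                  # roughly 0.3-30 Hz
rng = np.random.default_rng(3)
obs = brune_ratio(f, 50.0, 2.0, 10.0) * np.exp(rng.normal(0, 0.05, f.size))

popt, pcov = curve_fit(brune_ratio, f, obs, p0=[10.0, 1.0, 5.0])
perr = np.sqrt(np.diag(pcov))
print("moment ratio, fc1, fc2:", np.round(popt, 2))
print("1-sigma uncertainties :", np.round(perr, 2))
```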
Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling
NASA Astrophysics Data System (ADS)
Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea
2017-12-01
The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large taking into account the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties for repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as well as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
Non-double-couple earthquakes. 1. Theory
Julian, B.R.; Miller, A.D.; Foulger, G.R.
1998-01-01
Historically, most quantitative seismological analyses have been based on the assumption that earthquakes are caused by shear faulting, for which the equivalent force system in an isotropic medium is a pair of force couples with no net torque (a 'double couple,' or DC). Observations of increasing quality and coverage, however, now resolve departures from the DC model for many earthquakes and find some earthquakes, especially in volcanic and geothermal areas, that have strongly non-DC mechanisms. Understanding non-DC earthquakes is important both for studying the process of faulting in detail and for identifying nonshear-faulting processes that apparently occur in some earthquakes. This paper summarizes the theory of 'moment tensor' expansions of equivalent-force systems and analyzes many possible physical non-DC earthquake processes. Contrary to long-standing assumption, sources within the Earth can sometimes have net force and torque components, described by first-rank and asymmetric second-rank moment tensors, which must be included in analyses of landslides and some volcanic phenomena. Non-DC processes that lead to conventional (symmetric second-rank) moment tensors include geometrically complex shear faulting, tensile faulting, shear faulting in an anisotropic medium, shear faulting in a heterogeneous region (e.g., near an interface), and polymorphic phase transformations. Undoubtedly, many non-DC earthquake processes remain to be discovered. Progress will be facilitated by experimental studies that use wave amplitudes, amplitude ratios, and complete waveforms in addition to wave polarities and thus avoid arbitrary assumptions such as the absence of volume changes or the temporal similarity of different moment tensor components.
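The moment-tensor expansion discussed above is often summarized by splitting a symmetric moment tensor into isotropic, double-couple, and CLVD parts. The short sketch below follows one common convention for the percentages (e.g., Jost and Herrmann, 1989); the example tensor is arbitrary, and other decomposition conventions exist that give somewhat different numbers.

```python
import numpy as np

def decompose_moment_tensor(m):
    """ISO / DC / CLVD percentages of a symmetric moment tensor.

    One common convention: iso part from the trace, CLVD parameter from the
    ratio of the smallest- to largest-magnitude deviatoric eigenvalues.
    """
    m = np.asarray(m, dtype=float)
    iso = np.trace(m) / 3.0
    dev = m - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)                  # deviatoric eigenvalues
    eig = eig[np.argsort(np.abs(eig))]             # sort by absolute value
    eps = -eig[0] / abs(eig[2]) if eig[2] != 0 else 0.0
    p_iso = 100.0 * abs(iso) / (abs(iso) + abs(eig[2]))
    p_clvd = 2.0 * abs(eps) * (100.0 - p_iso)
    p_dc = 100.0 - p_iso - p_clvd
    return p_iso, p_dc, p_clvd

# Arbitrary example tensor (units cancel in the percentages)
M = np.array([[ 1.0,  0.2,  0.0],
              [ 0.2, -0.6,  0.1],
              [ 0.0,  0.1, -0.2]])
print("ISO/DC/CLVD percentages:", np.round(decompose_moment_tensor(M), 1))
```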
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω-γ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in fluid pressure diffusion.
Scaling Relations of Earthquakes on Inland Active Mega-Fault Systems
NASA Astrophysics Data System (ADS)
Murotani, S.; Matsushima, S.; Azuma, T.; Irikura, K.; Kitagawa, S.
2010-12-01
Since 2005, the Headquarters for Earthquake Research Promotion (HERP) has been publishing 'National Seismic Hazard Maps for Japan' to provide useful information for disaster prevention countermeasures for the country and local public agencies, as well as to promote public awareness of earthquake disaster prevention. In the course of making the 2009 version of the map, which commemorates the tenth anniversary of the settlement of the Comprehensive Basic Policy, the methods to evaluate earthquake magnitude, to predict strong ground motion, and to construct underground structure were investigated in the Earthquake Research Committee and its subcommittees. In order to predict the magnitude of earthquakes occurring on mega-fault systems, we examined the scaling relations for mega-fault systems using 11 earthquakes whose source processes were analyzed by waveform inversion and whose surface ruptures were investigated. As a result, we found that the data fit between the scaling relations of seismic moment and rupture area by Somerville et al. (1999) and Irikura and Miyake (2001). We also found that the maximum displacement of the surface rupture is two to three times larger than the average slip on the seismic fault and that the surface fault length is equal to the length of the source fault. Furthermore, the compiled source-fault data show that displacement saturates at 10 m when the fault length (L) exceeds 100 km. Assuming an average fault width (W) of 18 km for inland earthquakes in Japan and displacement saturation at 10 m for lengths of more than 100 km, we derived a new scaling relation between source area and seismic moment, S [km^2] = 1.0 x 10^-17 M0 [Nm], for mega-fault systems whose seismic moment (M0) exceeds 1.8 x 10^20 Nm.
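The new relation S [km^2] = 1.0 x 10^-17 M0 [Nm] lends itself to a quick consistency check: the threshold moment of 1.8 x 10^20 Nm gives S = 1800 km^2, i.e. a 100-km-long fault for the assumed 18 km width. The sketch below just runs that arithmetic (plus the standard Mw conversion) for a few moments; it adds nothing beyond the relation quoted above.

```python
import numpy as np

def area_from_moment(m0_nm):
    """Rupture area (km^2) from the mega-fault scaling S = 1.0e-17 * M0 (M0 in N*m)."""
    return 1.0e-17 * m0_nm

def mw_from_moment(m0_nm):
    return (np.log10(m0_nm) - 9.1) / 1.5

width_km = 18.0   # assumed saturated fault width for Japanese inland earthquakes
for m0 in (1.8e20, 5e20, 1e21):
    area = area_from_moment(m0)
    print(f"M0 = {m0:.1e} N*m (Mw {mw_from_moment(m0):.1f}): "
          f"S = {area:.0f} km^2, L = {area / width_km:.0f} km")
```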
Source spectral properties of small-to-moderate earthquakes in southern Kansas
Trugman, Daniel T.; Dougherty, Sara L.; Cochran, Elizabeth S.; Shearer, Peter M.
2017-01-01
The source spectral properties of injection-induced earthquakes give insight into their nucleation, rupture processes, and influence on ground motion. Here we apply a spectral decomposition approach to analyze P-wave spectra and estimate Brune-type stress drops for more than 2000 ML 1.5–5.2 earthquakes occurring in southern Kansas from 2014 to 2016. We find that these earthquakes are characterized by low stress drop values (median ∼0.4 MPa) compared to natural seismicity in California. We observe a significant increase in stress drop as a function of depth, but the shallow depth distribution of these events is not by itself sufficient to explain their lower stress drop. Stress drop increases with magnitude from M 1.5 to M 3.5, but this scaling trend may weaken above M 4 and also depends on the assumed source model. Although we observe a nonstationary, sequence-specific temporal evolution in stress drop, we find no clear systematic relation with the activity of nearby injection wells.
Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view
NASA Astrophysics Data System (ADS)
Locati, Mario; Rovida, Andrea; Albini, Paola
2014-05-01
Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to do their daily job using general-purpose tools and/or by coding their own to address their specific tasks. The lack of out-of-the-box tools expressly conceived for historical data leads to a huge amount of time lost in tedious tasks: searching for the data and manually reformatting it to move from one tool to another, sometimes losing the original data in the process. This situation is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies, and the developed side tools are presented.
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes - Mw 4.5 on 03/21/2015 and Mw 4.1 on 08/18/2015 - were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN and were widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations, close to the oil/gas fields in Kuwait. The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures built without seismic design criteria.
Impact of Earthquake Preperation Process On Hydrodeformation Field Evolution In The Caucasus
NASA Astrophysics Data System (ADS)
Melikadze, G.; Aliev, A.; Bendukidze, G.; Biagi, P. F.; Garalov, B.; Mirianashvili, V.
The paper studies the relation between geodeformation regime variations of underground water observed in boreholes and deformation processes in the Earth's crust associated with the formation of earthquakes with M=3 and higher. Monitoring of the hydrogeodeformation field (HGDF) has been carried out thanks to the dedicated general network of Armenia, Azerbaijan, Georgia and Russia. The wells are uniformly distributed throughout the Caucasus and cover all principal geological blocks of the region. The paper deals with results associated with several earthquakes that occurred in Georgia and one in Azerbaijan. As the network comprises boreholes of different depths, varying from 250 m down to 3,500 m, preliminary calibration of the boreholes involved was carried out, based on evaluation of the water level variation due to the known Earth tide effect. This was necessary for sensitivity evaluation and normalization of hydrodynamic signals. The obtained data have been processed by means of spectral analysis to separate the background field of disturbances from the valid signal. The processed data covered the period 1991-1993, comprising the following four strong earthquakes of the Caucasus: Racha (1991, M=6.9), Java (1991, M=6.2), Barisakho (1992, M=6.5) and Talish (1993, M=5.6). Formation of a compression zone in the eastern Caucasus and an extension zone in western Georgia and the northern Caucasus was observed 7 months prior to the Racha quake. The boundary between these two zones passed along the known submeridional fault. The area where the maximal gradient was observed coincided with the junction of deep faults and appeared to be the place of origination of the earthquake. After the quake occurred, the zone of maximal gradient started to migrate towards the east, and residual deformations in the HGDF outlined the source first of the Java quake (on 15.06.1991), then that of Barisakho (on 23.10.1992) and Talish (on 2.10.1993). Thus, the HGDF indicated migration of the deformation field along the slope of the Greater Caucasus 7 months prior to the earthquake. After these reassuring results we increased the density of the network, which made it possible to observe the migration process along the Achara-Trialeti fault system prior to smaller earthquakes as well (Achara, 10.02.1996, M=2.9; Guria, 28.05.1996, M=4.3; Javakheti, 28.05.1997, M=3.3; Khashmi, 28.11.97, M=5.1). Directly prior to a quake, when deformation reached its critical value, the natural regime characteristic of each observation point was disturbed. The period of perturbation (varying from several hours up to several days) depended on the magnitude of the earthquake. Disturbances showed themselves within the area of maximal stress, sometimes at a distance from the source; e.g., in the case of the Racha earthquake, perturbations were observed first at the Lisi point (on 15.03.1991), at a distance of 190 km from the earthquake source but within the area of maximal compression, and only 25 days later at the Oni point (on 10.04.1991) located in the source zone. This can be considered one of the examples of long-distance manifestation of hydrodynamic precursors. The obtained results verify the capability of the existing HGDF observation network for tracing precursors of M=3 and higher earthquakes in the Caucasus and, if the network is modernized, it will be capable of medium-term prediction of large earthquakes.
A phase coherence approach to identifying co-located earthquakes and tremor
NASA Astrophysics Data System (ADS)
Hawthorne, J. C.; Ampuero, J.-P.
2018-05-01
We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources is small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy—potentially foreshocks or groups of foreshocks—within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
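The core of the approach can be illustrated with a short, self-contained sketch (not the authors' code): it assumes records of the two candidate sources at the same stations, takes the phase of the single-station cross-spectra, and measures how consistent those phases are across the network. All array names and parameter values below are hypothetical.

```python
import numpy as np

def interstation_phase_coherence(records_a, records_b, dt, fmin=1.0, fmax=8.0):
    """Coherence of single-station cross-spectrum phases across a network.

    records_a, records_b: arrays (n_stations, n_samples) holding the two candidate
    sources recorded at the same stations (hypothetical inputs). Returns a value
    near 1 for co-located sources and near 1/sqrt(n_stations) for unrelated paths.
    """
    n_sta, npts = records_a.shape
    freqs = np.fft.rfftfreq(npts, dt)
    band = (freqs >= fmin) & (freqs <= fmax)
    # Cross-spectrum at each station: the Green's-function phase cancels if the
    # two sources share the same path, leaving only the relative source phase.
    cross = np.fft.rfft(records_a, axis=1) * np.conj(np.fft.rfft(records_b, axis=1))
    phasors = cross[:, band] / (np.abs(cross[:, band]) + 1e-20)
    # Average the unit phasors over stations; coherent phases add constructively.
    return np.abs(phasors.mean(axis=0)).mean()

# Synthetic check: two different source time functions through identical paths
rng = np.random.default_rng(0)
n, n_sta = 1024, 10
src1, src2 = rng.standard_normal(n), rng.standard_normal(n)
greens = rng.standard_normal((n_sta, n))                  # one path per station
rec_a = np.fft.irfft(np.fft.rfft(greens, axis=1) * np.fft.rfft(src1), n, axis=1)
rec_b = np.fft.irfft(np.fft.rfft(greens, axis=1) * np.fft.rfft(src2), n, axis=1)
print(interstation_phase_coherence(rec_a, rec_b, dt=0.01))
```

For truly co-located sources the path terms cancel station by station, so the station-averaged phasor magnitude stays near 1; for separated sources the cross-spectrum phases become station-dependent and the average decays toward the noise level.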
Source Complexity of an Injection Induced Event: The 2016 Mw 5.1 Fairview, Oklahoma Earthquake
NASA Astrophysics Data System (ADS)
López-Comino, J. A.; Cesca, S.
2018-05-01
Complex rupture processes are occasionally resolved for weak earthquakes and can reveal a dominant direction of rupture propagation and the presence and geometry of main slip patches. Finding and characterizing such properties could be important for understanding the nucleation and growth of induced earthquakes. One of the largest earthquakes linked to wastewater injection, the 2016 Mw 5.1 Fairview, Oklahoma earthquake, is analyzed using empirical Green's function techniques to reveal its source complexity. Two subevents are clearly identified and located using a new approach based on relative hypocenter-centroid location. The first subevent has a magnitude of Mw 5.0 and shows that the main rupture propagated toward the NE, in the direction of higher pore pressure perturbations due to wastewater injection. The second subevent appears as an early aftershock with a lower magnitude, Mw 4.7. It is located SW of the mainshock in a region of increased Coulomb stress, where most of the relocated aftershocks occur.
NASA Astrophysics Data System (ADS)
Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.
2004-12-01
This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake and (2) the apparent rupture velocity decreases on this segment.
Source complexity and the physical mechanism of the 2015 Mw 7.9 Bonin Island earthquake
NASA Astrophysics Data System (ADS)
Chen, Y.; Meng, L.; Wen, L.
2015-12-01
The 30 May 2015 Mw 7.9 Bonin Island earthquake is the largest instrumentally recorded deep-focus earthquake in the Izu-Bonin arc. It occurred approximately 100 km deeper than the previous seismicity, in a region unlikely to be within the core of the subducting Izu-Bonin slab. The earthquake provides an unprecedented opportunity to understand the unexpected occurrence of such isolated deep earthquakes. Multiple source inversion of the P, SH, pP and sSH phases and a novel, fully three-dimensional back-projection of P and pP phases are applied to study the coseismic source process. The subevent locations and short-period energy radiation both show an L-shaped bilateral rupture propagating initially in the SW direction and then in the NW direction with an average rupture speed of 2.0 km/s. The decrease of focal depth on the NW branch suggests that the rupture is consistent with a single sub-horizontal plane inferred from the GCMT solution. The multiple source inversion further indicates slight variation of the focal strikes of the subevents with the curvature of the subducting Izu-Bonin slab. The rupture is confined within an area of 20 km x 35 km, rather compact compared with shallow earthquakes of similar magnitude. The earthquake has a high stress drop, on the order of 100 MPa, and a low seismic efficiency of 0.19, indicating large frictional heat dissipation. The only aftershock is 11 km to the east of the mainshock hypocenter and 3 km away from the centroid of the first subevent. Analysis of the regional tomography and nearby seismicity suggests that the earthquake may have occurred at the edge/periphery of the bending slab and is unlikely to be within the "cold" metastable olivine wedge. Our results suggest that spontaneous nucleation of thermally induced shear instability is a possible mechanism for such isolated deep earthquakes.
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo, in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
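As an illustration of the Model (1) fit, the sketch below estimates the gamma shape parameter from rescaled waiting times with SciPy. The catalog here is synthetic and the resulting value is illustrative, not the study's.

```python
import numpy as np
from scipy import stats

# Synthetic "waiting times" with clustering (a mixture of short and long intervals)
rng = np.random.default_rng(42)
waits = np.concatenate([rng.exponential(0.2, 300), rng.exponential(2.0, 700)])

# Rescale by the mean rate, in the spirit of Corral (2004)-style normalization
theta = waits / waits.mean()

# Fit a gamma distribution with the location fixed at zero; a shape parameter
# below 1 indicates an over-abundance of short waiting times relative to Poisson
shape, loc, scale = stats.gamma.fit(theta, floc=0)
print(f"gamma shape = {shape:.2f}  (shape = 1 corresponds to an exponential/Poisson process)")
```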
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw=8.7), the 1934 Bihar-Nepal earthquake (Mw=8.3) and the 1950 Upper Assam earthquake (Mw=8.7), signifies the possibility of great earthquakes from this region in the future. The regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database and the Indian Meteorological Department (IMD). Based on the geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian Plate and the Eurasian Plate; it shows a dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis. This zone was affected by great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults like the Dauki fault and exhibits its own style of prominent tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, forecasts of major earthquakes for each source zone are estimated. In the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the blocked energy and can be utilized for forecasting major earthquakes. The proposed process provides a consistent model of gradual accumulation of strain and non-uniform release through large earthquakes and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes throughout the 110-year period from 1897 to 2007 in all the zones is calculated and plotted. The plot gives a characteristic curve for each zone; each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S. The difference between the theoretical upper limit S and the cumulative energy released up to that time is calculated to find the maximum magnitude of an earthquake which can occur in the future. The blocked energy of the three source zones, available as a supply for potential earthquakes in due course of time, is 1.35×10¹⁷ J, 4.25×10¹⁷ J and 0.12×10¹⁷ J for source zones 1, 2 and 3, respectively. The predicted maximum magnitude (mmax) obtained for the source zones AYZ, HZ and SPZ is 8.2, 8.6 and 8.4, respectively. This result is also consistent with previous predictions by other workers.
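A minimal sketch of the energy-blocked bookkeeping is given below, assuming the Gutenberg-Richter energy-magnitude relation log₁₀E[J] = 1.5M + 4.8 and an illustrative uniform supply rate. The catalog, supply rate and resulting numbers are placeholders and not values from this study.

```python
import numpy as np

def energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10 E[J] = 1.5*M + 4.8."""
    return 10 ** (1.5 * m + 4.8)

def max_blocked_magnitude(blocked_energy):
    """Magnitude whose radiated energy equals the blocked (unreleased) energy."""
    return (np.log10(blocked_energy) - 4.8) / 1.5

# Illustrative catalog for one source zone: (year, magnitude) pairs (not real data)
catalog = [(1905, 7.8), (1934, 8.3), (1950, 8.7), (1988, 7.2)]
released = np.cumsum([energy_joules(m) for _, m in catalog])

# Assumed uniform energy supply rate S per year for the zone (purely illustrative)
supply_rate = 2.0e16                                   # J/yr
years = np.array([y for y, _ in catalog])
supply = supply_rate * (years - years[0] + 1)

blocked = supply[-1] - released[-1]
if blocked > 0:
    print(f"blocked energy ~ {blocked:.2e} J -> max magnitude ~ {max_blocked_magnitude(blocked):.1f}")
else:
    print("released energy exceeds the assumed supply; no blocked energy under these assumptions")
```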
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop and seismic efficiency, and of how source parameters scale with earthquake size, is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be fitted well, and a small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, and in the other support the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
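The spectral-fitting step can be sketched as follows: fit a generalized ω^γ displacement source spectrum to an attenuation- and site-corrected source spectrum (synthetic here), then convert the corner frequency to a Brune-type stress drop. This is a simplified stand-in for the sensitivity-driven, genetic-algorithm scheme of the study; the velocity, moment and all fitted values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, gamma):
    """Generalized omega-gamma displacement source spectrum (gamma = 2 is the Brune model)."""
    return omega0 / (1.0 + (f / fc) ** gamma)

# Synthetic "observed" source spectrum with gamma > 2 plus lognormal noise (illustrative)
freq = np.logspace(-0.5, 1.5, 80)                      # about 0.3 - 30 Hz
rng = np.random.default_rng(1)
observed = source_spectrum(freq, 1.0e-3, 3.0, 2.8) * rng.lognormal(sigma=0.1, size=freq.size)

popt, _ = curve_fit(source_spectrum, freq, observed,
                    p0=[1e-3, 1.0, 2.0], bounds=([0, 0.1, 1.0], [np.inf, 30.0, 4.0]))
omega0, fc, gamma = popt
print(f"omega0 = {omega0:.2e}, fc = {fc:.2f} Hz, gamma = {gamma:.2f}")

# Brune-type stress drop from fc (assumed S-wave velocity and seismic moment)
beta = 3500.0                                          # m/s, assumed
moment = 1.0e14                                        # N*m, assumed
radius = 0.37 * beta / fc                              # Brune (1970) source radius
stress_drop = 7.0 / 16.0 * moment / radius ** 3        # Eshelby circular crack
print(f"source radius ~ {radius:.0f} m, stress drop ~ {stress_drop / 1e6:.2f} MPa")
```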
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.
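For reference, the second-moment quantities behind the FMT can be written compactly. The notation below follows common usage (e.g., McGuire et al., 2002) and is an assumption on our part; the abstract itself does not spell out these formulas.

```latex
\mu^{(m,n)} \;=\; \iint \dot f(\mathbf{r},t)\,(t-t_c)^m\,(\mathbf{r}-\mathbf{r}_c)^{\otimes n}\;d\mathbf{r}\,dt,
\qquad
\tau_c = 2\sqrt{\mu^{(2,0)}},\quad
L_c = 2\sqrt{\lambda_{\max}\!\left(\mu^{(0,2)}\right)},\quad
\mathbf{v}_0 = \frac{\mu^{(1,1)}}{\mu^{(2,0)}},\quad
\mathrm{dir} = \frac{|\mathbf{v}_0|\,\tau_c}{L_c},
```

where f-dot is the scalar space-time source function, (t_c, r_c) its centroid, τ_c and L_c the characteristic duration and rupture length, v_0 the average centroid velocity, and the directivity ratio tends toward 0 for bilateral and 1 for unilateral rupture.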
Seismic Shaking, Tsunami Wave Erosion And Generation of Seismo-Turbidites in the Ionian Sea
NASA Astrophysics Data System (ADS)
Polonia, Alina; Nelson, Hans; Romano, Stefania; Vaiani, Stefano Claudio; Colizza, Ester; Gasparotto, Giorgio; Gasperini, Luca
2016-04-01
We are investigating the effects of earthquakes and tsunamis on the sedimentary record in the Ionian Sea through the analysis of turbidite deposits. A comparison between radiometric dating and historical earthquake catalogs suggests that recent turbidite generation was triggered by great earthquakes in the Calabrian and Hellenic Arcs, such as the AD 1908 Messina, AD 1693 Catania, AD 1169 Eastern Sicily and AD 365 Crete earthquakes. Textural, micropaleontological, geochemical and mineralogical signatures of the youngest three seismo-turbidites reveal cyclic patterns of sedimentary units. The basal stacked turbidites result from multiple slope-failure sources, as shown by different sedimentary structures as well as mineralogic, geochemical and micropaleontological compositions. The homogenite units are graded muds deposited from the waning flows of the multiple turbidity currents that are trapped in the confined Ionian Sea basin. The uppermost unit is divided into two parts. The lower, marine-sourced laminated part without textural gradation we interpret to result from seiching of the confined water mass, which appears to be generated by earthquake ruptures combined with tsunami waves. The uppermost part we interpret as the tsunamite cap, deposited by the slowly settling suspension cloud created by tsunami-wave backwash erosion of the shoreline and continental shelf. This tsunami-process interpretation is based on the final textural gradation of the upper unit and a more continental source of the tsunami cap, which includes C/N > 10 and the lack of abyssal foraminifera species with the local occurrence of inner-shelf foraminifera. Seismic reflection images show that some deeper turbidite beds are very thick and marked by acoustically transparent homogenite mud layers at their top. Based on a high-resolution study of the most recent of such megabeds (the Homogenite/Augias turbidite, i.e. HAT), we show that it was triggered by the AD 365 Crete earthquake. Radiometric dating supports a scenario of synchronous deposition of the HAT in an area as wide as 150,000 km², which suggests basin-scale sediment remobilization processes. The HAT in our cores is made up of a base-to-top sequence of stacked and graded sand/silt units with different compositions related to the Malta, Calabria and Sicilian margin locations. This composition suggests multiple synchronous slope failures typical of seismo-turbidites; however, the Crete earthquake source is too distant from the Italian margins to cause sediment failures by earthquake shaking. Consequently, because our present evidence suggests shallow-water sediment sources, we reinforce previous interpretations that the HAT is a deep-sea "tsunamite" deposit. Utilizing the expanded stratigraphy of the HAT, together with the heterogeneity of the sediment sources of the Ionian margins, we are trying to unravel the relative contributions of seismic shaking (sediment failures, MTDs, turbidity currents) and of tsunami-wave processes (overwash surges, backwash flows, turbidity currents) to seismo-turbidite generation.
Investigating environmental tectonics in Northern Alpine Foreland of Europe
NASA Astrophysics Data System (ADS)
ENTEC Working Group; Cloetingh, Sierd; Ziegler, Peter; Cornu, Tristan
Until now, research on neotectonics and related seismicity has mostly focused on active plate boundaries characterized by a generally high level of earthquake activity. Current seismic hazard estimates for intraplate areas are commonly based on probabilistic analyses of historical and instrumental earthquake data. The accuracy of these hazard estimates is limited by the nature of the data (e.g., ambiguous historical sources), and by the restriction of available earthquake catalogues to time scales of only a few hundred years. Both of these are geologically insignificant and unsuitable for describing tectonic processes causing earthquakes. This is especially relevant to intraplate regions, where faults show low slip rates resulting in long average recurrence times for large earthquakes (10³ to 10⁶ yr), such as the devastating Basel earthquake of 1356, with an estimated magnitude of 6.5.
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.
2011-12-01
Over the past several years, USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data. Data from these same stations will also serve the very practical purpose of enabling earthquake early warning and greatly improving rapid finite-fault source modeling. Multiple uses of the effectively very broad-band data obtained by these stations, for operational and research purposes, are bound to occur, especially because all data will be freely, openly and instantly available.
NASA Astrophysics Data System (ADS)
Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.
2018-03-01
In order to study the subsurface structure of the Merapi-Lawu anomaly (MLA) using forward modelling or full waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, and the stations used must lie outside the MLA in order to avoid the anomaly. At first, the seismograms were processed with the SEISAN v10 software using a few stations from the MERAMEX project. After finding a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters obtained are as follows: the event occurred on August 21, 2004, at 23:22:47 Indonesia Western Standard Time (IWST), with epicentre coordinates 7.80°S, 101.34°E, hypocentral depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.
2015-12-01
Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that can be used to determine the intensity level, and they can be divided into two main categories. The first category is represented by social data sources, in which intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions with the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near-real-time intensity maps and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of a combined intensity level and production of combined intensity maps in near real time. The results compare three types of intensity maps created from physical, social and combined data sources. Here we correlate the count and density of tweets with intensity level and show the importance of processing combined data sources at the earliest time stages after an earthquake happens. This method can supplement existing approaches to intensity level detection, especially in regions with a high number of Twitter users and a low density of seismic networks.
NASA Astrophysics Data System (ADS)
Kurashimo, E.; Hirata, N.; Iwasaki, T.; Sakai, S.; Obara, K.; Ishiyama, T.; Sato, H.
2015-12-01
A shallow earthquake (Mw 6.2) occurred on November 22, 2014, in the northern Nagano Prefecture, central Japan. The aftershock area is located near the Kamishiro fault, which is part of the Itoigawa-Shizuoka Tectonic Line (ISTL), one of the major tectonic boundaries in Japan. A precise aftershock distribution and the heterogeneous structure in and around the source region of this earthquake are important for constraining the process of earthquake occurrence. We conducted a high-density seismic array observation in and around the source area to investigate the aftershock distribution and crustal structure. One hundred sixty-three seismic stations, approximately 1 km apart, were deployed during the period from December 3, 2014 to December 21, 2014. Each seismograph consisted of a 4.5 Hz 3-component seismometer and a digital data recorder (GSX-3). Furthermore, the seismic data at 40 permanent stations were incorporated in our analysis. During the seismic array observation, the Japan Meteorological Agency located 977 earthquakes in a latitude range of 35.5°-37.1°N and a longitude range of 136.7°-139.0°E, from which we selected 500 local events distributed uniformly in the study area. To investigate the aftershock distribution and the crustal structure, the double-difference tomography method [Zhang and Thurber, 2003] was applied to the P- and S-wave arrival time data obtained from the 500 local earthquakes. The relocated aftershock distribution shows a concentration on a plane dipping eastward in the vicinity of the mainshock hypocenter. The large slip region (asperity) estimated from InSAR analysis [GSI, 2014] corresponds to the low-activity region of the aftershocks. The depth section of the Vp structure shows that a high-Vp zone corresponds to the large slip region. These results suggest that structural heterogeneities in and around the fault plane may have controlled the rupture process of the 2014 northern Nagano Prefecture earthquake.
Seismic hazard analysis with the PSHA method in four cities in Java.
NASA Astrophysics Data System (ADS)
Elistyawati, Y.; Palupi, I. R.; Suharsono
2016-11-01
In this study, tectonic earthquakes were analyzed in terms of peak ground acceleration using the PSHA method, dividing the region into earthquake source zones. The study used earthquake data from 1965-2015 whose completeness had been analyzed; the study area covered the whole of Java, with emphasis on four large cities prone to earthquakes. The results are hazard maps for return periods of 500 years and 2500 years, and hazard curves for the four major cities (Jakarta, Bandung, Yogyakarta, and Banyuwangi). The 500-year PGA hazard map of Java shows peak ground accelerations ranging from 0 g to ≥ 0.5 g, while the 2500-year return period gives values from 0 to ≥ 0.8 g. The PGA hazard curves show that the most influential earthquake source for Jakarta is the Cimandiri fault background source, for Bandung a background fault source, for Yogyakarta the background source of the Opak fault, and for Banyuwangi the Java and Sumba megathrust sources.
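The essential PSHA calculation behind such hazard maps can be sketched in a few lines: combine a Gutenberg-Richter recurrence model for a source zone with a ground-motion model and its lognormal variability, and sum to an annual exceedance rate. The snippet below is a toy, single-source illustration; the GMPE coefficients, recurrence parameters and distance are invented for the example and are not those used in the study.

```python
import numpy as np
from scipy.stats import norm

def gmpe_ln_pga(m, r_km):
    """Toy ground-motion model: mean ln(PGA in g) and sigma (not a published GMPE)."""
    return -3.5 + 1.0 * m - 1.3 * np.log(r_km + 10.0), 0.6

mags = np.arange(5.0, 8.0, 0.1)
a_value, b_value = 4.0, 1.0                      # assumed Gutenberg-Richter parameters
# Incremental annual rate of events in each 0.1-magnitude bin
rates = 10 ** (a_value - b_value * mags) - 10 ** (a_value - b_value * (mags + 0.1))
r_km = 30.0                                      # single source-to-site distance, assumed

pga_levels = np.logspace(-2, 0.3, 50)            # 0.01 g to about 2 g
annual_rate = np.zeros_like(pga_levels)
for m, rate in zip(mags, rates):
    mean, sigma = gmpe_ln_pga(m, r_km)
    # Lognormal ground-motion variability: probability of exceeding each PGA level
    annual_rate += rate * (1.0 - norm.cdf(np.log(pga_levels), mean, sigma))

# Read off the ground motion with a ~475-year return period (about 10% in 50 years)
pga_475 = np.interp(1.0 / 475.0, annual_rate[::-1], pga_levels[::-1])
prob_50yr = 1.0 - np.exp(-annual_rate * 50.0)
idx = np.argmin(np.abs(pga_levels - pga_475))
print(f"475-year PGA ~ {pga_475:.2f} g; 50-year exceedance there ~ {prob_50yr[idx]:.2f}")
```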
Earthquake source nucleation process in the zone of a permanently creeping deep fault
NASA Astrophysics Data System (ADS)
Lykov, V. I.; Mostryukov, A. O.
2008-10-01
The worldwide practice of earthquake prediction, whose beginning relates to the 1970s, shows that spatial manifestations of various precursors under real seismotectonic conditions are very irregular. As noted in [Kurbanov et al., 1980], zones of bending, intersection, and branching of deep faults, where conditions are favorable for increasing tangential tectonic stresses, serve as “natural amplifiers” of precursory effects. The earthquake of September 28, 2004, occurred on the Parkfield segment of the San Andreas deep fault in the area of a local bending of its plane. The fault segment about 60 km long and its vicinities are the oldest prognostic area in California. Results of observations before and after the earthquake were promptly analyzed and published in a special issue of Seismological Research Letters (2005, Vol. 76, no. 1). We have an original method enabling the monitoring of the integral rigidity of seismically active rock massifs. The integral rigidity is determined from the relative numbers of brittle and viscous failure acts during the formation of source ruptures of background earthquakes in a given massif. Fracture mechanisms are diagnosed from the steepness of the first arrival of the direct P wave. Principles underlying our method are described in [Lykov and Mostryukov, 1996, 2001, 2003]. Results of monitoring have been directly displayed at the site of the Laboratory (
A New Network-Based Approach for the Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.
2017-12-01
Here we propose a new method which allows issuing an early warning based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimate of these parameters at stations around the source allows prediction of the geometry and extent of the PDZ, but also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by a retrospective analysis of three earthquakes: the 2016 Italy Mw 6.5, 2008 Iwate-Miyagi Mw 6.9 and 2011 Tohoku Mw 9.0 events. The source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of the events.
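A minimal sketch of the Pd-to-PGV step is shown below: measure the peak P-wave displacement in a short window after the P arrival and map it to an expected PGV with a log-linear relation. The coefficients and the synthetic trace are assumptions for illustration, not the calibrated relations of the proposed system.

```python
import numpy as np

def peak_p_displacement(displ, dt, p_arrival_s, window_s=3.0):
    """Peak absolute displacement (Pd) in a fixed window after the P arrival."""
    i0 = int(p_arrival_s / dt)
    i1 = i0 + int(window_s / dt)
    return np.max(np.abs(displ[i0:i1]))

def predict_pgv_from_pd(pd_cm, a=0.92, b=1.03):
    """Toy log-linear Pd-PGV relation: log10(PGV) = a*log10(Pd) + b (coefficients assumed)."""
    return 10 ** (a * np.log10(pd_cm) + b)

# Hypothetical usage: a synthetic displacement trace in cm sampled at 100 Hz
dt = 0.01
t = np.arange(0, 30, dt)
displ = 0.5 * np.exp(-((t - 12.0) / 2.0) ** 2) * np.sin(2 * np.pi * 1.0 * t)
pd = peak_p_displacement(displ, dt, p_arrival_s=10.0, window_s=3.0)
print(f"Pd = {pd:.3f} cm -> predicted PGV ~ {predict_pgv_from_pd(pd):.1f} cm/s")
```

Interpolating such station-by-station PGV predictions over a spatial grid, and updating them as the P-wave window expands, is what lets the PDZ contour evolve in real time.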
NASA Astrophysics Data System (ADS)
Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin
2013-01-01
To investigate source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves and conducted a spatial grid search to find the best-fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in artifact source locations at shallow depths near the trench caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at deeper depths during the earthquake, a feature which is consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents, and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent and that the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculate that the initial slow rupture propagated bilaterally from the hypocenter toward the land and the trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.
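The grid-search idea can be sketched as follows: for each candidate source node, predict high-frequency S-wave amplitudes under isotropic radiation with geometrical spreading and attenuation, and keep the node that best matches the observed amplitude pattern once the unknown source strength is removed. The geometry, attenuation and amplitudes below are synthetic placeholders, not the study's slab geometry or data.

```python
import numpy as np

def predicted_amplitude(source_xy, station_xy, a0=1.0, q=0.005):
    """Isotropic far-field S amplitude: 1/r geometrical spreading with exponential decay."""
    r = np.linalg.norm(station_xy - source_xy, axis=1) + 1.0   # km; +1 avoids r = 0
    return a0 * np.exp(-q * r) / r

def locate_by_grid_search(observed, station_xy, nodes_xy):
    """Return the node whose amplitude pattern best fits the observations (L2 in log domain).

    The unknown source strength is removed by demeaning the log amplitudes."""
    best_node, best_misfit = None, np.inf
    log_obs = np.log(observed) - np.mean(np.log(observed))
    for node in nodes_xy:
        log_pred = np.log(predicted_amplitude(node, station_xy))
        log_pred -= log_pred.mean()
        misfit = np.sum((log_obs - log_pred) ** 2)
        if misfit < best_misfit:
            best_node, best_misfit = node, misfit
    return best_node, best_misfit

# Synthetic test: stations along a coast-like line, candidate nodes on an offshore grid
rng = np.random.default_rng(3)
stations = np.column_stack([np.zeros(15), np.linspace(-200, 200, 15)])      # km
nodes = np.array([[x, y] for x in np.arange(50, 250, 10) for y in np.arange(-150, 150, 10)])
true_source = np.array([120.0, 40.0])
obs = predicted_amplitude(true_source, stations) * rng.lognormal(sigma=0.2, size=15)
print(locate_by_grid_search(obs, stations, nodes))
```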
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
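A minimal sketch of the envelope-modulated, filtered white-noise process mentioned above is given below; the filter band and envelope shape are illustrative parameters, not the seismological model of the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def stochastic_accelerogram(duration=20.0, dt=0.01, f_low=0.5, f_high=15.0, seed=0):
    """White noise, band-pass filtered and shaped by a simple multiplicative envelope."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    noise = rng.standard_normal(n)
    # Band-pass filter the noise to the frequency range carrying source/path energy
    b, a = butter(4, [f_low, f_high], btype="bandpass", fs=1.0 / dt)
    filtered = filtfilt(b, a, noise)
    # Envelope with a rapid build-up and exponential decay (shape parameters assumed)
    envelope = (t / 2.0) ** 2 * np.exp(-t / 4.0)
    envelope /= envelope.max()
    return t, filtered * envelope

t, acc = stochastic_accelerogram()
print(f"peak |a| of the synthetic record: {np.max(np.abs(acc)):.2f} (arbitrary units)")
```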
Methodology to determine the parameters of historical earthquakes in China
NASA Astrophysics Data System (ADS)
Wang, Jian; Lin, Guoliang; Zhang, Zhe
2017-12-01
China is one of the countries with the longest cultural traditions, and it has suffered very heavy earthquake disasters, so there are abundant earthquake records. In this paper, we sketch out the historical earthquake sources and research achievements in China. We introduce basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to refine the parameters of historical earthquakes or even to identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. The abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable cultural heritage for the world.
Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.
2014-04-01
The source process of the January 20, 2007, Mw 4.5 Odaesan earthquake in South Korea is investigated in low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are adopted for the low-frequency band of 0.1-0.3 Hz by using the wavenumber integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search across the strike, dip, rake, and focal depth of the rupture nucleation parameters was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of the grid-searched parameters. The focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW-striking faulting plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. At low frequencies the slip pattern indicates a simple source process, with the event effectively acting as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with various rupture velocities in the range of 2.0-4.0 km/s. Although there is a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane. In the high-frequency analysis, the slip distribution shows an oblique pattern around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported for large earthquakes.
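The non-negative least-squares step can be illustrated with a compact sketch: given a Green's function matrix mapping subfault slips to waveform samples, solve for slips constrained to be non-negative. Here the Green's functions are random placeholders rather than the wavenumber-integration or empirical Green's functions of the study.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

n_subfaults, n_samples = 24, 600
# Placeholder Green's function matrix: column j holds the concatenated waveforms
# for unit slip on subfault j (in the study these come from synthetic or empirical GFs)
G = rng.standard_normal((n_samples, n_subfaults))

# "True" slip: non-negative, concentrated on a few subfaults (illustrative values)
true_slip = np.zeros(n_subfaults)
true_slip[[5, 6, 12]] = [0.4, 0.9, 0.3]          # metres

data = G @ true_slip + 0.05 * rng.standard_normal(n_samples)

# Non-negative least squares keeps every subfault slip >= 0
slip, residual_norm = nnls(G, data)
print("recovered slip on subfaults 5, 6, 12:", np.round(slip[[5, 6, 12]], 2))
print("RMS misfit:", residual_norm / np.sqrt(n_samples))
```

Repeating this solve for each grid-searched combination of strike, dip, rake and depth, and keeping the combination with the smallest residual, is the essence of the iterative scheme described in the abstract.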
Monitoring the Earthquake source process in North America
Herrmann, Robert B.; Benz, H.; Ammon, C.J.
2011-01-01
With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response system (PAGER), rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can usually determine, with few exceptions, moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes that are Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the direction of maximum compressive stress and the predominant style of faulting are spatially coherent across large regions of the continent.
Source processes of industrially-induced earthquakes at the Geysers geothermal area, California
Ross, A.; Foulger, G.R.; Julian, B.R.
1999-01-01
Microearthquake activity at The Geysers geothermal area, California, mirrors the steam production rate, suggesting that the earthquakes are industrially induced. A 15-station network of digital, three-component seismic stations was operated for one month in 1991, and 3,900 earthquakes were recorded. Highly accurate moment tensors were derived for 30 of the best recorded earthquakes by tracing rays through tomographically derived 3-D VP and VP/VS structures, and inverting P- and S-wave polarities and amplitude ratios. The orientations of the P- and T-axes are very scattered, suggesting that there is no strong, systematic deviatoric stress field in the reservoir, which could explain why the earthquakes are not large. Most of the events had significant non-double-couple (non-DC) components in their source mechanisms, with volumetric components up to ~30% of the total moment. Explosive and implosive sources were observed in approximately equal numbers, and must be caused by cavity creation (or expansion) and collapse. It is likely that there is a causal relationship between these processes and fluid reinjection and steam withdrawal. Compensated linear vector dipole (CLVD) components were up to 100% of the deviatoric component. Combinations of opening cracks and shear faults cannot explain all the observations, and rapid fluid flow may also be involved. The pattern of non-DC failure at The Geysers contrasts with that of the Hengill-Grensdalur area in Iceland, a largely unexploited water-dominated field in an extensional stress regime. These differences are poorly understood but may be linked to the contrasting regional stress regimes and the industrial exploitation at The Geysers.
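A sketch of the standard eigenvalue-based decomposition used to quantify such non-DC components is given below; the example tensor is invented, and the percentages follow the common ε-based convention rather than necessarily the exact measure used in the paper.

```python
import numpy as np

def decompose_moment_tensor(m):
    """ISO/DC/CLVD decomposition of a symmetric 3x3 moment tensor (eigenvalue based)."""
    m = np.asarray(m, dtype=float)
    iso = np.trace(m) / 3.0                               # isotropic (volumetric) part
    deviatoric = m - iso * np.eye(3)
    eig = np.linalg.eigvalsh(deviatoric)
    abs_sorted = eig[np.argsort(np.abs(eig))]             # order by absolute size
    # epsilon in [-0.5, 0.5]: 0 is a pure double couple, +/-0.5 a pure CLVD
    epsilon = -abs_sorted[0] / abs(abs_sorted[2])
    clvd_fraction = 2.0 * abs(epsilon)
    dc_fraction = 1.0 - clvd_fraction
    return iso, dc_fraction, clvd_fraction

# Hypothetical moment tensor with a volumetric (explosive) component
m = np.array([[1.3, 0.2, 0.0],
              [0.2, 0.9, 0.1],
              [0.0, 0.1, 0.4]])
iso, dc, clvd = decompose_moment_tensor(m)
print(f"isotropic part = {iso:.2f}, DC fraction = {dc:.2f}, CLVD fraction = {clvd:.2f}")
```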
Adjoint Inversion for Extended Earthquake Source Kinematics From Very Dense Strong Motion Data
NASA Astrophysics Data System (ADS)
Ampuero, J. P.; Somala, S.; Lapusta, N.
2010-12-01
Addressing key open questions about earthquake dynamics requires a radical improvement of the robustness and resolution of seismic observations of large earthquakes. Proposals for a new generation of earthquake observation systems include the deployment of “community seismic networks” of low-cost accelerometers in urban areas and the extraction of strong ground motions from high-rate optical images of the Earth's surface recorded by a large space telescope in geostationary orbit. Both systems could deliver strong motion data with a spatial density orders of magnitude higher than current seismic networks. In particular, a “space seismometer” could sample the seismic wave field at a spatio-temporal resolution of 100 m, 1 Hz over areas several 100 km wide with an amplitude resolution of a few cm/s in ground velocity. The amount of data to process would be immensely larger than what current extended source inversion algorithms can handle, which hampers the quantitative assessment of the cost-benefit trade-offs that can guide the practical design of the proposed earthquake observation systems. We report here on the development of a scalable source imaging technique based on iterative adjoint inversion and its application to the proof-of-concept of a space seismometer. We generated synthetic ground motions for M7 earthquake rupture scenarios based on dynamic rupture simulations on a vertical strike-slip fault embedded in an elastic half-space. The range of scenarios includes increasing levels of complexity and interesting features such as supershear rupture speed. The resulting ground shaking is then processed according to what would be captured by an optical satellite. Based on the resulting data, we perform source inversion by an adjoint/time-reversal method. The gradient of a cost function quantifying the waveform misfit between data and synthetics is efficiently obtained by applying the time-reversed ground velocity residuals as surface force sources, back-propagating them onto the locked fault plane through a seismic wave simulation and recording the fault shear stress, which is the adjoint field of the fault slip-rate. Restricting the procedure to a single iteration is known as imaging. The source reconstructed by imaging reproduces the original forward model quite well in the shallow part of the fault. However, the deeper part of the earthquake source is not well reproduced, due to the lack of data on the side and bottom boundaries of our computational domain. To resolve this issue, we are implementing the complete iterative procedure and we will report on the convergence aspects of the adjoint iterations. Our current work is also directed towards addressing the lack of data on other boundaries of our domain and improving the source reconstruction by including teleseismic data for those boundaries and non-negativity constraints on the dominant slip-rate component.
Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Kubo, H.
2013-12-01
Constructing source models of huge subduction earthquakes is a very important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large slip areas of heterogeneous slip models and of the total strong motion generation area (SMGA) with seismic moment for subduction earthquakes, and found a systematic change in the ratio of the SMGA to the large slip area with seismic moment. They concluded that this tendency would be caused by the difference in the period range of the source modeling analyses. In this paper, we try to develop a methodology for constructing source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it by strong ground motion modeling.
Investigation of Pre-Earthquake Ionospheric Disturbances by 3D Tomographic Analysis
NASA Astrophysics Data System (ADS)
Yagmur, M.
2016-12-01
Ionospheric variations before earthquakes are widely discussed phenomena in ionospheric studies, and clarifying their source and mechanism is highly important for earthquake forecasting. To understand the mechanical and physical processes of pre-seismic ionospheric anomalies, which might even be related to Lithosphere-Atmosphere-Ionosphere-Magnetosphere coupling, both statistical and 3D modeling analyses are needed. For this purpose, we first investigated the relation between ionospheric TEC anomalies and potential source mechanisms such as space weather activity and lithospheric phenomena like positive surface electric charges. To distinguish their effects on ionospheric TEC, we focused on pre-seismically active days. We then analyzed statistical data for 54 earthquakes with M ≥ 6 between 2000 and 2013, as well as the 2011 Tohoku and the 2016 Kumamoto earthquakes in Japan. By comparing TEC anomalies with solar activity through the Dst index, we found 28 events that might be related to earthquake activity. Following the statistical analysis, we also investigated the lithospheric effect on TEC changes on selected days. Among those days, we chose the 2011 Tohoku and the 2016 Kumamoto earthquakes as case studies and produced 3D reconstructed images by utilizing a 3D tomography technique with neural networks. The results will be presented. Keywords: Earthquake, 3D Ionospheric Tomography, Positive and Negative Anomaly, Geomagnetic Storm, Lithosphere
Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field.
Holtzman, Benjamin K; Paté, Arthur; Paisley, John; Waldhauser, Felix; Repetto, Douglas
2018-05-01
The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity.
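A simplified stand-in for this kind of pipeline is sketched below: build a per-event spectral feature matrix, compress it with non-negative matrix factorization, and group events with K-means. The feature construction and every parameter are placeholders; the paper's actual feature extraction and clustering choices are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)

# Placeholder feature matrix: one row per event, columns = non-negative spectral
# amplitudes in frequency bins (in practice computed from each event's waveforms)
n_events, n_bins = 500, 64
base_shapes = rng.random((3, n_bins))                    # three underlying source "types"
labels_true = rng.integers(0, 3, n_events)
spectra = base_shapes[labels_true] + 0.05 * rng.random((n_events, n_bins))

# NMF compresses the spectra into a few components; K-means then groups events
# whose spectral shapes are similar
features = NMF(n_components=5, init="nndsvda", max_iter=500).fit_transform(spectra)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# With origin times attached to each event, cluster occupancy can be plotted against
# injection rate to look for the annual cycles described in the abstract
print(np.bincount(clusters))
```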
Research on Collection of Earthquake Disaster Information from the Crowd
NASA Astrophysics Data System (ADS)
Nian, Z.
2017-12-01
In China, the assessment of earthquake disaster information is mainly based on inversion of the seismic source mechanism and pre-calculated population data models; the actual earthquake disaster information is usually collected through government departments, and both its accuracy and its speed need to be improved. In a massive earthquake like the one in Mexico, the ground telecommunications infrastructure was damaged and the quake zone was difficult to observe by satellites and aircraft in the bad weather; only a little information was sent out through another country's maritime satellite. Thus, the timely and effective development of disaster relief was seriously affected. Now that Chinese communication satellites are in orbit, people no longer rely only on ground telecom base stations to keep in communication with the outside world, open web pages, log in to social networking sites, release information, and transmit images and videos. This paper establishes an earthquake information collection system in which the public can participate. Through popular social platforms and other information sources, the public can participate in the collection of earthquake information and supply quake-zone information, including photos and videos, especially material produced by unmanned aerial vehicles (UAVs) after an earthquake; the public can use computers, portable terminals, or mobile text messages to take part in the collection. In the system, the information is divided into basic earthquake-zone information, earthquake disaster reduction information, earthquake site information, post-disaster reconstruction information, etc., which are processed and put into a database. The quality of the data is analyzed using multi-source information and controlled through local public feedback, in order to supplement the data collected by government departments in a timely manner and to calibrate simulation results, which will better guide disaster relief scheduling and post-disaster reconstruction. In the future, we will work hard to raise public awareness, foster a consciousness of public participation, and improve the quality of publicly supplied data.
NASA Astrophysics Data System (ADS)
Kausel, Edgar; Campos, Jaime
1992-08-01
The only known great (Ms = 8) intermediate-depth earthquake localized downdip of the main thrust zone of the Chilean subduction zone occurred landward of Antofagasta on 9 December 1950. In this paper we determine the source parameters and rupture process of this shock by modeling long-period body waves. The source mechanism corresponds to a downdip tensional intraplate event rupturing along a nearly vertical plane with a seismic moment of M0 = 1 × 10²⁸ dyn cm, strike 350°, dip 88°, slip 270°, Mw = 7.9 and a stress drop of about 100 bar. The source time function consists of two subevents, the second being responsible for 70% of the total moment release. The unusually large magnitude (Ms = 8) of this intermediate-depth event suggests a rupture through the entire lithosphere. The spatial and temporal stress regime in this region is discussed. The simplest interpretation suggests that a large thrust earthquake should follow the 1950 tensional shock. Considering that the historical record of the region does not show large earthquakes, a 'slow' earthquake can be postulated as an alternative mechanism to unload the thrust zone. A weakly coupled subduction zone—within an otherwise strongly coupled region as evidenced by great earthquakes to the north and south—or the existence of creep are not consistent with the occurrence of a large tensional earthquake in the subducting lithosphere downdip of the thrust zone. The study of focal mechanisms of the outer-rise earthquakes would add more information which would help us to infer the present state of stress in the thrust region.
NASA Astrophysics Data System (ADS)
Anggraini, Ade; Sobiesiak, Monika; Walter, Thomas R.
2010-05-01
The Mw 6.3 May 26, 2006 Yogyakarta earthquake caused severe damage and claimed thousands of lives in the Yogyakarta Special Province and the Klaten District of Central Java Province. The nearby Opak River fault was thought to be the source of this earthquake disaster. However, no significant surface movement was observed along the fault which could confirm that this fault was really the source of the earthquake. To investigate the earthquake source and to understand the earthquake mechanism, a rapid response team of the German Task Force for Earthquakes, together with the Seismological Division of Badan Meteorologi Klimatologi dan Geofisika and Gadjah Mada University in Yogyakarta, installed a temporary seismic network of 12 short-period seismometers. More than 3000 aftershocks were recorded during the 3-month campaign. Here we present the results of several hundred processed aftershocks. We used the integrated software package GIANTPitsa to pick P and S phases manually and HYPO71 to determine the hypocenters. The HypoDD software was used for hypocenter relocation to obtain high-precision aftershock locations. Our aftershock distribution shows a system of lineaments in a southwest-northeast direction, about 10 km east of the Opak River fault, at 5-18 km depth. The b-value map from the aftershocks shows that the main lineaments have a relatively low b-value in the middle part, which suggests this part is still under stress. We also observe several aftershock clusters cutting these lineaments in a nearly perpendicular direction. To verify the interpretation of our aftershock analysis, we will overlay it on surface features delineated from satellite data. We hope our results will give a significant contribution to understanding the near-surface fault systems around the Yogyakarta area in order to mitigate similar earthquake hazards in the future.
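The b-value mapping step rests on the maximum-likelihood estimator of Aki (1965); a minimal sketch with a synthetic catalog is given below (the completeness magnitude, binning and catalog are illustrative, not the study's data).

```python
import numpy as np

def b_value_mle(magnitudes, m_complete, delta_m=0.1):
    """Aki (1965) maximum-likelihood b-value with the Utsu binning correction."""
    m = np.asarray(magnitudes)
    m = m[m >= m_complete]
    b = np.log10(np.e) / (m.mean() - (m_complete - delta_m / 2.0))
    return b, b / np.sqrt(len(m))          # estimate and a rough standard error

# Synthetic aftershock catalog following Gutenberg-Richter with b ~ 1,
# rounded to 0.1 magnitude bins as in a real catalog
rng = np.random.default_rng(5)
mags = np.round(1.0 + rng.exponential(scale=1.0 / np.log(10), size=2000), 1)
b, err = b_value_mle(mags, m_complete=1.0)
print(f"b = {b:.2f} +/- {err:.2f}")
```

Applying the estimator to the events falling in each spatial cell along the lineaments, with a per-cell completeness check, yields the kind of b-value map described in the abstract.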
Hartzell, S.; Iida, M.
1990-01-01
Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low-slip annulus. Other sources, with slips from 40 to 70 cm each, ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions are 10 km long (along the strike) and 6 km wide (down the dip). The preferred dip is 30° and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 × 10^24 dyn cm, depending on the rupture model.
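The moment estimates quoted above can be cross-checked against the inverted fault dimensions and slip through M0 = mu * A * D; in the sketch below the rigidity is an assumed generic crustal value, and the slip is taken as a representative patch value from the abstract.

```python
mu = 3.0e10          # Pa, assumed crustal rigidity (not stated in the abstract)
length = 10.0e3      # m, along strike, from the inversion
width = 6.0e3        # m, down dip
slip = 0.55          # m, representative slip of the larger source patches

m0_nm = mu * length * width * slip     # scalar moment in N m
m0_dyn_cm = m0_nm * 1.0e7              # convert to dyn cm
print(f"M0 ~ {m0_dyn_cm:.1e} dyn cm")  # same order as the 7.4-10.0 x 10^24 dyn cm range quoted
```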
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Maeda, T.; Matsuyama, H.; Toyama, N.; Kito, T.; Murata, Y.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.; Hakamata, T.
2017-12-01
For the forthcoming large earthquakes along the Sagami Trough, where the Philippine Sea Plate is subducting beneath the northeast Japan arc, the Earthquake Research Committee (ERC) / Headquarters for Earthquake Research Promotion, Japanese government (2014a) assessed that M7- and M8-class earthquakes will occur there and defined the possible extent of the earthquake source areas. They assessed occurrence probabilities within the next 30 years (from Jan. 1, 2014) of 70% for the M7-class earthquakes and 0%-5% for the M8-class earthquakes. First, we set 10 possible earthquake source areas (ESAs) for M8-class earthquakes and 920 ESAs for M7-class earthquakes. Next, we constructed 125 characterized earthquake fault models (CEFMs) for M8-class earthquakes and 938 CEFMs for M7-class earthquakes, based on the "tsunami recipe" of ERC (2017) (Kitoh et al., 2016, JpGU). All the CEFMs are allowed to have a large-slip area to express fault slip heterogeneity. For all the CEFMs, we calculate tsunamis by solving a nonlinear long-wave equation with a finite-difference method, including runup calculation, over a nesting grid system with a minimum grid size of 50 meters. Finally, we re-distributed the occurrence probability over all CEFMs (Abe et al., 2014, JpGU) and gathered the exceedance probabilities for variable tsunami heights, calculated from all the CEFMs, at every observation point along the Pacific coast to obtain the PTHA. We incorporated aleatory uncertainties inherent in the tsunami calculation and in the earthquake fault slip heterogeneity. We considered two kinds of probabilistic hazard models: a "present-time hazard model", under the assumption that earthquake occurrence basically follows a renewal process based on the BPT distribution when the latest faulting time is known, and a "long-time averaged hazard model", under the assumption that earthquake occurrence follows a stationary Poisson process. We focus, for example, on the probability that the tsunami height will exceed 3 meters at coastal points in the next 30 years (from Jan. 1, 2014). The present-time hazard model shows relatively high probabilities, over 0.1%, along the Boso Peninsula. The long-time averaged hazard model shows the highest probabilities, over 3%, along the Boso Peninsula and relatively high probabilities, over 0.1%, along wide coastal areas on the Pacific side from the Kii Peninsula to Fukushima Prefecture.
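The two hazard models described above differ only in the assumed recurrence statistics (BPT renewal versus stationary Poisson); the sketch below evaluates the corresponding 30-year probabilities for a single source. The mean recurrence interval, aperiodicity, and elapsed time are hypothetical values, not those adopted by the ERC.

```python
import math
from scipy.stats import invgauss

def bpt_conditional(mean_rec, alpha, elapsed, window):
    """P(event in [elapsed, elapsed+window] | no event up to elapsed) for a BPT model.
    BPT(mu, alpha) is an inverse Gaussian with mean mu and coefficient of variation alpha."""
    dist = invgauss(alpha**2, scale=mean_rec / alpha**2)
    num = dist.cdf(elapsed + window) - dist.cdf(elapsed)
    den = 1.0 - dist.cdf(elapsed)
    return num / den

def poisson_probability(mean_rec, window):
    """Probability of at least one event in the window for a stationary Poisson process."""
    return 1.0 - math.exp(-window / mean_rec)

# Hypothetical values for a single M8-class source, purely for illustration.
mu, alpha, elapsed, window = 200.0, 0.3, 90.0, 30.0  # years
print(f"BPT     30-yr probability: {bpt_conditional(mu, alpha, elapsed, window):.3f}")
print(f"Poisson 30-yr probability: {poisson_probability(mu, window):.3f}")
```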
Relating stick-slip friction experiments to earthquake source parameters
McGarr, Arthur F.
2012-01-01
Analytical results for parameters, such as static stress drop, for stick-slip friction experiments with arbitrary input parameters can be determined by solving an energy-balance equation. These results can then be related to a given earthquake based on its seismic moment and the maximum slip within its rupture zone, assuming that the rupture process entails the same physics as stick-slip friction. This analysis yields overshoots and ratios of apparent stress to static stress drop of about 0.25. The inferred earthquake source parameters (static stress drop, apparent stress, slip rate, and radiated energy) are robust inasmuch as they are largely independent of the experimental parameters used in their estimation. Instead, these earthquake parameters depend on C, the ratio of maximum slip to the cube root of the seismic moment. C is controlled by the normal stress applied to the rupture plane and the difference between the static and dynamic coefficients of friction. Estimating yield stress and seismic efficiency using the same procedure is only possible when the actual static and dynamic coefficients of friction within the earthquake rupture zone are known.
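The parameter C introduced above is simple to evaluate for any event once the moment and maximum slip are known; the sketch below does so for hypothetical values and also applies the ~0.25 ratio of apparent stress to static stress drop quoted in the abstract.

```python
# Hypothetical inputs, purely to illustrate the scaling described in the abstract.
m0 = 1.0e18        # N m, seismic moment (roughly Mw 6)
max_slip = 1.2     # m, maximum slip within the rupture zone

c = max_slip / m0 ** (1.0 / 3.0)     # McGarr's C: maximum slip over cube root of moment
print(f"C = {c:.2e} m (N m)^(-1/3)")

# The abstract reports apparent stress / static stress drop of about 0.25,
# so a stress-drop estimate implies an apparent-stress estimate directly.
static_stress_drop = 3.0e6           # Pa, hypothetical
apparent_stress = 0.25 * static_stress_drop
print(f"apparent stress ~ {apparent_stress:.1e} Pa")
```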
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes, which are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes, in the magnitude range 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns is generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized, and the vertical displacement fields from point-source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field, arising from the inherent complexity of subduction zone earthquakes, than is predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
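Stochastic, self-affine slip distributions of the kind described above are often generated by shaping random phase with a power-law wavenumber filter; the sketch below is a generic k^-2-style realization, not the specific parameterization of the paper (fault size, corner wavenumber, and mean slip are placeholders).

```python
import numpy as np

def self_affine_slip(nx, ny, dx, corner_k=0.05, decay=2.0, mean_slip=5.0, seed=0):
    """Random slip field whose amplitude spectrum falls off as ~k^-decay beyond corner_k."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    amp = (1.0 + (k / corner_k) ** 2) ** (-decay / 2.0)   # flat below the corner, k^-decay above
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))     # random phase
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                                    # keep slip non-negative
    slip *= mean_slip / slip.mean()                       # rescale to the target mean slip
    return slip

slip = self_affine_slip(nx=128, ny=64, dx=1.0)            # e.g. 128 km x 64 km fault, 1 km cells
print(slip.shape, f"mean slip = {slip.mean():.2f} m")
```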
NASA Astrophysics Data System (ADS)
Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.
2013-12-01
Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. Limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First we demonstrate the ability of seismogeodetic source models determined from combined land-based GPS and strong motion seismometers to forecast near-source tsunamis within ~3 minutes of earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow-water (100-1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from 2 near-source ocean-bottom pressure sensors and 6 GPS buoys into the earthquake source inversion process. Tsunami Green's functions (tGFs) are generated using the GeoClaw package, a benchmarked finite volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to 2000+ tsunami runup survey measurements collected after the event. We update the source model and the tsunami forecast and warning at 10-minute intervals. We show that by 20 minutes after OT the tsunami is well predicted, with a high variance reduction relative to the survey data, and by ~30 minutes a model that can be considered final, since little change is observed afterwards, is achieved. This is an indirect approach to tsunami warning: it relies on automatic determination of the earthquake source prior to tsunami simulation. It is more robust than ad-hoc approaches because it relies on computation of a finite-extent centroid moment tensor to objectively determine the style of faulting and the fault plane geometry on which to launch the heterogeneous static slip inversion. Operator interaction and physical assumptions are minimal. Thus, the approach can provide the initial conditions for tsunami simulation (seafloor motion) irrespective of the type of earthquake source, and it relies heavily on oceanic wave gauge measurements for source determination. It reliably distinguishes among strike-slip, normal and thrust faulting events, all of which have been observed recently to occur in subduction zones and pose distinct tsunami hazards.
Characterizing the Seismic Ocean Bottom Environment of the Bransfield Strait
NASA Astrophysics Data System (ADS)
Washington, B.; Lekic, V.; Schmerr, N. C.
2017-12-01
Ocean bottom seismometers record ground motions that result from earthquakes, anthropogenic sound sources (e.g., propellers, air gun sources), ocean waves and currents, biological activity, as well as surface processes on the sea and coastal land. Over a two-week span in April 2001, the Austral late fall, ten stations arranged in eleven lines were deployed beneath the Bransfield Strait along the Antarctic Peninsula to passively record data before and after an active source seismic survey. The goal of this study is to understand ocean bottom seismicity, identify centers of seismic activity, and characterize possible glaciological mechanisms of icequakes and tremors. The instruments were sampled at 200 Hz, allowing signals from icequakes, small earthquakes, and other high-frequency sources to be detected and located. By visualizing the data as spectrograms, we identify and document ground vibrations excited by local earthquakes, whale songs, and sources potentially due to surface processes, such as the cracking and movement of icebergs or ice shelves, including possible harmonic tremors from the ice or the nearby volcanic arc. Using the relative timing of P-wave arrivals, we locate the hypocenters of nearby earthquakes and icequakes, and present frequency-dependent polarization analysis of their waveforms. Marine mammal sounds form a substantial part of the overall acoustic environment; late March and early April are the best months to hear whales such as humpback, sperm and orca communicating amongst each other because they are drawn to the cold, nutrient-rich Antarctic waters. We detect whales communicating for several hours in the dataset. Other extensively recorded sources resemble harmonic tremors, and we also identify signals possibly associated with waves set up on the notoriously stormy seas.
Teleseismic P wave coda from oceanic trench and other bathymetric features
NASA Astrophysics Data System (ADS)
Wu, W.; Ni, S.
2012-12-01
Teleseismic P waves are essential for studying the rupture processes of great earthquakes, either with the back-projection method or with finite fault inversion methods involving quantitative waveform modeling. In these studies, P waves are assumed to be direct P waves generated by localized patches of the ruptured fault. However, for some oceanic earthquakes occurring near subduction trenches or mid-ocean ridges, strong signals between P and PP are often observed at teleseismic distances. These P-wave coda signals show strong coherence, and their amplitudes are sometimes comparable with those of the direct P wave, or even higher in some frequency bands. With array analysis, we find that the coda's slowness is very close to that of the direct P wave, suggesting that the coda is generated near the source region. As the earthquakes occur near trenches or mid-ocean ridges, both of which feature rapid variations in bathymetry, the coda waves are very probably generated by surface waves or S waves scattered at the irregular bathymetry. We then use realistic bathymetry data to calculate 3D synthetics, and the coda is well predicted by the synthetics, confirming that the topography/bathymetry is the main source of the coda. The coda waves are strong enough that they may affect imaging of the rupture processes of oceanic earthquakes, so the topography/bathymetry effect should be taken into account. However, these strong coda waves can also be utilized to locate oceanic earthquakes. The 3D synthetics demonstrate that the coda waves depend on both the specific bathymetry and the location of the earthquake. Given the known bathymetry, the earthquake location can be constrained by the coda; for example, the distance between the trench and the earthquake can be determined from the relative arrival time between the P wave and its trench-generated coda. In order to locate earthquakes using the bathymetry, it is indispensable to compute 3D synthetics for all possible horizontal locations and depths of the earthquakes. However, the computation would be very expensive if numerical simulation were used throughout the whole medium. Considering that the complicated structure is only near the source region, we apply ray theory to interface the full wave field from spectral-element simulations and obtain the teleseismic P waves. With this approach, computational efficiency is greatly improved and the relocation of the earthquake can be completed more efficiently. The relocation accuracy can be as good as 10 km for earthquakes near the trench. This provides another, sometimes most favorable, method to locate oceanic earthquakes with ground-truth accuracy.
Sources of Seismic Hazard in British Columbia: What Controls Earthquakes in the Crust?
NASA Astrophysics Data System (ADS)
Balfou, Natalie Joy
This thesis examines processes causing faulting in the North American crust in the northern Cascadia subduction zone. A combination of seismological methods, including source mechanism determination, stress inversion and earthquake relocations are used to determine where earthquakes occur and what forces influence faulting. We also determine if forces that control faulting can be monitored using seismic anisotropy. Investigating the processes that contribute to faulting in the crust is important because these earthquakes pose significant hazard to the large population centres in British Columbia and Washington State. To determine where crustal earthquakes occur we apply double-difference earthquake relocation techniques to events in the Fraser River Valley, British Columbia, and the San Juan Islands, Washington. This technique is used to identify "hidden" active structures using both catalogue and waveform cross-correlation data. Results have significantly reduced uncertainty over routine catalogue locations and show lineations in areas of clustered seismicity. In the Fraser River Valley these lineations or streaks appear to be hidden structures that do not disrupt near-surface sediments; however, in the San Juan Islands the identified lineation can be related to recently mapped surface expressions of faults. To determine forces that influence faulting we investigate the orientation and sources of stress using Bayesian inversion results from focal mechanism data. More than ~600 focal mechanisms from crustal earthquakes are calculated to identify the dominant style of faulting and inverted to estimate the principal stress orientations and the stress ratio. Results indicate the maximum horizontal compressive stress (SHmax) orientation changes with distance from the subduction interface, from margin-normal along the coast to margin-parallel further inland. We relate the margin-normal stress direction to subduction-related strain rates due to the locked interface between the North America and Juan de Fuca plates just west of Vancouver Island. Further from the margin the plates are coupled less strongly and the margin-parallel SHmax relates to the northward push of the Oregon Block. Active faults around the region are generally thrust faults that strike east-west and might accommodate the margin-parallel compression. Finally, we consider whether crustal anisotropy can be used as a stress monitoring tool in this region. We identify sources and variations of crustal anisotropy using shear-wave splitting analysis on local crustal earthquakes. Results show spatial variations in fast directions, with margin-parallel fast directions at most stations and margin-perpendicular fast directions at stations in the northeast of the region. To use seismic anisotropy as a stress indicator requires identifying which stations are primarily influenced by stress. We determine the source of anisotropy at each station by comparing fast directions from shear-wave splitting results to the SHmax orientation. Most stations show agreement between these directions suggesting that anisotropy is stress-related. These stations are further analysed for temporal variations and show variation that could be associated with earthquakes (ML 3-5) and episodic tremor and slip events. The combination of earthquake relocations, source mechanisms, stress and anisotropy is unique and provides a better understanding of faulting and stress in the crust of northern Cascadia.
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall earthquake rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km from some faults.
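The rate comparison described above amounts to evaluating a Gutenberg-Richter relation for M >= 6 and contrasting it with the model rate; the sketch below shows the computation with hypothetical a- and b-values, not the CDMG-USGS parameters.

```python
def gr_cumulative_rate(a_value, b_value, magnitude):
    """Annual rate of events with magnitude >= M from log10 N(>=M) = a - b M."""
    return 10.0 ** (a_value - b_value * magnitude)

# Hypothetical statewide parameters, chosen only so that N(M >= 6) is about one per year.
a, b = 5.4, 0.9
for m in (6.0, 6.5, 7.0):
    print(f"N(M >= {m}) = {gr_cumulative_rate(a, b, m):.2f} per year")
```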
NASA Astrophysics Data System (ADS)
Singh, A. P.; Mishra, O. P.
2015-10-01
In order to understand the processes involved in the genesis of monsoon induced micro to moderate earthquakes after heavy rainfall during the Indian summer monsoon period beneath the 2011 Talala, Saurashtra earthquake (Mw 5.1) source zone, we assimilated 3-D microstructures of the sub-surface rock materials using a data set recorded by the Seismic Network of Gujarat (SeisNetG), India. Crack attributes in terms of crack density (ε), the saturation rate (ξ) and porosity parameter (ψ) were determined from the estimated 3-D sub-surface velocities (Vp, Vs) and Poisson's ratio (σ) structures of the area at varying depths. We distinctly imaged high-ε, high-ξ and low-ψ anomalies at shallow depths, extending up to 9-15 km. We infer that the existence of sub-surface fractured rock matrix connected to the surface from the source zone may have contributed to the changes in differential strain deep down to the crust due to the infiltration of rainwater, which in turn induced micro to moderate earthquake sequence beneath Talala source zone. Infiltration of rainwater during the Indian summer monsoon might have hastened the failure of the rock by perturbing the crustal volume strain of the causative source rock matrix associated with the changes in the seismic moment release beneath the surface. Analyses of crack attributes suggest that the fractured volume of the rock matrix with high porosity and lowered seismic strength beneath the source zone might have considerable influence on the style of fault displacements due to seismo-hydraulic fluid flows. Localized zone of micro-cracks diagnosed within the causative rock matrix connected to the water table and their association with shallow crustal faults might have acted as a conduit for infiltrating the precipitation down to the shallow crustal layers following the fault suction mechanism of pore pressure diffusion, triggering the monsoon induced earthquake sequence beneath the source zone.
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
NASA Astrophysics Data System (ADS)
Jian, Pei-Ru; Hung, Shu-Huei; Meng, Lingsen; Sun, Daoyuan
2017-04-01
The 2016 Mw 6.4 Meinong earthquake struck a previously unrecognized fault zone in the midcrust beneath southern Taiwan and inflicted heavy casualties in the populated Tainan City, about 30 km northwest of the epicenter. Because of its relatively short rupture duration and P wave trains contaminated by large-amplitude depth phases and reverberations generated in the source region, accurate characterization of the rupture process and source properties of such a shallow strong earthquake remains challenging. Here we present a first high-resolution MUltiple SIgnal Classification back-projection source image, using both P and depth-phase sP waves recorded at two large and dense arrays, to understand the source behavior and consequent hazards of this peculiar catastrophic event. The results, further corroborated by directivity analysis, indicate a unilateral rupture propagating northwestward and slightly downward on the shallow NE-dipping fault plane. The source radiation process is primarily characterized by a single peak of 7 s duration, with a total rupture length of 17 km and an average rupture speed of 2.4 km/s. The rupture terminated immediately east of the prominent off-fault aftershock cluster about 20 km northwest of the hypocenter. Synergistic amplification of ground shaking by the directivity and strong excitation of sP and reverberations mainly caused the destruction concentrated in the area further to the northwest, away from the rupture zone.
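Back projection of the kind used above reduces, in its simplest linear form, to shifting each array trace by its predicted travel time and stacking over candidate source points; the sketch below demonstrates that basic stack on synthetic data and omits the MUSIC high-resolution processing applied in the study.

```python
import numpy as np

def back_project(waveforms, travel_times, dt):
    """Linear back projection: align each station on its predicted travel time and stack.
    waveforms: (n_sta, n_samp); travel_times: (n_src, n_sta) in seconds."""
    n_src, n_sta = travel_times.shape
    n_samp = waveforms.shape[1]
    power = np.zeros(n_src)
    for i in range(n_src):
        stack = np.zeros(n_samp)
        for j in range(n_sta):
            shift = int(round(travel_times[i, j] / dt))
            stack += np.roll(waveforms[j], -shift)   # crude alignment; np.roll wraps at the edges
        power[i] = np.sum(stack ** 2)
    return power

# Tiny synthetic test: 5 stations, a pulse whose moveout matches candidate source index 2.
dt, n_samp = 0.05, 400
true_tt = np.array([[1.0, 1.2, 1.4, 1.6, 1.8],
                    [1.5, 1.6, 1.7, 1.8, 1.9],
                    [2.0, 2.0, 2.0, 2.0, 2.0]])
waves = np.zeros((5, n_samp))
for j in range(5):
    waves[j, int(round(true_tt[2, j] / dt))] = 1.0
print(np.argmax(back_project(waves, true_tt, dt)))   # -> 2
```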
NASA Astrophysics Data System (ADS)
Guo, L.; Lin, J.; Yang, H.
2017-12-01
The 11 April 2012 Mw 8.6 earthquake off the coast of Sumatra in the eastern Indian Ocean was the largest strike-slip earthquake ever recorded. The 2012 mainshock and its aftershock sequences were associated with complex slip partitioning and earthquake interactions in an oblique convergent system, within a new plate boundary zone between the Indian and Australian plates. The detailed processes of the earthquake interactions and their correlation with seafloor geological structure, however, are still poorly known. During March-April 2017, an array of broadband OBSs (ocean bottom seismometers) was deployed, for the first time, near the epicentral region of the 2012 earthquake sequence. During post-expedition data processing, we identified 70 global earthquakes from the National Earthquake Information Center (NEIC) catalog that occurred during our OBS deployment period. We then picked P and S waves in the seismic records and analyzed their arrival times. We further identified and analyzed multiple local earthquakes and examined their relationship to the observed seafloor structure (fracture zones, seafloor faults, etc.) and the state of stress in this region of the eastern Indian Ocean. The ongoing analyses of the data obtained from this unique seismic experiment are expected to provide important constraints on the large-scale intraplate deformation in this part of the eastern Indian Ocean.
NASA Astrophysics Data System (ADS)
WANG, X.; Wei, S.; Bradley, K. E.
2017-12-01
Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al., 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-size earthquakes (M > 4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties in horizontal location, depth, and dip angle are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or as reactivation of an old normal fault buried beneath the forearc basin. We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along-strike variation. The refined megathrust geometry will improve our understanding of the tectonic setting in this region and place further constraints on rupture processes of the hazardous megathrust.
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all earthquakes of considerable magnitude. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGFs). For frequencies below 0.5 Hz the simulations are obtained using Synthetic Green's Functions (SGFs), which are synthetic seismograms calculated by an explicit 2D/3D elastic finite-difference wave propagation routine. Using a range of rupture scenarios for all earthquakes of considerable magnitude throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazard levels for all studied sites in the Marmara region.
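Broadband synthetics of the kind described above are typically assembled by merging the low-frequency SGF synthetic and the high-frequency EGF simulation at the 0.5 Hz crossover; the sketch below shows that merging step with matched low-pass and high-pass filters, using placeholder traces rather than actual simulations.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def merge_broadband(lf_trace, hf_trace, fs, crossover=0.5, order=4):
    """Combine a low-frequency synthetic and a high-frequency simulation at a crossover frequency."""
    b_lo, a_lo = butter(order, crossover, btype="low", fs=fs)
    b_hi, a_hi = butter(order, crossover, btype="high", fs=fs)
    return filtfilt(b_lo, a_lo, lf_trace) + filtfilt(b_hi, a_hi, hf_trace)

fs = 50.0                                   # samples per second (placeholder)
t = np.arange(0, 60.0, 1.0 / fs)
lf = np.sin(2 * np.pi * 0.1 * t)            # placeholder for the SGF-style synthetic (< 0.5 Hz)
hf = 0.1 * np.random.default_rng(0).standard_normal(t.size)  # placeholder for the EGF-style trace
broadband = merge_broadband(lf, hf, fs)
print(broadband.shape)
```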
Very-long-period volcanic earthquakes beneath Mammoth Mountain, California
Hill, D.P.; Dawson, P.; Johnston, M.J.S.; Pitt, A.M.; Biasi, G.; Smith, K.
2002-01-01
Detection of three very-long-period (VLP) volcanic earthquakes beneath Mammoth Mountain emphasizes that magmatic processes continue to be active beneath this young, eastern California volcano. These VLP earthquakes, which occurred in October 1996 and July and August 2000, appear as bell-shaped pulses with durations of one to two minutes on a nearby borehole dilatometer and on the displacement seismogram from a nearby broadband seismometer. They are accompanied by rapid-fire sequences of high-frequency (HF) earthquakes and several long-period (LP) volcanic earthquakes. The limited VLP data are consistent with a CLVD source at a depth of ~3 km beneath the summit, which we interpret as resulting from a slug of fluid (CO2-saturated magmatic brine or perhaps basaltic magma) moving into a crack.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
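The single- versus double-source decision described above rests on an AIC comparison; the sketch below shows the generic AIC computation for two least-squares fits. The misfit values and parameter counts are placeholders, not the actual W-phase implementation.

```python
import math

def aic_least_squares(residual_sum_squares, n_data, n_params):
    """AIC for a Gaussian least-squares fit: n ln(RSS/n) + 2k."""
    return n_data * math.log(residual_sum_squares / n_data) + 2 * n_params

# Placeholder misfits from hypothetical single- and double-point-source fits.
n = 5000                                                 # number of waveform samples used
aic_single = aic_least_squares(820.0, n, n_params=6)     # hypothetical parameter count, one source
aic_double = aic_least_squares(640.0, n, n_params=12)    # hypothetical parameter count, two sources

best = "double" if aic_double < aic_single else "single"
print(f"AIC single = {aic_single:.1f}, AIC double = {aic_double:.1f} -> prefer {best} source")
```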
A GIS-based time-dependent seismic source modeling of Northern Iran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous studies and reports are reviewed to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
Sources of shaking and flooding during the Tohoku-Oki earthquake: a mixture of rupture styles
Wei, Shengji; Graves, Robert; Helmberger, Don; Avouac, Jean-Philippe; Jiang, Junle
2012-01-01
Modeling strong ground motions from great subduction zone earthquakes is one of the great challenges of computational seismology. To separate the rupture characteristics from complexities caused by 3D sub-surface geology requires an extraordinary data set such as provided by the recent Mw9.0 Tohoku-Oki earthquake. Here we combine deterministic inversion and dynamically guided forward simulation methods to model over one thousand high-rate GPS and strong motion observations from 0 to 0.25 Hz across the entire Honshu Island. Our results display distinct styles of rupture with a deeper generic interplate event (~Mw8.5) transitioning to a shallow tsunamigenic earthquake (~Mw9.0) at about 25 km depth in a process driven by a strong dynamic weakening mechanism, possibly thermal pressurization. This source model predicts many important features of the broad set of seismic, geodetic and seafloor observations providing a major advance in our understanding of such great natural hazards.
How fault geometry controls earthquake magnitude
NASA Astrophysics Data System (ADS)
Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.
2016-12-01
Recent large megathrust earthquakes, such as the Mw 9.3 Sumatra-Andaman earthquake in 2004 and the Mw 9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.
Shakal, A.; Graizer, V.; Huang, M.; Borcherdt, R.; Haddadi, H.; Lin, K.-W.; Stephens, C.; Roffers, P.
2005-01-01
The Parkfield 2004 earthquake yielded the most extensive set of strong-motion data in the near-source region of a magnitude 6 earthquake yet obtained. The recordings of acceleration and volumetric strain provide an unprecedented document of the near-source seismic radiation for a moderate earthquake. The spatial density of the measurements along the fault zone and in the linear arrays perpendicular to the fault is expected to provide an exceptional opportunity to develop improved models of the rupture process. The closely spaced measurements should help infer the temporal and spatial distribution of the rupture process at much higher resolution than previously possible. Preliminary analyses of the peak acceleration data presented herein show that the motions vary significantly along the rupture zone, from 0.13 g to more than 2.5 g, with a map of the values showing that the larger values are concentrated in three areas. Particle motions at the near-fault stations are consistent with bilateral rupture. Fault-normal pulses similar to those observed in recent strike-slip earthquakes are apparent at several of the stations. The attenuation of peak ground acceleration with distance is more rapid than that indicated by some standard relationships but adequately fits others. Evidence for directivity in the peak acceleration data is not strong. Several stations very near, or over, the rupturing fault recorded relatively low accelerations. These recordings may provide a quantitative basis for understanding observations of low near-fault shaking damage that have been reported in other large strike-slip earthquakes.
NASA Astrophysics Data System (ADS)
Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.
2011-12-01
The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread over approximately 550 km by 260 km, with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the source rupture and the initiation of the tsunami due to coseismic deformation are considered to be very complicated. In order to understand this complicated sequence of seismic waves, coseismic deformation and tsunami, we proposed a unified approach for total modeling of earthquake-induced phenomena in a single numerical scheme based on a finite-difference method simulation (Maeda and Furumura, 2011). This simulation model solves the equations of motion based on linear elastic theory, with equilibrium between quasi-static pressure and gravity in the water column. The tsunami height is obtained from this simulation as the vertical displacement of the ocean surface. In order to simulate seismic waves, ocean acoustics, coseismic deformation, and the tsunami from the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km and 120 km in depth, discretized with grid intervals of 1 km in the horizontal directions and 0.25 km in the vertical direction. We adopt the source-rupture model proposed by Lee et al. (2011), obtained from a joint inversion of teleseismic, near-field strong motion, and coseismic deformation data. For conducting such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved a good speed-up on up to 8192 core processors with a parallel efficiency of 99.839%. The simulation result clearly demonstrates the process in which the seismic wave radiates from the complicated source rupture over the fault plane and propagates through the heterogeneous structure of northern Japan. The generation of the tsunami from coseismic ground deformation of the sea floor and its subsequent propagation are also well demonstrated. The simulation further shows that a very large slip, up to 40 m, at the shallow plate boundary near the trench pushes up the sea floor as the source rupture propagates, and the highly elevated sea surface gradually starts to propagate as a tsunami under gravity. The simulated vertical-component displacement waveform matches very consistently the record of the ocean-bottom pressure gauge installed just above the source fault area (Maeda et al., 2011). Strong reverberation of ocean-acoustic waves between the sea surface and the sea bottom, particularly near the Japan Trench, is confirmed in the simulation for a long time after the source rupture ends. Accordingly, long wavetrains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake
NASA Astrophysics Data System (ADS)
Miyake, H.
2012-12-01
The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. For understanding the rupture dynamics of this earthquake, it is extremely important to investigate wave-based source inversions for various frequency bands. The frequency-dependent characteristics above have been derived from teleseismic analyses. This study attempts to infer frequency-dependent rupture processes from strong motion waveforms of K-NET and KiK-net stations. The observations suggest three or more S-wave phases, and ground velocities at several near-source stations show different arrivals of their long- and short-period components. We performed complex source spectral inversions with frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and the stochastic summation of waveforms using empirical Green's functions. Due to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km long by 180 km wide, with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter, which generated sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of imposing or relaxing rupture front constraints on the inverted solutions, we will discuss distributions of slip-velocity intensity and the progression of wave generation with increasing frequency.
Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field
Paisley, John
2018-01-01
The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity.
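Clustering of source spectra along the lines described above can be prototyped with standard tools by normalizing each spectrum and grouping by shape; the sketch below uses k-means on random placeholder spectra and is not the specific machine-learning model of the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Placeholder "source spectra": rows are events, columns are frequency bins.
n_events, n_bins = 500, 32
spectra = np.abs(rng.standard_normal((n_events, n_bins))).cumsum(axis=1)

# Normalize each spectrum so the clustering responds to spectral shape, not absolute amplitude.
features = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # events per cluster; real work would track these memberships through time
```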
Insight into the rupture process of a rare tsunami earthquake from near-field high-rate GPS
NASA Astrophysics Data System (ADS)
Macpherson, K. A.; Hill, E. M.; Elosegui, P.; Banerjee, P.; Sieh, K. E.
2011-12-01
We investigated the rupture duration and velocity of the October 25, 2010 Mentawai earthquake by examining high-rate GPS displacement data. This Mw=7.8 earthquake appears to have ruptured either an up-dip part of the Sumatran megathrust or a fore-arc splay fault, and produced tsunami run-ups on nearby islands that were out of proportion with its magnitude. It has been described as a so-called "slow tsunami earthquake", characterised by a dearth of high-frequency signal and long rupture duration in low-strength, near-surface media. The event was recorded by the Sumatran GPS Array (SuGAr), a network of high-rate (1 sec) GPS sensors located on the nearby islands of the Sumatran fore-arc. For this study, the 1 sec time series from 8 SuGAr stations were selected for analysis due to their proximity to the source and high-quality recordings of both static displacements and dynamic waveforms induced by surface waves. The stations are located at epicentral distances of between 50 and 210 km, providing a unique opportunity to observe the dynamic source processes of a tsunami earthquake from near-source, high-rate GPS. We estimated the rupture duration and velocity by simulating the rupture using the spectral finite-element method SPECFEM and comparing the synthetic time series to the observed surface waves. A slip model from a previous study, derived from the inversion of GPS static offsets and tsunami data, and the CRUST2.0 3D velocity model were used as inputs for the simulations. Rupture duration and velocity were varied for a suite of simulations in order to determine the parameters that produce the best-fitting waveforms.
NASA Astrophysics Data System (ADS)
Williamson, A.; Cummins, P. R.; Newman, A. V.; Benavente, R. F.
2016-12-01
The 2015 Illapel, Chile earthquake was recorded by a wide range of seismic, geodetic and oceanographic instruments. The USGS-assigned magnitude 8.3 earthquake produced a tsunami that was recorded trans-oceanically at both tide gauges and deep-water tsunami pressure sensors. The event also generated surface deformation along the Chilean coast that was recovered through ascending and descending paths of the Sentinel-1A satellite. Additionally, seismic waves were recorded across various global seismic networks. While the determination of the rupture source through seismic and geodetic means is now commonplace and has been studied extensively in this fashion for the Illapel event, the use of tsunami datasets in the inversion process, rather than purely as a forward validation of models, is less common. In this study, we evaluate the use of both near-field and far-field tsunami pressure gauges in the source inversion process, examining their contribution to seismic and geodetic joint inversions, as well as the contribution of dispersive and elastic loading parameters to the numerical tsunami propagation. We determine that the inclusion of near-field tsunami pressure gauges assists in resolving the degree of slip in the near-trench environment, where purely geodetic inversions lose most resolvability. The inclusion of a far-field dataset has the potential to add further confidence to tsunami inversions, however at a high computational cost. When applied to the Illapel earthquake, this added near-trench resolvability leads to a better estimation of tsunami arrival times at near-field gauges and contributes to understanding the wide variation in tsunamigenic slip present along the highly active Peru-Chile trench.
NASA Astrophysics Data System (ADS)
Kim, W.; Seeber, L.; Armbruster, J. G.
2002-12-01
On April 20, 2002, a Mw 5 earthquake occurred near the town of Au Sable Forks, northeastern Adirondacks, New York. The quake caused moderate damage (MMI VII) around the epicentral area and was well recorded by over 50 broadband stations at distances of 70 to 2000 km in eastern North America. Regional broadband waveform data are used to determine the source mechanism and focal depth using a moment tensor inversion technique. The source mechanism indicates predominantly thrust faulting along a 45°-dipping fault plane striking due south. The mainshock was followed by at least three strong aftershocks with local magnitude (ML) greater than 3, and about 70 aftershocks were detected and located in the first three months by a 12-station portable seismographic network. The aftershock distribution clearly ties the mainshock rupture to the westerly dipping fault plane at a depth of 11 to 12 km. Preliminary analysis of the aftershock waveform data indicates that the orientation of the P-axis rotated 90° from that of the mainshock, suggesting a complex source process of the earthquake sequence. We achieved an important milestone in monitoring earthquakes and evaluating their hazards through rapid cross-border (Canada-US) and cross-regional (Central US-Northeastern US) collaborative efforts. For example, staff at Instrument Software Technology, Inc., near the epicentral area, joined Lamont-Doherty staff and deployed the first portable station in the epicentral area; CERI dispatched two of their technical staff to the epicentral area with four accelerometers and a broadband seismograph; the IRIS/PASSCAL facility shipped three digital seismographs and ancillary equipment within one day of the request; and the POLARIS Consortium, Canada, sent a field crew of three with a near-real-time, satellite-telemetry-based earthquake monitoring system. The POLARIS station KSVO, powered by a solar panel and batteries, was already transmitting data to the central hub in London, Ontario, Canada within a day after the field crew arrived in the Au Sable Forks area. This collaboration allowed us to maximize the scarce resources available for monitoring this damaging earthquake and its aftershocks in the northeastern U.S.
Earthquake forecasting studies using radon time series data in Taiwan
NASA Astrophysics Data System (ADS)
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data as precursory signals for impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate our data with our working procedure, we use the open-source web application stack AMP (Apache, MySQL, and PHP) to create a website that effectively displays and helps us manage the real-time database.
NASA Astrophysics Data System (ADS)
Chao, Kevin; Peng, Zhigang; Hsu, Ya-Ju; Obara, Kazushige; Wu, Chunquan; Ching, Kuo-En; van der Lee, Suzan; Pu, Hsin-Chieh; Leu, Peih-Lin; Wech, Aaron
2017-07-01
Deep tectonic tremor, which is extremely sensitive to small stress variations, could be used to monitor fault zone processes during large earthquake cycles and aseismic processes before large earthquakes. In this study, we develop an algorithm for the automatic detection and location of tectonic tremor beneath the southern Central Range of Taiwan and examine the spatiotemporal relationship between tremor and the 4 March 2010 ML6.4 Jiashian earthquake, located about 20 km from active tremor sources. We find that tremor in this region has a relatively short duration, short recurrence time, and no consistent correlation with surface GPS data. We find a short-term increase in the tremor rate 19 days before the Jiashian main shock, and around the time when the tremor rate began to rise one GPS station recorded a flip in its direction of motion. We hypothesize that tremor is driven by a slow-slip event that preceded the occurrence of the shallower Jiashian main shock, even though the inferred slip is too small to be observed by all GPS stations. Our study shows that tectonic tremor may reflect stress variation during the prenucleation process of a nearby earthquake.
Local earthquake interferometry of the IRIS Community Wavefield Experiment, Grant County, Oklahoma
NASA Astrophysics Data System (ADS)
Eddy, A. C.; Harder, S. H.
2017-12-01
The IRIS Community Wavefield Experiment was deployed in Grant County, located in north-central Oklahoma, from June 21 to July 27, 2016. Data from all nodes were recorded at 250 samples per second between June 21 and July 20 along three lines. The main line was 12.5 km long, oriented east-west, and consisted of 129 nodes. The other two lines were 5.5 km long, oriented north-south, with 49 nodes each. During this time, approximately 150 earthquakes of magnitude 1.0 to 4.4 were recorded in the surrounding counties of Oklahoma and Kansas. Ideally, sources for local earthquake interferometry should be near-surface events that produce high-frequency body waves. Unlike ambient noise seismic interferometry (ANSI), which uses days, weeks, or even months of continuously recorded seismic data, local earthquake interferometry uses only short segments (~2 min) of data. Interferometry in this case is based on the cross-correlation of body-wave surface multiples, where the event source is translated to a reference station in the array, which acts as a virtual source. Multiples recorded between the reference station and all other stations can be cross-correlated to produce a clear seismic trace. This process will be repeated with every node acting as the reference station for all events. The resulting shot gather will then be processed and analyzed for quality and accuracy. Successful application of local earthquake interferometry will produce a crustal image with identifiable sedimentary and basement reflectors and possibly a Moho reflection. Economically, local earthquake interferometry could lower the time and resource cost of active and passive seismic surveys while improving subsurface image quality in urban settings or areas of limited access. The applications of this method can potentially be expanded with the inclusion of seismic events with a magnitude of 1.0 or lower.
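The virtual-source construction described above is, at its core, a cross-correlation of short event windows between a reference node and every other node; the sketch below builds one trace of such a virtual gather from synthetic placeholder data.

```python
import numpy as np
from scipy.signal import correlate

def virtual_source_trace(reference, other, dt):
    """Cross-correlate a reference-node recording with another node; the positive lags
    approximate the response at 'other' to a virtual source at the reference node."""
    xcorr = correlate(other, reference, mode="full")
    lags = np.arange(-len(reference) + 1, len(other)) * dt
    keep = lags >= 0.0
    return lags[keep], xcorr[keep]

# Synthetic 2-minute event windows at 250 samples/s sharing a wavefield delayed by 0.8 s.
dt = 1.0 / 250.0
n = int(120.0 / dt)
rng = np.random.default_rng(0)
ref = rng.standard_normal(n)
oth = np.roll(ref, int(round(0.8 / dt))) + 0.2 * rng.standard_normal(n)
lags, trace = virtual_source_trace(ref, oth, dt)
print(f"peak lag ~ {lags[np.argmax(trace)]:.2f} s")   # ~0.8 s, the imposed delay
```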
Constraints on the source parameters of low-frequency earthquakes on the San Andreas Fault
Thomas, Amanda M.; Beroza, Gregory C.; Shelly, David R.
2016-01-01
Low-frequency earthquakes (LFEs) are small repeating earthquakes that occur in conjunction with deep slow slip. Like typical earthquakes, LFEs are thought to represent shear slip on crustal faults, but when compared to earthquakes of the same magnitude, LFEs are depleted in high-frequency content and have lower corner frequencies, implying longer duration. Here we exploit this difference to estimate the duration of LFEs on the deep San Andreas Fault (SAF). We find that the M ~ 1 LFEs have typical durations of ~0.2 s. Using the annual slip rate of the deep SAF and the average number of LFEs per year, we estimate average LFE slip rates of ~0.24 mm/s. When combined with the LFE magnitude, this number implies a stress drop of ~10^4 Pa, 2 to 3 orders of magnitude lower than ordinary earthquakes, and a rupture velocity of 0.7 km/s, 20% of the shear wave speed. Typical earthquakes are thought to have rupture velocities of ~80–90% of the shear wave speed. Together, the slow rupture velocity, low stress drops, and slow slip velocity explain why LFEs are depleted in high-frequency content relative to ordinary earthquakes and suggest that LFE sources represent areas capable of relatively higher slip speed in deep fault zones. Additionally, changes in rheology may not be required to explain both LFEs and slow slip; the same process that governs the slip speed during slow earthquakes may also limit the rupture velocity of LFEs.
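A back-of-the-envelope check of the numbers quoted above can be written out explicitly. The sketch below assumes a circular (Eshelby) crack, a rigidity of 30 GPa, and the standard moment-magnitude relation; these assumptions are mine, not parameters stated by the authors, and the result is only meant to show that a ~0.2 s, M ~ 1 source is consistent with a stress drop of order 10^4 Pa.

```python
import numpy as np

# Assumed values (illustrative, not from the paper)
Mw = 1.0             # moment magnitude of a typical LFE
duration = 0.2       # s, LFE duration quoted in the abstract
slip_rate = 0.24e-3  # m/s, average LFE slip rate quoted in the abstract
mu = 30e9            # Pa, assumed rigidity

M0 = 10 ** (1.5 * Mw + 9.1)                      # N*m, standard Mw-M0 relation
slip = slip_rate * duration                      # average slip during one LFE
area = M0 / (mu * slip)                          # fault area from M0 = mu * A * D
radius = np.sqrt(area / np.pi)                   # circular-crack radius
stress_drop = (7.0 / 16.0) * M0 / radius ** 3    # Eshelby circular crack
rupture_velocity = radius / duration             # crude rupture-speed estimate

print(f"M0 ~ {M0:.2e} N*m, radius ~ {radius:.0f} m")
print(f"stress drop ~ {stress_drop:.1e} Pa")             # of order 1e4 Pa
print(f"rupture velocity ~ {rupture_velocity/1e3:.2f} km/s")
```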
von Huene, Roland E.; Miller, John J.; Weinrebe, Wilhelm
2012-01-01
Three destructive earthquakes along the Alaska subduction zone sourced transoceanic tsunamis during the past 70 years. Since it is reasoned that past rupture areas might again source tsunamis in the future, we studied potential asperities and barriers in the subduction zone by examining Quaternary Gulf of Alaska plate history, geophysical data, and morphology. We relate the aftershock areas to subducting lower plate relief and dissimilar materials in the seismogenic zone in the 1964 Kodiak and adjacent 1938 Semidi Islands earthquake segments. In the 1946 Unimak earthquake segment, the exposed lower plate seafloor lacks major relief that might organize great earthquake rupture. However, the upper plate contains a deep transverse-trending basin and basement ridges associated with the Eocene continental Alaska convergent margin transition to the Aleutian island arc. These upper plate features are sufficiently large to have affected rupture propagation. In addition, massive slope failure in the Unimak area may explain the local 42-m-high 1946 tsunami runup. Although Quaternary geologic and tectonic processes included accretion to form a frontal prism, the study of seismic images, samples, and continental slope physiography shows a previous history of tectonic erosion. Implied asperities and barriers in the seismogenic zone could organize future great earthquake rupture.
NASA Astrophysics Data System (ADS)
Pedraza, P.; Poveda, E.; Blanco Chia, J. F.; Zahradnik, J.
2013-05-01
On September 30th, 2012, an earthquake of magnitude Mw 7.2 occurred at a depth of ~170 km in the southeast of Colombia. This seismic event is associated with the Nazca plate drifting eastward relative to the South American plate. The distribution of seismicity obtained by the National Seismological Network of Colombia (RSNC) since 1993 shows a segmented subduction zone with varying dip angles. The earthquake occurred in a seismic gap zone of intermediate depth. The recent deployment of broadband seismic stations in Colombia, as part of the Colombian Seismological Network operated by the Colombian Geological Survey, has provided high-quality data to study the rupture process. We estimated the moment tensor, the centroid position, and the source time function. The parameters were obtained by inverting waveforms recorded by the RSNC at distances of 100 km to 800 km, modeled at 0.01-0.09 Hz using different 1D crustal models and taking advantage of the ISOLA code. The DC percentage of the earthquake is very high (~90%). The focal mechanism is mostly normal, hence the determination of the fault plane is challenging. An attempt to determine the fault plane was made based on the mutual relative position of the centroid and hypocenter (H-C method). Studies in progress are devoted to searching for possible complexity of the fault rupture process (total duration of about 15 seconds), quantified by multiple-point-source models.
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship between earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of tsunamis. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
NASA Astrophysics Data System (ADS)
Rolland, Lucie M.; Vergnolle, Mathilde; Nocquet, Jean-Mathieu; Sladen, Anthony; Dessa, Jean-Xavier; Tavakoli, Farokh; Nankali, Hamid Reza; Cappa, FréDéRic
2013-06-01
It has previously been suggested that ionospheric perturbations triggered by large dip-slip earthquakes might offer additional source parameter information compared to the information gathered from land observations. Based on 3D modeling of GPS- and GLONASS-derived total electron content signals recorded during the 2011 Van earthquake (thrust, intra-plate event, Mw = 7.1, Turkey), we confirm that coseismic ionospheric signals do contain important information about the earthquake source, namely its slip mode. Moreover, we show that part of the ionospheric signal (initial polarity and amplitude distribution) is not related to the earthquake source, but is instead controlled by the geomagnetic field and the geometry of the Global Navigation Satellite System satellites constellation. Ignoring these non-tectonic effects would lead to an incorrect description of the earthquake source. Thus, our work emphasizes the added caution that should be used when analyzing ionospheric signals for earthquake source studies.
McGarr, Arthur F.; Boettcher, M.; Fletcher, Jon Peter B.; Sell, Russell; Johnston, Malcolm J.; Durrheim, R.; Spottiswoode, S.; Milev, A.
2009-01-01
For one week during September 2007, we deployed a temporary network of field recorders and accelerometers at four sites within two deep, seismically active mines. The ground-motion data, recorded at 200 samples/sec, are well suited to determining source and ground-motion parameters for the mining-induced earthquakes within and adjacent to our network. Four earthquakes with magnitudes close to 2 were recorded with high signal/noise at all four sites. Analysis of seismic moments and peak velocities, in conjunction with the results of laboratory stick-slip friction experiments, was used to estimate source processes that are key to understanding source physics and to assessing underground seismic hazard. The maximum displacements on the rupture surfaces can be estimated from the parameter R*v_peak, where v_peak is the peak ground velocity at a given recording site and R is the hypocentral distance. For each earthquake, the maximum slip and seismic moment can be combined with results from laboratory friction experiments to estimate the maximum slip rate within the rupture zone. Analysis of the four M 2 earthquakes recorded during our deployment and one of special interest recorded by the in-mine seismic network in 2004 revealed maximum slips ranging from 4 to 27 mm and maximum slip rates from 1.1 to 6.3 m/sec. Applying the same analyses to an M 2.1 earthquake within a cluster of repeating earthquakes near the San Andreas Fault Observatory at Depth site, California, yielded similar results for maximum slip and slip rate, 14 mm and 4.0 m/sec.
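The quantity R*v_peak referred to above is just the product of hypocentral distance and peak ground velocity at each site; the sketch below computes it station by station and converts the largest value to a maximum-slip estimate through a proportionality constant. The station values and the constant are placeholders for illustration only; they are not the calibration used by the authors.

```python
import numpy as np

# Hypothetical station observations: hypocentral distance R (m) and
# peak ground velocity v_peak (m/s). Values are illustrative only.
stations = {
    "S1": (120.0, 0.015),
    "S2": (340.0, 0.006),
    "S3": (550.0, 0.004),
}

# R * v_peak per station (m^2/s)
rv = {name: R * v for name, (R, v) in stations.items()}
rv_max = max(rv.values())

# Maximum slip assumed proportional to max(R * v_peak); the constant C (s/m)
# is a placeholder standing in for an empirical/laboratory calibration.
C = 1.0e-2
max_slip = C * rv_max

print(rv)
print(f"max(R*v_peak) = {rv_max:.2f} m^2/s -> max slip ~ {max_slip*1e3:.1f} mm")
```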
Strong Ground Motion Prediction By Composite Source Model
NASA Astrophysics Data System (ADS)
Burjanek, J.; Irikura, K.; Zahradnik, J.
2003-12-01
A composite source model, incorporating subevents of different sizes, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled either as a finite or a point source, and the differences between these choices are shown. The final slip and duration of each subevent are related to its characteristic dimension using constant stress-drop scaling. The absolute value of the subevents' stress drop is a free parameter. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model. The estimation of the subevents' stress drop is based on fitting empirical attenuation relations for PGA and PGV, as they represent robust information on strong ground motion caused by earthquakes, including both path and source effects. We use the 2000 M6.6 Western Tottori, Japan, earthquake as a validation event, providing a comparison between predicted and observed waveforms.
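The statement that the number of subevents larger than R scales as R^-2 corresponds to a power-law (fractal dimension 2) size distribution. The sketch below draws subevent radii from such a distribution by inverse-transform sampling and stops when the summed subevent area reaches the target fault area; the minimum radius and target area are illustrative assumptions, and the non-overlap constraint of the actual model is omitted for brevity.

```python
import numpy as np

def draw_subevents(target_area, r_min, seed=0):
    """Draw circular subevent radii with N(>r) proportional to r**-2.

    Inverse-transform sampling: if u ~ Uniform(0, 1), then r = r_min / sqrt(u)
    has a complementary cumulative distribution proportional to r**-2.
    Radii are accumulated until their total area reaches target_area.
    """
    rng = np.random.default_rng(seed)
    radii, area = [], 0.0
    while area < target_area:
        r = r_min / np.sqrt(rng.uniform())
        radii.append(r)
        area += np.pi * r ** 2
    return np.array(radii)

# Example: a 20 km x 10 km fault plane, minimum subevent radius 200 m
radii = draw_subevents(target_area=20e3 * 10e3, r_min=200.0)
print(len(radii), "subevents, largest radius", f"{radii.max():.0f} m")
```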
a_rms AND SEISMIC SOURCE STUDIES.
Hanks, T.C.
1984-01-01
This paper briefly summarizes some recent developments in studies of seismic source parameter estimation, emphasizing the essential similarities between mining-induced seismogenic failure and naturally occurring, tectonically driven earthquakes. The root-mean-square acceleration, a_rms, shows much promise as an observational measure of high-frequency ground motion; it is very stable observationally, is insensitive to radiation pattern, and can be related linearly to the dynamic stress differences arising in the faulting process. To interpret a_rms correctly, however, requires knowledge of f_max, the high-frequency band limitation of the radiated field of earthquakes. As a practical matter, f_max can be due to any number of causes, but an essential ambiguity is whether or not f_max can arise from source properties alone. The interaction of the aftershocks of the Oroville, California, earthquake illustrates how a_rms stress drops may be connected to detailed seismicity patterns.
Shallow very-low-frequency earthquakes accompany slow slip events in the Nankai subduction zone.
Nakano, Masaru; Hori, Takane; Araki, Eiichiro; Kodaira, Shuichi; Ide, Satoshi
2018-03-14
Recent studies of slow earthquakes along plate boundaries have shown that tectonic tremor, low-frequency earthquakes, very-low-frequency events (VLFEs), and slow-slip events (SSEs) often accompany each other and appear to share common source faults. However, the source processes of slow events occurring in the shallow part of plate boundaries are not well known because seismic observations have been limited to land-based stations, which offer poor resolution beneath offshore plate boundaries. Here we use data obtained from seafloor observation networks in the Nankai trough, southwest of Japan, to investigate shallow VLFEs in detail. Coincident with the VLFE activity, signals indicative of shallow SSEs were detected by geodetic observations at seafloor borehole observatories in the same region. We find that the shallow VLFEs and SSEs share common source regions and almost identical time histories of moment release. We conclude that these slow events arise from the same fault slip and that VLFEs represent relatively high-frequency fluctuations of slip during SSEs.
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method, BackTrackBB (Poiata et al., 2016), that allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremor) in different tectonic environments using continuous seismic records. The method exploits multi-scale, frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the nonstationary time series by means of higher-order statistics or energy-envelope characteristic functions. This signal processing is designed to detect signal transients in time, of different scales and a priori unknown predominant frequency, potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremor), and to improve the performance and robustness of the detection-and-location step. The initial detection and location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the three-component records, makes use of P- and S-phase characteristic functions extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
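As a highly simplified illustration of the kind of delay-and-stack imaging described above (not the actual BackTrackBB implementation, which stacks station-pair time-delay likelihood functions), the sketch below back-projects station characteristic functions onto candidate source nodes using precomputed travel times and returns the node and origin-time sample with the largest stack. Array shapes, the travel-time table, and the sampling rate are assumptions.

```python
import numpy as np

def backproject(char_funcs, travel_times, dt):
    """Delay-and-stack imaging of a seismic source.

    char_funcs   : (n_stations, n_samples) characteristic functions
                   (e.g., smoothed envelopes or kurtosis-based CFs).
    travel_times : (n_nodes, n_stations) theoretical travel times (s)
                   from each candidate grid node to each station.
    Returns the best node index and origin-time sample index.
    """
    n_nodes, n_sta = travel_times.shape
    n_samples = char_funcs.shape[1]
    stack = np.zeros((n_nodes, n_samples))
    shifts = np.round(travel_times / dt).astype(int)
    for k in range(n_nodes):
        for s in range(n_sta):
            # align station s to candidate origin times at node k
            stack[k, : n_samples - shifts[k, s]] += char_funcs[s, shifts[k, s]:]
    best_node, best_t0 = np.unravel_index(np.argmax(stack), stack.shape)
    return best_node, best_t0

# Tiny synthetic example: 5 stations, 3 candidate nodes, 100 s at 10 sps
rng = np.random.default_rng(1)
cf = rng.random((5, 1000)) * 0.1
tt = np.array([[3.0, 5.0, 7.0, 4.0, 6.0],
               [2.0, 6.0, 8.0, 3.0, 5.0],
               [4.0, 4.0, 6.0, 5.0, 7.0]])
for s in range(5):                      # plant a coherent transient for node 0
    onset = int((20.0 + tt[0, s]) / 0.1)
    cf[s, onset:onset + 10] += 1.0
print(backproject(cf, tt, dt=0.1))      # expect node 0, t0 near sample 200 (20 s)
```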
NASA Astrophysics Data System (ADS)
Imai, K.; Sugawara, D.; Takahashi, T.
2017-12-01
The large flows caused by tsunamis transport sediment from the beach and form tsunami deposits on land and in coastal lakes; coastal lakes in particular preserve tsunami deposits in an undisturbed state. In a field survey of coastal lakes facing the Nankai Trough, Okamura & Matsuoka (2012) identified tsunami deposits attributable to the past eight Nankai Trough megathrust earthquakes. The environment in coastal lakes is stably calm and favorable for deposit preservation compared with other topographic settings such as plains, so the recurrence intervals of megathrust earthquakes and tsunamis can potentially be resolved there with high resolution; it has also been pointed out that small events undetectable on plains could be distinguished (Sawai, 2012). Taking the topographic conditions of coastal lakes into account, various aspects of past tsunamis are expected to be elucidated from the relationship between the erosion-and-sedimentation process at the lake bottom and the external forcing of the tsunami. In this research, a numerical examination based on a tsunami sediment transport model (Takahashi et al., 1999) was carried out for the Ryujin-ike pond site in Oita, Japan, where a tsunami deposit has been identified, and deposit migration analysis was conducted for the distribution of tsunami deposits from historical Nankai Trough earthquakes. Tsunami source conditions can then be investigated by comparing the observed data with the computed tsunami deposit distribution. It is difficult to clarify the details of a tsunami source from indistinct paleogeographic information; however, this result shows that combining the tsunami deposit distribution in lakes with the computations can constrain the scale of the tsunami source.
NASA Astrophysics Data System (ADS)
Melgar, Diego; Geng, Jianghui; Crowell, Brendan W.; Haase, Jennifer S.; Bock, Yehuda; Hammond, William C.; Allen, Richard M.
2015-07-01
Real-time high-rate geodetic data have been shown to be useful for rapid earthquake response systems during medium to large events. The 2014 Mw6.1 Napa, California earthquake is important because it provides an opportunity to study an event at the lower threshold of what can be detected with GPS. We show the results of GPS-only earthquake source products such as peak ground displacement magnitude scaling, centroid moment tensor (CMT) solution, and static slip inversion. We also highlight the retrospective real-time combination of GPS and strong motion data to produce seismogeodetic waveforms that have higher precision and longer period information than GPS-only or seismic-only measurements of ground motion. We show their utility for rapid kinematic slip inversion and conclude that it would have been possible, with current real-time infrastructure, to determine the basic features of the earthquake source. We supplement the analysis with strong motion data collected close to the source to obtain an improved postevent image of the source process. The model reveals unilateral fast propagation of slip to the north of the hypocenter with a delayed onset of shallow slip. The source model suggests that the multiple strands of observed surface rupture are controlled by the shallow soft sediments of Napa Valley and do not necessarily represent the intersection of the main faulting surface and the free surface. We conclude that the main dislocation plane is westward dipping and should intersect the surface to the east, either where the easternmost strand of surface rupture is observed or at the location where the West Napa fault has been mapped in the past.
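One of the GPS-only products mentioned above, peak ground displacement (PGD) magnitude scaling, is commonly written as log10(PGD) = A + B*Mw + C*Mw*log10(R), which can be inverted for magnitude from observed PGD and hypocentral distance. The sketch below solves this relation for Mw at each station and averages the results; the regression coefficients and station values are placeholders for illustration, not the calibrated coefficients used in the study.

```python
import numpy as np

# Placeholder regression coefficients for
# log10(PGD[cm]) = A + B*Mw + C*Mw*log10(R[km]); illustrative only.
A, B, C = -5.0, 1.05, -0.14

def mw_from_pgd(pgd_cm, r_km):
    """Invert the PGD scaling relation for moment magnitude at one station."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

# Hypothetical station observations: peak ground displacement (cm), distance (km)
obs = [(4.2, 12.0), (2.5, 25.0), (1.1, 60.0)]
estimates = [mw_from_pgd(p, r) for p, r in obs]
print("per-station Mw:", np.round(estimates, 2))
print("event Mw estimate:", round(float(np.mean(estimates)), 2))
```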
Systematic Observations of the Slip-pulse Properties of Large Earthquake Ruptures
NASA Astrophysics Data System (ADS)
Melgar, D.; Hayes, G. P.
2017-12-01
In earthquake dynamics there are two end-member models of rupture: propagating cracks and self-healing pulses. These arise from different properties of ruptures and have implications for seismic hazard; the rupture mode controls near-field strong ground motions. Past studies favor the pulse-like mode of rupture; however, due to a variety of limitations, it has proven difficult to systematically establish their kinematic properties. Here we synthesize observations from a database of >150 rupture models of earthquakes spanning M7-M9, processed in a uniform manner, and show that the magnitude scaling properties (rise time, pulse width, and peak slip rate) of these slip pulses indicate self-similarity. Self-similarity suggests a weak form of rupture determinism, in which broader, higher-amplitude slip pulses early in the source process distinguish between events of increasing magnitude. Indeed, by analyzing the moment rate functions we find that large and very large events are statistically distinguishable relatively early (at 15 seconds) in the rupture process. This suggests that, with dense regional geophysical networks, strong ground motions from a large rupture can be identified before their onset across the source region.
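The claim that large and very large events become statistically distinguishable early in the rupture can be illustrated by comparing cumulative moment release over the first seconds of two moment-rate functions. The sketch below builds two synthetic triangular moment-rate functions with durations loosely scaled to magnitude and compares their cumulative moment at 15 s; the functional form and scaling values are assumptions for illustration only.

```python
import numpy as np

def triangular_stf(m0, duration, dt=0.1, t_max=200.0):
    """Synthetic triangular moment-rate function with total moment m0 (N*m)."""
    t = np.arange(0.0, t_max, dt)
    peak = 2.0 * m0 / duration                   # triangle area equals m0
    rising = 2.0 * peak * t / duration
    falling = 2.0 * peak * (duration - t) / duration
    rate = np.where(t < duration / 2.0, rising,
                    np.where(t < duration, falling, 0.0))
    return t, rate

def cumulative_moment(t, rate, t_cut, dt=0.1):
    """Moment released up to time t_cut (rectangle-rule integration)."""
    return float(np.sum(rate[t <= t_cut]) * dt)

# Illustrative events: Mw 7.5 (~40 s duration) and Mw 8.5 (~130 s duration)
m0_75 = 10 ** (1.5 * 7.5 + 9.1)
m0_85 = 10 ** (1.5 * 8.5 + 9.1)
t1, r1 = triangular_stf(m0_75, 40.0)
t2, r2 = triangular_stf(m0_85, 130.0)
print(f"moment released by 15 s: Mw7.5 ~ {cumulative_moment(t1, r1, 15.0):.2e} N*m")
print(f"moment released by 15 s: Mw8.5 ~ {cumulative_moment(t2, r2, 15.0):.2e} N*m")
```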
NASA Astrophysics Data System (ADS)
Gunawan, I.; Cummins, P. R.; Ghasemi, H.; Suhardjono, S.
2012-12-01
Indonesia is very prone to natural disasters, especially earthquakes, due to its location in a tectonically active region. In September-October 2009 alone, intraslab and crustal earthquakes caused the deaths of thousands of people, severe infrastructure destruction, and considerable economic loss. Thus, both intraslab and crustal earthquakes are important sources of earthquake hazard in Indonesia. Analysis of response spectra for these intraslab and crustal earthquakes is needed to yield more detail about earthquake properties. For both types of earthquakes, we have analysed available Indonesian seismic waveform data to constrain source and path parameters, i.e., low-frequency spectral level, Q, and corner frequency, at reference stations that appear to be little influenced by site response. We have carried out these analyses for the main shocks as well as several aftershocks. We obtain corner frequencies that are reasonably consistent with the constant stress drop hypothesis. Using these results, we consider extracting information about site response for other stations of the Indonesian strong-motion network that appear to be strongly affected by site response. Such site response data, as well as earthquake source parameters, are important for assessing earthquake hazard in Indonesia.
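A common way to estimate the low-frequency spectral level, corner frequency, and path attenuation mentioned above is to fit an omega-squared (Brune-type) source spectrum with a whole-path attenuation term, S(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi*f*t*), to observed displacement amplitude spectra. The sketch below performs such a fit with scipy; the synthetic spectrum and starting values are assumptions, and this is not the authors' processing code.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_with_attenuation(f, omega0, fc, t_star):
    """Omega-squared source spectrum times whole-path attenuation exp(-pi f t*)."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

# Synthetic "observed" displacement spectrum with 10% noise (illustrative)
f = np.linspace(0.05, 20.0, 400)
true = brune_with_attenuation(f, omega0=1.0e-3, fc=0.8, t_star=0.04)
rng = np.random.default_rng(0)
observed = true * (1.0 + 0.1 * rng.standard_normal(f.size))

popt, _ = curve_fit(brune_with_attenuation, f, observed,
                    p0=[1.0e-3, 1.0, 0.02], bounds=(0, np.inf))
omega0, fc, t_star = popt
print(f"Omega0 = {omega0:.2e}, fc = {fc:.2f} Hz, t* = {t_star:.3f} s")
```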
Uchida, N.; Matsuzawa, T.; Ellsworth, W.L.; Imanishi, K.; Okada, T.; Hasegawa, A.
2007-01-01
We determine the source parameters of a M4.9 ± 0.1 'characteristic earthquake' sequence and its accompanying microearthquakes at ~50 km depth on the subduction plate boundary offshore of Kamaishi, NE Japan. The microearthquakes tend to occur more frequently in the latter half of the recurrence intervals of the M4.9 ± 0.1 events. Our results show that the microearthquakes are repeating events and that they are located not only around but also within the slip area of the 2001 M4.8 event. From the hierarchical structure of slip areas and the smaller stress drops of the microearthquakes compared to the M4.8 event, we infer that the small repeating earthquakes rupture relatively weak patches in and around the slip area of the M4.8 event and that their activity reflects a stress concentration process and/or a change in frictional properties (healing) in that area. We also infer that the patches for the M4.9 ± 0.1 and other repeating earthquakes undergo aseismic slip during their interseismic periods. Copyright 2007 by the American Geophysical Union.
Karmakar, Somenath; Rathore, Abhilakh Singh; Kadri, Syed Manzoor; Dutt, Som; Khare, Shashi; Lal, Shiv
2008-10-01
An earthquake struck Kashmir on 8 October 2005. A central team of public health specialists was sent to Kashmir to assess the public health measures required following the earthquake, and to assist in institution of public health measures. Epidemiological and environmental investigation in Tangdar block (Kupwara district) and Uri Tehsil (Baramula district). Visits to villages affected by the earthquake, rehabilitation camps and health care, examination of cases with acute diarrhoeal disease (ADD), environmental observations, collection of clinical samples from ADD cases and environmental samples from drinking water sources, and laboratory methods. In total, 1783 cases of ADD were reported between 14 October and 17 December 2005 in Tangdar (population 65000). The overall attack rate was 20% in children under 4 years of age. Twelve cases of ADD with loose motions without blood were studied, and 11 rectal swabs and one stool sample were processed. No bacterial enteropathogens could be isolated, but three of the 12 samples yielded rotavirus antigen on enzyme-linked immunosorbent assay. Twelve of 13 (92.3%) water samples, collected from various stream or tap water (source: spring/stream) sources, were unsatisfactory (P=0.001) using the H(2)S strip method compared with other sources (well/mineral water). All eight water sources in Tangdar block were unsatisfactory, indicated by blackening of H(2)S filter paper strips. Following the earthquake, drinking stream water or tap water without boiling or chlorination may have led to a common source water-borne outbreak of rotavirus gastroenteritis. Other contributing factors were: overcrowding; poor sanitation; open-air defaecation; poor hygiene; and living in makeshift camps near streams. Person-to-person transmission may also have contributed to perpetuation of the outbreak. Following the establishment of medical camps and information, education and communication regarding the need to drink boiled water and follow safer hygienic practices, the outbreak was brought under control. The earthquake in Kashmir in 2005 led to widespread contamination of drinking water sources such as stream and tap water (source: stream or spring). This appears to have led to a common source outbreak of rotavirus between October and December 2005, leading to ADD, amongst infants and small children, transmitted by the faecal-oral route and perpetuated by person-to-person transmission.
Energy-to-Moment ratios for Deep Earthquakes: No evidence for scofflaws
NASA Astrophysics Data System (ADS)
Saloor, N.; Okal, E. A.
2015-12-01
Energy-to-moment ratios can provide information on the distribution of the seismic source spectrum between high and low frequencies, and thus identify anomalous events (either "slow" or "snappy") whose source violates seismic scaling laws, the former characteristic of the so-called tsunami earthquakes (e.g., Mentawai, 2010), the latter featuring enhanced acceleration and destruction (e.g., Christchurch, 2011). We extend to deep earthquakes the concept of the slowness parameter, Θ = log10(E^E/M0), introduced by Newman and Okal [1998], where the estimated energy E^E is computed for an average focal mechanism and depth (in the range 300-690 km). We find that only minor modifications of the algorithm are necessary to adapt it to deep earthquakes. The analysis of a dataset of 160 deep earthquakes from the past 30 years shows that these events scale with an average Θ = -4.34 ± 0.31, corresponding to slightly greater strain release than for their shallow counterparts. However, the most important result to date is that we have not found any "outliers", i.e., events violating this trend by one or more logarithmic units, as was the case for the slow events at shallow depths. This indicates that the processes responsible for such variations in the energy distribution of the source spectrum of shallow earthquakes are absent from their deep counterparts, suggesting, perhaps not unexpectedly, that the deep seismogenic zones feature more homogeneous properties than shallow ones. This includes the large event of 30 May 2015 below the Bonin Islands (Θ = -4.13), which took place both deeper than, and oceanwards of, the otherwise documented Wadati-Benioff zone.
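For concreteness, the slowness parameter is just the base-10 logarithm of the ratio of estimated radiated energy to seismic moment. A minimal worked example, with illustrative (not observed) values of E^E and M0, is given below.

```python
import math

def theta(estimated_energy_j, seismic_moment_nm):
    """Slowness parameter Theta = log10(E^E / M0) of Newman and Okal [1998]."""
    return math.log10(estimated_energy_j / seismic_moment_nm)

# Illustrative values: E^E = 5e14 J and M0 = 1e19 N*m give Theta ~ -4.3,
# close to the deep-earthquake average quoted in the abstract.
print(round(theta(5.0e14, 1.0e19), 2))   # -4.3
```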
NASA Astrophysics Data System (ADS)
Ohnaka, M.
2004-12-01
For the past four decades, great progress has been made in understanding earthquake source processes. In particular, recent progress in the field of the physics of earthquakes has contributed substantially to unraveling the earthquake generation process in quantitative terms. Yet, a fundamental problem remains unresolved in this field. The constitutive law that governs the behavior of earthquake ruptures is the basis of earthquake physics, and the governing law plays a fundamental role in accounting for the entire process of an earthquake rupture, from its nucleation to the dynamic propagation to its arrest, quantitatively in a unified and consistent manner. Therefore, without establishing the rational constitutive law, the physics of earthquakes cannot be a quantitative science in a true sense, and hence it is urgent to establish the rational constitutive law. However, it has been controversial over the past two decades, and it is still controversial, what the constitutive law for earthquake ruptures ought to be, and how it should be formulated. To resolve the controversy is a necessary step towards a more complete, unified theory of earthquake physics, and now the time is ripe to do so. Because of its fundamental importance, we have to discuss thoroughly and rigorously what the constitutive law ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid evidence. There are prerequisites for the constitutive formulation. The brittle, seismogenic layer and individual faults therein are characterized by inhomogeneity, and fault inhomogeneity has profound implications for earthquake ruptures. In addition, rupture phenomena including earthquakes are inherently scale dependent; indeed, some of the physical quantities inherent in rupture exhibit scale dependence. To treat scale-dependent physical quantities inherent in the rupture over a broad scale range quantitatively in a unified and consistent manner, it is critical to formulate the governing law properly so as to incorporate the scaling property. Thus, the properties of fault inhomogeneity and physical scaling are indispensable prerequisites to be incorporated into the constitutive formulation. Thorough discussion in this context necessarily leads to the consistent conclusion that the constitutive law must be formulated in such a manner that the shear traction is a primary function of the slip displacement, with the secondary effect of slip rate or stationary contact time. This constitutive formulation makes it possible to account for the entire process of an earthquake rupture over a broad scale range quantitatively in a unified and consistent manner.
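As one concrete example of a constitutive law in which shear traction is primarily a function of slip, a linear slip-weakening form is often written as below; this particular form is a standard textbook illustration, not the specific scale-dependent formulation advocated in the abstract.

```latex
% Linear slip-weakening law (illustrative form): shear traction \tau drops
% from peak strength \tau_p to residual strength \tau_r as slip u
% accumulates over a critical slip distance D_c.
\tau(u) =
\begin{cases}
\tau_p - (\tau_p - \tau_r)\,\frac{u}{D_c}, & 0 \le u < D_c,\\
\tau_r, & u \ge D_c.
\end{cases}
```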
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCap) scheme where the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion method of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km^3 cells using the 3-D community velocity model CVM-4 (Kohler et al., 2003). A bootstrap technique is adopted to establish the robustness of the inversion results using the gCap method (Ross & Ben-Zion, 2013). Synthetic tests with 1-D and 3-D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate source properties of the March 11, 2013, Mw=4.7 earthquake on the San Jacinto fault using recordings of ~45 stations up to ~0.2 Hz. Both the best fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher frequency data for this and other earthquakes is in progress.
Source and Aftershock Analysis of a Large Deep Earthquake in the Tonga Flat Slab
NASA Astrophysics Data System (ADS)
Cai, C.; Wiens, D. A.; Warren, L. M.
2013-12-01
The 9 November 2009 (Mw 7.3) deep-focus earthquake (depth = 591 km) occurred in the Tonga flat slab region, which is characterized by limited seismicity but has been imaged as a flat slab in tomographic studies. In addition, this earthquake occurred immediately beneath the largest of the Fiji Islands and was well recorded by a temporary array of 16 broadband seismographs installed in Fiji and Tonga, providing an excellent opportunity to study the source mechanism of a deep earthquake in a partially aseismic flat slab region. We determine the positions of the main shock hypocenter, its aftershocks, and moment-release subevents relative to the background seismicity using a hypocentroidal decomposition relative relocation method. We also investigate the rupture directivity by measuring the variation of rupture durations at different azimuths [e.g., Warren and Silver, 2006]. Arrival times picked at the local seismic stations, together with teleseismic arrival times from the International Seismological Centre (ISC), are used for the relocation. Teleseismic waveforms are used for the directivity study. Preliminary results show this entire region is relatively aseismic, with diffuse background seismicity distributed between 550-670 km. The main shock occurred in a previously aseismic region, with only one small earthquake within 50 km during 1980-2012. Eleven aftershocks large enough for good locations all occurred within the first 24 hours following the earthquake. The aftershock zone extends about 80 km from NW to SE, covering a much larger area than the mainshock rupture. The aftershock distribution does not correspond to the main shock fault plane, unlike the 1994 March 9 (Mw 7.6) Fiji-Tonga earthquake in the steeply dipping, highly seismic part of the Tonga slab. Mainshock subevent locations suggest a sub-horizontal SE-NW rupture direction. However, the directivity study shows a complicated rupture process that could not be resolved with a simple rupture assumption. We will present the results for this earthquake and some other deep earthquakes at the fall meeting. Warren, L. M., and P. G. Silver (2006), Measurement of differential rupture durations as constraints on the source finiteness of deep earthquakes, J. Geophys. Res., 111, B06304, doi:10.1029/2005JB004001.
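The directivity analysis mentioned above exploits the fact that, for a unilateral rupture, the apparent source duration varies with the angle between the rupture direction and the departing ray, roughly as t_app = T_r * (1 - (v_r / c) * cos(theta)). The sketch below evaluates this relation for a hypothetical rupture; the duration, rupture velocity, and phase velocity are illustrative values, not results from the study.

```python
import numpy as np

def apparent_duration(true_duration, rupture_vel, phase_vel, theta_deg):
    """Apparent duration of a unilateral rupture seen at angle theta
    (degrees) between the rupture direction and the departing ray."""
    theta = np.radians(theta_deg)
    return true_duration * (1.0 - (rupture_vel / phase_vel) * np.cos(theta))

# Illustrative values: 10 s rupture, v_r = 3 km/s, P-wave speed 9 km/s at depth
angles = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
durations = apparent_duration(10.0, 3.0, 9.0, angles)
for a, d in zip(angles, durations):
    print(f"theta = {a:5.1f} deg -> apparent duration {d:.2f} s")
```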
Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg
2010-01-01
An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger earthquakes. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great earthquakes occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. In this paper we review selected case studies and future directions of PSM efforts across a wide range of scales and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-scale experiments under controlled boundary conditions and large-scale field studies. The common motivation for all studies is to refine the understanding of how earthquakes nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude scale, i.e., for large devastating earthquakes due to their severe socio-economic impact.
Demonstration of improved seismic source inversion method of tele-seismic body wave
NASA Astrophysics Data System (ADS)
Yagi, Y.; Okuwaki, R.
2017-12-01
Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake but have been considered inappropriate for analyzing the detailed rupture process of M6-7 class earthquakes. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of M6-7 class earthquakes even using tele-seismic body waves alone. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), which have been well investigated using InSAR data sets and field observations. We assumed that the rupture occurred on a single fault plane inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporal discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. For the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s. At 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. For the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s and then toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method for tele-seismic body waves contains enough information to study the detailed rupture process of M6-7 class earthquakes.
NASA Astrophysics Data System (ADS)
Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.
2013-12-01
We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite-source imaging of tsunami waveforms and of seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g., Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami, and it was observed by both the seismic and tsunami observation networks around the Japanese islands. Therefore, to analyze the nature of tsunami generation, we utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a; Fujihara et al., 2012b), we inverted tsunami waveforms at the GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme of Kikuchi and Kanamori (2003). For seismic waveform inversion accounting for the inhomogeneous internal structure, we inverted strong-motion seismograms at K-NET and KiK-net stations using 3-D Green's functions (Fujihara et al., 2013a; Fujihara et al., 2013b). The gross "temporal" nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al., 2011; Ide et al., 2011; Yagi and Fukahata, 2011; Suzuki et al., 2011). The comparison of the two types of moment rate functions suggests that there was a time period common to both seismic wave generation and tsunami generation, followed by a time period unique to tsunami generation. Comparing the absolute values of the moment rates between the tsunami and seismic waveform inversions is not very meaningful because of the general ambiguity of the rigidity values of each subfault in the fault region (we assume the rigidity value of 30 GPa of Yoshida et al. (2011)). We therefore also evaluated normalized moment rate functions, which do not change the general features of the two moment rate functions in terms of duration. Furthermore, the results suggest that the tsunami generation process took measurably more time than the seismic wave generation process. Tsunami can be generated even by "extra" motions resulting from several proposed anomalous mechanisms; such extra motions may account for tsunami generation that is larger than expected from the magnitude inferred from seismic ground motion, and for the longer duration of the tsunami generation process.
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.
Precursory Anomaly in VLF/LF Recordings Prior to the July 30th, 2009
NASA Astrophysics Data System (ADS)
Buyuksarac, Aydin; Pınar, Ali; Kosaroglu, Sinan
2010-05-01
An international project network consisting of five receivers sampling LF and VLF radio signals has been operating in Europe, recording transmissions from different stations around the world. One of them was established in Resadiye, Turkey, located directly on the North Anatolian Fault Zone. The receiver works in the VLF (16.4, 21.75, 37.5 and 45.9 kHz) and LF (153, 180, 183, 216 and 270 kHz) bands, monitoring ten frequencies with a one-minute sampling interval. An earthquake of Mw = 4.9 took place 225 km away from the VLF/LF station, at the eastern tip of the Erzincan basin at 4 km depth, on July 30, 2009. We observed anomalies on the radio signals (37.5 and 153 kHz) that began about 7 days before the earthquake and disappeared soon after it. We attribute this anomaly to the Mw = 4.9 earthquake as a seismo-electromagnetic precursor. The radio anomaly that appeared 7 days before the occurrence of the 2009 Erzincan earthquake is in good agreement with other results indicating precursory anomalies in the project network, mostly observed in seismically active countries such as Italy and Greece. Several data processing stages were applied to the data. Firstly, we processed the time series of the radio signals to understand how the frequency content of the anomaly differs from that of the normal trend. For this purpose we selected two time windows, one covering the anomaly period and the other spanning a normal period. The selected time window length was six days. The sampling interval and the length of the time window limit the observed spectra to periods from 120 seconds to six days. We identified a significant drop in the signal energy of the anomaly period over the whole frequency band. Secondly, in order to depict the anomaly clearly, we estimated the daily Rayleigh energy of the calculated spectra following Parseval's theorem. We began the estimations well before the anomaly period, and the calculations gave an obvious sign of the impending event. Thirdly, we constructed a spectrogram covering the whole frequency band of the data from a fortnight before the earthquake to a week after it. The strongest anomaly in the spectrogram was identified at periods longer than 60 hours. In earthquake prediction studies it is crucial to understand the source of the anomaly. Since the sources of the anomalies we are interested in are earthquakes, we tried to derive information on the properties of the earthquake that generated the anomaly in the radio signals. Within this frame, we analyzed the broadband data at several local seismic stations that recorded the event and estimated source parameters such as the centroid moment tensor, source radius, and stress drop. Our analysis shows that the event was shallow, with a predominantly normal-faulting mechanism, and was associated with an extremely high stress drop with an average value of about 250 bars.
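The Rayleigh energy estimate referred to above relies on Parseval's theorem, which states that the signal energy computed in the time domain equals the energy computed from its spectrum. A minimal numerical check is sketched below; the synthetic signal and window length are assumptions for illustration.

```python
import numpy as np

def signal_energy_time(x, dt):
    """Signal energy computed in the time domain."""
    return np.sum(np.abs(x) ** 2) * dt

def signal_energy_freq(x, dt):
    """Same energy computed from the DFT (Parseval's theorem)."""
    X = np.fft.fft(x)
    return np.sum(np.abs(X) ** 2) * dt / x.size

# Synthetic one-day record of one-minute samples (1440 points)
rng = np.random.default_rng(0)
dt = 60.0                                   # s, one-minute sampling
x = rng.standard_normal(1440)

e_t = signal_energy_time(x, dt)
e_f = signal_energy_freq(x, dt)
print(np.isclose(e_t, e_f))                 # True: both estimates agree
```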
Dilational processes accompanying earthquakes in the Long Valley Caldera
Dreger, Douglas S.; Tkalcic, Hrvoje; Johnston, M.
2000-01-01
Regional distance seismic moment tensor determinations and broadband waveforms of moment magnitude 4.6 to 4.9 earthquakes from a November 1997 Long Valley Caldera swarm, during an inflation episode, display evidence of anomalous seismic radiation characterized by non-double couple (NDC) moment tensors with significant volumetric components. Observed coseismic dilation suggests that hydrothermal or magmatic processes are directly triggering some of the seismicity in the region. Similarity in the NDC solutions implies a common source process, and the anomalous events may have been triggered by net fault-normal stress reduction due to high-pressure fluid injection or pressurization of fluid-saturated faults due to magmatic heating.
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability, which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
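Under the Poisson occurrence model that SEISRISK II assumes, the probability that a ground-motion level with annual exceedance rate nu is exceeded at least once in T years is P = 1 - exp(-nu*T); conversely, a target probability and exposure time imply a design exceedance rate. A small worked example is sketched below (the rates and times are illustrative, not program defaults).

```python
import math

def prob_exceedance(annual_rate, years):
    """Probability of at least one exceedance in 'years' under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_for_probability(prob, years):
    """Annual exceedance rate implied by a target probability over 'years'."""
    return -math.log(1.0 - prob) / years

# Example: the conventional 10% in 50 years target
rate = rate_for_probability(0.10, 50.0)
print(f"annual rate = {rate:.5f} /yr (return period ~ {1.0/rate:.0f} yr)")
print(f"check: P = {prob_exceedance(rate, 50.0):.2f}")   # 0.10
```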
Constraints on the rupture process of the 17 August 1999 Izmit earthquake
NASA Astrophysics Data System (ADS)
Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.
2003-04-01
Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration, and rupture propagation. These estimates are given by the stress-glut moments of total degree 2, obtained by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong-motion data sets and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that (1) significant moment was released on the eastern segment of the activated fault system during the Izmit earthquake, and (2) the rupture velocity decreases on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
Earthquake prognosis: cause for failure and ways for the problem solution
NASA Astrophysics Data System (ADS)
Kondratiev, O.
2003-04-01
Despite more than 50 years of development of earthquake prognosis methods, the problem remains unresolved. This casts doubt on the correctness of the chosen approach: the retrospective search for diverse earthquake precursors. It is common to speak of long-term, middle-term, and short-term earthquake prognosis. All of these have a probabilistic character, and it would be more correct to regard them as forecasts of seismic hazard. In contrast, this report discusses the problem of operative prognosis. An operative prognosis should deliver a timely seismic alarm specifying the place, time, and size of the earthquake so that the necessary measures can be taken to mitigate, as far as possible, the catastrophic consequences of the event. This requires predicting the earthquake location to within a few tens of kilometres, the time of occurrence to within a few days, and the size to within about a magnitude unit. If the problem is formulated in this way, it cannot in principle be solved within the concept of using indirect earthquake precursors. It is necessary to pass from the concept of a passive observatory network to the concept of an object-oriented search for potential source zones and of obtaining direct information on changes in the medium parameters within those zones during earthquake preparation and development. Formulated in this way, the problem becomes an integrated task for global and exploration geophysics. To detect the source zones, the method of converted waves from earthquakes can be used; for monitoring, seismic reflection and common-midpoint methods can be applied. Deploying these and possibly other geophysical methods should be organized through a special integrated geophysical expedition for rapid response to strong earthquakes, conducting purposeful investigations within their epicentral zones. As a result, data for understanding the geodynamic processes of the preparation and occurrence of catastrophic earthquakes will be obtained, and only in this way can all questions of operative prognosis be resolved on a reliable scientific basis. The proposed approach to operative earthquake prognosis is neither simple nor quick. However, considering the time and effort already spent on the search for earthquake precursors, the new approach can be expected to be more direct and effective.
High-frequency seismic signals associated with glacial earthquakes in Greenland
NASA Astrophysics Data System (ADS)
Olsen, K.; Nettles, M.
2017-12-01
Glacial earthquakes are magnitude 5 seismic events generated by iceberg calving at marine-terminating glaciers. They are characterized by teleseismically detectable signals at periods of 35-150 seconds that arise from the rotation and capsize of gigaton-sized icebergs (e.g., Ekström et al., 2003; Murray et al., 2015). Questions persist regarding the details of this calving process, including whether there are characteristic precursory events such as ice slumps or pervasive crevasse opening before an iceberg rotates away from the glacier. We investigate the high-frequency seismic signals produced before, during, and after glacial earthquakes. We analyze a set of 94 glacial earthquakes that occurred at three of Greenland's major glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier, from 2001 to 2013. We employ data from the GLISN network of broadband seismometers around Greenland and from short-term seismic deployments located close to the glaciers. These data are bandpass filtered between 3 and 10 Hz and trimmed to one-hour windows surrounding known glacial earthquakes. We observe elevated amplitudes of the 3 - 10 Hz signal for 500 - 1500 seconds spanning the time of each glacial earthquake. These durations are long compared to the 60-second glacial-earthquake source. In the majority of cases we observe an increase in the amplitude of the 3 - 10 Hz signal 200 - 600 seconds before the centroid time of the glacial earthquake and sustained high amplitudes for up to 800 seconds after. In some cases, high-amplitude energy in the 3 - 10 Hz band precedes elevated amplitudes in the 35 - 150 s band by 300 seconds. We explore possible causes for these high-frequency signals, and discuss implications for improving understanding of the glacial-earthquake source.
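A minimal sketch of the pre-processing described above, assuming ObsPy is available; the file path and centroid time are hypothetical placeholders. It band-passes a record to 3-10 Hz and trims a one-hour window centred on a known glacial-earthquake time.

```python
from obspy import read, UTCDateTime

def window_around_event(mseed_path, centroid_time, half_window_s=1800.0):
    """Band-pass a record to 3-10 Hz and trim a one-hour window centred on the event."""
    st = read(mseed_path)
    st.detrend("demean")
    st.taper(max_percentage=0.05)
    st.filter("bandpass", freqmin=3.0, freqmax=10.0, corners=4, zerophase=True)
    t0 = UTCDateTime(centroid_time)
    st.trim(starttime=t0 - half_window_s, endtime=t0 + half_window_s)
    return st

# Hypothetical usage with a GLISN station file and a known glacial-earthquake centroid time:
# st = window_around_event("GLISN_station.mseed", "2012-07-15T12:34:56")
```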
Earthquake activity along the Himalayan orogenic belt
NASA Astrophysics Data System (ADS)
Bai, L.; Mori, J. J.
2017-12-01
The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, located in the western, central, and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes for the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. The eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths of less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.
2016-04-01
Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
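A schematic sketch of the iterative strategy described above, using hypothetical minimal data structures rather than the operational pipeline: station detections whose arrival times are explained, within a tolerance, by a well-located aftershock in the defined source region are removed from the parametric streams before the phase-association algorithm is re-run.

```python
# Sketch of pruning detections explained by located aftershocks before re-association.
# `detections`, `aftershocks`, and `travel_times` use hypothetical minimal structures.

def prune_detections(detections, aftershocks, travel_times, tol_s=2.0):
    """Drop station detections within tol_s of an arrival predicted by any located aftershock.

    detections: list of (station, arrival_time_s)
    aftershocks: list of origin times (s) for well-located events in the aftershock region
    travel_times: dict station -> travel time (s) from the aftershock source region
    """
    kept = []
    for station, t_arr in detections:
        tt = travel_times.get(station)
        explained = tt is not None and any(
            abs(t_arr - (t0 + tt)) <= tol_s for t0 in aftershocks
        )
        if not explained:
            kept.append((station, t_arr))
    return kept

# The pruned detection list is then passed back to the phase-association stage
# for a new iteration over the remaining, unexplained detections.
```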
Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA
NASA Astrophysics Data System (ADS)
Lorito, S.
2013-05-01
The occurrence of several mega-thrust tsunamigenic earthquakes in the last decade, including but not limited to the 2004 Sumatra-Andaman, the 2010 Maule, and the 2011 Tohoku earthquakes, has been a dramatic reminder of the limitations in our capability to assess earthquake and tsunami hazard and risk. At the same time, increasingly high-quality geophysical observational networks have allowed the retrieval of more accurate rupture models of mega-thrust earthquakes than ever before, paving the way for improved hazard assessments. Probabilistic Tsunami Hazard Analysis (PTHA), in particular, is less mature than its seismic counterpart, PSHA. Recent research efforts of the worldwide tsunami science community have started to fill this gap and to define best practices that are progressively being employed in PTHA for different regions and coasts at threat. In the first part of my talk, I briefly review some rupture models of recent mega-thrust earthquakes and highlight some of their surprising features, which likely translate into larger error bars on PTHA results. More specifically, recent events of unexpected size at a given location, and with unexpected rupture-process features, pose first-order open questions that prevent the definition of a heterogeneous rupture probability along a subduction zone, despite several recent promising results on the subduction zone seismic cycle. In the second part of the talk, I describe a specific ongoing effort to improve PTHA methods, in particular regarding the determination of epistemic and aleatory uncertainties and the computational feasibility of PTHA when the full assumed source variability is considered. Usually only logic trees are made explicit in PTHA studies, accounting for different possible assumptions on source zone properties and behavior. The selection of the earthquakes actually modelled is then generally made on a qualitative basis or remains implicit, even though methods such as event trees have been used for other applications. I define a fairly general PTHA framework based on the combined use of logic and event trees. I first discuss a particular class of epistemic uncertainties, namely those related to the parametric characterization of faults in terms of geometry, kinematics, and assessment of activity rates. A systematic classification into six justification levels of the epistemic uncertainty related to the existence and behaviour of fault sources will be presented. A particular branch of the logic tree is then chosen in order to discuss the aleatory variability of earthquake parameters, represented with an event tree. Even so, PTHA based on numerical scenarios is an extremely demanding computational task, particularly when probabilistic inundation maps are needed. To reduce the computational burden without under-representing the source variability, the event tree is first constructed by densely (over-)sampling the earthquake parameter space; the earthquakes are then filtered according to their associated offshore tsunami impact before inundation maps are calculated. I describe this approach by means of a case study in the Mediterranean Sea, namely the PTHA for locations on the eastern Sicily and southern Crete coasts due to potential subduction earthquakes occurring on the Hellenic Arc.
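A hedged sketch of the two-step scenario reduction outlined above; the function names, the offshore-amplitude criterion, and the 0.1 m threshold are illustrative assumptions, not the study's actual implementation. The idea is simply that densely sampled event-tree scenarios are screened by their offshore tsunami impact before costly inundation modelling.

```python
def filter_scenarios(scenarios, offshore_amplitude, threshold_m=0.1):
    """Keep only scenarios whose peak offshore amplitude at control points
    exceeds a threshold; only these proceed to inundation modelling.

    scenarios: list of earthquake-parameter dicts sampled from the event tree
    offshore_amplitude: callable returning the peak offshore amplitude (m) for a scenario
    """
    selected = []
    for scenario in scenarios:
        if offshore_amplitude(scenario) >= threshold_m:
            selected.append(scenario)
    return selected
```

The probability mass attached to discarded scenarios would still need to be accounted for in the final hazard curves; this sketch only illustrates the screening step.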
Real-Time Joint Streaming Data Processing from Social and Physical Sensors
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.
2014-12-01
The technological breakthroughs in computing that have taken place over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic losses. In particular, the integration of a wide variety of information sources, including observations from spatially referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong-motion seismographs. Processing the continuous data streams from each CNSN station provides the opportunity to detect possible earthquakes in near real time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results, however, are not always sufficiently precise and prompt to significantly reduce the response time to a felt or damaging earthquake. Social sensors, here represented by Twitter users, can provide information earlier to the general public and more rapidly to emergency planning and disaster relief agencies. We introduce real-time joint streaming data processing from social and physical sensors, based on the idea that social media observations serve as proxies for physical sensors. By using streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data. The results of this work suggest that the use of data from social media, in conjunction with the development of innovative computing algorithms and combined with sensor data, can provide a new paradigm for real-time earthquake detection that facilitates rapid and inexpensive natural risk reduction.
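A toy sketch of the social-sensor side of such a system; the class, window length, and rate threshold are hypothetical and only illustrate the general idea that an anomalous burst of earthquake-related, geolocated tweets close in time to a physical-sensor detection can corroborate an event.

```python
from collections import deque

class TweetRateTrigger:
    """Sliding-window rate trigger on earthquake-related, geolocated tweets
    (hypothetical feed); returns True when the rate becomes anomalous."""

    def __init__(self, window_s=120.0, rate_threshold=30):
        self.window_s = window_s
        self.rate_threshold = rate_threshold
        self.times = deque()

    def add(self, tweet_time_s):
        self.times.append(tweet_time_s)
        # discard tweets older than the sliding window
        while self.times and tweet_time_s - self.times[0] > self.window_s:
            self.times.popleft()
        return len(self.times) >= self.rate_threshold  # True -> anomalous burst
```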
Results of meteorological monitoring in Gorny Altai before and after the Chuya earthquake in 2003
NASA Astrophysics Data System (ADS)
Aptikaeva, O. I.; Shitov, A. V.
2014-12-01
We consider the dynamics of some meteorological parameters in Gorny Altai from 2000 to 2011. We analyzed the variations in the meteorological parameters related to the strong Chuya earthquake (September 27, 2003). A number of anomalies were revealed in the time series. Before this strong earthquake, the winter temperatures at the meteorological station nearest to the earthquake source increased by 8-10°C (by 2009 they had returned to the mean values), while the air humidity in winter decreased. In the winter of 2002, we observed a long negative anomaly in the time series of the atmospheric pressure. At the same time, the decrease in the released seismic energy was replaced by a tendency toward its increase. Using wavelet analysis, we revealed synchronism in the dynamics of the atmospheric parameters, variations in the solar and geomagnetic activities, and geodynamic processes. We also discuss the relationship between the atmospheric and geodynamic processes and the comfort conditions of the population under the analyzed climate.
Extension of Gutenberg-Richter distribution to Mw -1.3, no lower limit in sight
NASA Astrophysics Data System (ADS)
Boettcher, Margaret S.; McGarr, A.; Johnston, Malcolm
2009-05-01
With twelve years of seismic data from TauTona Gold Mine, South Africa, we show that mining-induced earthquakes follow the Gutenberg-Richter relation with no scale break down to the completeness level of the catalog, at moment magnitude Mw -1.3. Events recorded during relatively quiet hours in 2006 indicate that catalog detection limitations, not earthquake source physics, controlled the previously reported minimum magnitude in this mine. Within the Natural Earthquake Laboratory in South African Mines (NELSAM) experiment's dense seismic array, earthquakes that exhibit shear failure at magnitudes as small as Mw -3.9 are observed, but we find no evidence that Mw -3.9 represents the minimum magnitude. In contrast to previous work, our results imply small nucleation zones and that earthquake processes in the mine can readily be scaled to those in either laboratory experiments or natural faults.
NASA Astrophysics Data System (ADS)
Laumal, F. E.; Nope, K. B. N.; Peli, Y. S.
2018-01-01
Early warning is a mechanism for issuing a warning before the main impact of an incident occurs and can be applied to natural events such as tsunamis or earthquakes. Earthquakes are classified as tectonic or volcanic depending on their source and nature. The tremor propagates energy in all directions as primary (P) and secondary (S) waves. The primary wave, the initial earthquake vibration, propagates longitudinally, whereas the secondary wave arrives after the primary wave, propagates as a sinusoidal-like wave, and is the destructive phase felt as the actual earthquake. To process the primary vibration data captured by earthquake sensors, the network requires client computers that receive primary data from the sensors, authenticate them, and forward them to a server computer that implements the early warning system. Using this propagation concept, an early warning method was designed in which several sensors located on the same line send initial vibrations as primary data on the same scale, and the server recommends sounding the alarm as an early warning.
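A hedged sketch of the server-side decision logic described above; the data structures, sensor counts, and tolerances are hypothetical, and the abstract does not give the actual algorithm. The idea is simply that an alarm is recommended when several sensors on the same line report primary-wave detections of comparable amplitude within a short interval.

```python
def should_alarm(reports, min_sensors=3, max_spread_s=10.0, amp_tolerance=0.5):
    """Decide whether to recommend the early-warning alarm.

    reports: list of (sensor_line_id, detection_time_s, p_amplitude) from client computers.
    """
    by_line = {}
    for line_id, t, amp in reports:
        by_line.setdefault(line_id, []).append((t, amp))
    for line_id, obs in by_line.items():
        if len(obs) < min_sensors:
            continue
        times = [t for t, _ in obs]
        amps = [a for _, a in obs]
        same_window = max(times) - min(times) <= max_spread_s
        same_scale = (max(amps) - min(amps)) <= amp_tolerance * max(amps)
        if same_window and same_scale:
            return True  # sound the early-warning alarm
    return False
```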
NASA Astrophysics Data System (ADS)
Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige
2018-06-01
In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the downdip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multiscale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. Use of different characteristic functions, in the signal processing step of the method, allows us to extract and locate the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistical properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows extracting a more complete (˜6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of the low-frequency earthquake activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of the long-duration energy-release regions, matching the large-scale clustering features evidenced from the low-frequency earthquake activity analysis. Further examination of the two catalogues showed that the extracted short-duration low-frequency earthquake activity coincides in space, within about 10-15 km distance, with the longer-duration energy sources during the tectonic tremor sequence. This observation provides a potential constraint on the size of the longer-duration energy-radiating source region in relation to the clustering of low-frequency earthquake activity during the analysed tectonic tremor sequence. We show that advanced statistical network-based methods offer new capabilities for automatic high-resolution detection, location and monitoring of different scale-components of tectonic tremor activity, enriching existing slow earthquake catalogues. Systematic application of such methods to large continuous data sets will allow imaging the slow transient seismic energy-release activity at higher resolution and, therefore, provide new insights into the underlying multiscale mechanisms of slow earthquake generation.
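A minimal sketch of one frequency-dependent, higher-order-statistics characteristic function of the kind mentioned above, here a running kurtosis on a band-passed trace; the filter order, window length, and exact estimator are assumptions and not the authors' implementation. Such a function sharpens impulsive low-frequency-earthquake onsets relative to the emergent tremor background.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def kurtosis_cf(trace, fs, fmin, fmax, window_s=2.0):
    """Running kurtosis of a band-passed trace as a simple characteristic function."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, trace)
    n = int(window_s * fs)
    cf = np.zeros_like(x)
    for i in range(n, len(x)):
        w = x[i - n:i]
        m = w.mean()
        var = w.var()
        cf[i] = ((w - m) ** 4).mean() / (var ** 2 + 1e-20)  # kurtosis of the trailing window
    return cf
```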
A teleseismic analysis of the New Brunswick earthquake of January 9, 1982.
Choy, G.L.; Boatwright, J.; Dewey, J.W.; Sipkin, S.A.
1983-01-01
The analysis of the New Brunswick earthquake of January 9, 1982, has important implications for the evaluation of seismic hazards in eastern North America. Although moderate in size (mb 5.7), it was well recorded teleseismically. Source characteristics of this earthquake have been determined from analysis of data that were digitally recorded by the Global Digital Seismograph Network. From broadband displacement and velocity records of P waves, we have obtained a dynamic description of the rupture process as well as conventional static properties of the source. The depth of the hypocenter is estimated to be 9 km from depth phases. The focal mechanism determined from the broadband data corresponds to predominantly thrust faulting. From the variation in the waveforms, the direction of slip is inferred to be updip on a west-dipping, NNE-striking fault plane. The steep dip of the inferred fault plane suggests that the earthquake occurred on a preexisting fault that was at one time a normal fault. From an inversion of body wave pulse durations, the estimated rupture length is 5.5 km. -from Authors
NASA Astrophysics Data System (ADS)
Pulido Hernandez, N. E.; Suzuki, W.; Aoi, S.
2014-12-01
A megathrust earthquake (Mw 8.2) occurred in Northern Chile on April 1, 2014, at 23:46 (UTC), in a region that had not experienced a major earthquake since the great 1877 (~M8.6) event. This area had already been identified as a mature seismic gap with strong interseismic coupling inferred from geodetic measurements (Chlieh et al., JGR, 2011 and Metois et al., GJI, 2013). We used 48 components of strong motion records belonging to the IPOC network in Northern Chile to investigate the source process of the M8.2 Pisagua earthquake. Acceleration waveforms were integrated to obtain velocities and filtered between 0.02 and 0.125 Hz. We assumed a single fault plane segment with an area of 180 km by 135 km, a strike of 357 degrees, and a dip of 18 degrees (GCMT). We set the starting point of rupture at the USGS hypocenter (19.610S, 70.769W, depth 25 km) and employed a multi-time-window linear waveform inversion method (Hartzell and Heaton, BSSA, 1983) to derive the rupture process of the Pisagua earthquake. Our results show a slip model characterized by one large slip area (asperity) localized 50 km south of the epicenter, a peak slip of 10 m, and a total seismic moment of 2.36 × 10^21 N m (Mw 8.2). The fault rupture slowly propagated to the south in front of the main asperity for the initial 25 seconds and then broke it, producing a strong acceleration stage. The fault rupture velocity averaged 2.9 km/s. Our calculations show an average stress drop of 4.5 MPa for the entire fault rupture area and 12 MPa for the asperity area. We simulated the near-source strong ground motion records in a broad frequency band (0.1-20 Hz) to investigate a possible multi-frequency fault rupture process, as observed in recent megathrust earthquakes such as the 2011 Tohoku-oki (M9.0). Acknowledgments: Strong motion data were kindly provided by the University of Chile as well as by IPOC (Integrated Plate Boundary Observatory Chile).
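As a quick consistency check (not from the original abstract), the standard Hanks-Kanamori relation converts the reported total moment into a moment magnitude that matches the stated Mw 8.2.

```python
import math

def mw_from_moment(m0_newton_metres):
    """Hanks & Kanamori moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

print(round(mw_from_moment(2.36e21), 2))  # ~8.18, consistent with the reported Mw 8.2
```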
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half-space). This approach is highly effective, in particular in far-field conditions. With this assumption, and given a set of tsunami waveforms recorded by deep-sea pressure sensors and (or) coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a space of possible tsunami sources (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth, and angles (strike, rake, and dip). To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling for the whole solution space is a time-consuming process. To overcome this problem, we use a precomputed database of empirical Green's functions to compute the tsunami waveforms resulting from unit water sources and search for the combination that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013 caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
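A hedged sketch of the search step described above; the array shapes and names are assumptions. Each candidate source's waveform at a gauge is assembled as a weighted sum of precomputed unit-source waveforms, and the candidate with the smallest RMS misfit to the DART records is retained.

```python
import numpy as np

def best_okada_candidate(unit_waveforms, candidate_weights, observed):
    """Select the candidate source minimizing the RMS misfit to observed waveforms.

    unit_waveforms: (n_unit_sources, n_gauges, n_samples) precomputed Green's functions
    candidate_weights: (n_candidates, n_unit_sources) initial-elevation weights, one row
        per Okada-type parameter combination in the solution space
    observed: (n_gauges, n_samples) recorded tsunami waveforms (e.g. DART stations)
    """
    best_idx, best_misfit = None, np.inf
    for i, w in enumerate(candidate_weights):
        predicted = np.tensordot(w, unit_waveforms, axes=(0, 0))  # (n_gauges, n_samples)
        misfit = np.sqrt(np.mean((predicted - observed) ** 2))
        if misfit < best_misfit:
            best_idx, best_misfit = i, misfit
    return best_idx, best_misfit
```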
NASA Astrophysics Data System (ADS)
Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali
2016-08-01
The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2, 10, and 50 % hazard levels for all sites studied in the Marmara region.
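A hedged sketch of merging the two simulation bands into a broadband seismogram; the complementary fourth-order Butterworth filters and the simple summation at the 0.5 Hz crossover are assumptions, since the abstract does not state the matching filters actually used.

```python
from scipy.signal import butter, sosfiltfilt

def hybrid_broadband(lf_synthetic, hf_egf, fs, crossover_hz=0.5):
    """Combine a low-frequency deterministic synthetic and a high-frequency EGF
    simulation (same length and sampling rate fs) at the crossover frequency."""
    low = butter(4, crossover_hz, btype="lowpass", fs=fs, output="sos")
    high = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(low, lf_synthetic) + sosfiltfilt(high, hf_egf)
```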
NASA Astrophysics Data System (ADS)
Gomez-Gonzalez, J. M.; Mellors, R.
2007-05-01
We investigate the kinematics of the rupture process for the September 27, 2003, Mw 7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains within the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130 - 270 km. The main shock was closely followed by two aftershocks (Mw 5.7 and Mw 6.4) that occurred the same day; another aftershock (Mw 6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw 6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and has a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment, and source duration, based on point- and finite-source modeling. The point-source approximation results are the starting parameters for the finite-source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
Application of Seismic Array Processing to Tsunami Early Warning
NASA Astrophysics Data System (ADS)
An, C.; Meng, L.
2015-12-01
Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800 instruments) and the Earthscope USArray Transportable Array (~400 instruments), are established.
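A hedged sketch of the back-projection step at the core of the rapid source characterisation described above; the array shapes, window length, and simple time-shift-and-stack formulation are generic assumptions rather than the exact implementation used with the Hi-net and USArray data.

```python
import numpy as np

def back_project(traces, fs, travel_times_s, stack_window_s=10.0):
    """Shift-and-stack back-projection over a grid of candidate source points.

    traces: (n_stations, n_samples) high-frequency seismograms (origin-time aligned raw)
    travel_times_s: (n_grid, n_stations) predicted travel times from each grid point
    Returns (n_grid, n_frames) stack power, whose peaks trace the rupture in space-time.
    """
    n_sta, n_samp = traces.shape
    n_win = int(stack_window_s * fs)
    n_frames = n_samp // n_win
    power = np.zeros((travel_times_s.shape[0], n_frames))
    for g, tts in enumerate(travel_times_s):
        shifted = np.zeros_like(traces)
        for s in range(n_sta):
            shift = min(int(round(tts[s] * fs)), n_samp)
            shifted[s, : n_samp - shift] = traces[s, shift:]  # align to source time
        stack = shifted.sum(axis=0)
        for f in range(n_frames):
            seg = stack[f * n_win:(f + 1) * n_win]
            power[g, f] = np.sum(seg ** 2)
    return power
```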
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
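A minimal sketch of how a prediction-error covariance can be folded into a linear(ized) static slip inversion, following the standard linear-Gaussian (least-squares/Bayesian) formulation; the variable names are hypothetical, and building C_p from fault-geometry perturbations is the step described in the abstract but not detailed here.

```python
import numpy as np

def posterior_slip(G, d, C_d, C_p, C_m, m0=None):
    """Linear Gaussian slip estimate with misfit covariance C_chi = C_d + C_p.

    G: (n_data, n_params) Green's functions for the assumed fault geometry
    d: (n_data,) observations
    C_d, C_p: (n_data, n_data) observation and prediction (epistemic) covariances
    C_m: (n_params, n_params) prior model covariance; m0: prior mean model
    """
    if m0 is None:
        m0 = np.zeros(G.shape[1])
    C_chi_inv = np.linalg.inv(C_d + C_p)
    A = G.T @ C_chi_inv @ G + np.linalg.inv(C_m)
    b = G.T @ C_chi_inv @ (d - G @ m0)
    m_post = m0 + np.linalg.solve(A, b)
    C_post = np.linalg.inv(A)  # posterior covariance of the slip parameters
    return m_post, C_post
```

Setting C_p to zero recovers the overconfident inversion discussed above, in which the data are fit too closely and the geometry error maps directly into biased slip.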
NASA Astrophysics Data System (ADS)
Chounet, Agnès; Vallée, Martin; Causse, Mathieu; Courboulex, Françoise
2018-05-01
Application of the SCARDEC method provides the apparent source time functions together with seismic moment, depth, and focal mechanism for most of the recent earthquakes with magnitude larger than 5.6-6. Using this large dataset, we have developed a method to systematically invert for the rupture direction and average rupture velocity Vr, when unilateral rupture propagation dominates. The approach is applied to all the shallow (z < 120 km) earthquakes of the catalog over the 1992-2015 time period. After a careful validation process, rupture properties for a catalog of 96 earthquakes are obtained. The subsequent analysis of this catalog provides several insights about the seismic rupture process. We first report that up-dip ruptures are more abundant than down-dip ruptures for shallow subduction interface earthquakes, which can be understood as a consequence of the material contrast between the slab and the overriding crust. Rupture velocities, which are searched without any a priori constraint up to the maximal P wave velocity (6000-8000 m/s), are found between 1200 m/s and 4500 m/s. This observation indicates that no earthquakes propagate over long distances with rupture velocity approaching the P wave velocity. Among the 23 ruptures faster than 3100 m/s, we observe both documented supershear ruptures (e.g. the 2001 Kunlun earthquake) and undocumented ruptures that very likely include a supershear phase. We also find that the correlation of Vr with the source duration scaled to the seismic moment (Ts) is very weak. This directly implies that both Ts and Vr are anticorrelated with the stress drop Δσ. This result has implications for the assessment of the peak ground acceleration (PGA) variability. As shown by Causse and Song (2015), an anticorrelation between Δσ and Vr significantly reduces the predicted PGA variability and brings it closer to the observed variability.
Comprehensive Areal Model of Earthquake-Induced Landslides: Technical Specification and User Guide
Miles, Scott B.; Keefer, David K.
2007-01-01
This report describes the complete design of a comprehensive areal model of earthquake-induced landslides (CAMEL). This report presents the design process and technical specification of CAMEL. It also provides a guide to using the CAMEL source code and the template ESRI ArcGIS map document file for applying CAMEL, both of which can be obtained by contacting the authors. CAMEL is a regional-scale model of earthquake-induced landslide hazard developed using fuzzy logic systems. CAMEL currently estimates areal landslide concentration (number of landslides per square kilometer) of six aggregated types of earthquake-induced landslides - three types each for rock and soil.
Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira
2012-01-01
We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events that occurred since 1957 which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edge of the slip areas and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~ 4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change much for earthquakes rupturing in the same area. The occurrence of microearthquakes in the interseismic period suggests the intrusion of aseismic slip, causing a loading of these patches. We also found that some earthquakes near the centre of the mainshock source area occurred just after the earthquakes at the deeper edge of the mainshock source area. These seismic activities probably indicate episodic aseismic slip migrating from the deeper regions in the mainshock asperity to its centre during interseismic periods. Comparison of the source parameters for the 2001 and 2008 main shocks shows that the seismic moments (1.04 × 10^16 N m and 1.12 × 10^16 N m for the 2008 and 2001 earthquakes, respectively) and source sizes (radius = 570 m and 540 m for the 2008 and 2001 earthquakes, respectively) are comparable. Based on careful phase identification and hypocentre relocation by constraining the hypocentres of other small earthquakes to their precisely located centroids, we found that the hypocentres of the 2001 and 2008 M~ 4.9 events are located in the southeastern part of the mainshock source area. This location does not correspond to either episodic slip area or hypocentres of small earthquakes that occurred during the earthquake cycle.
Updated earthquake catalogue for seismic hazard analysis in Pakistan
NASA Astrophysics Data System (ADS)
Khan, Sarfraz; Waseem, Muhammad; Khan, Muhammad Asif; Ahmed, Waqas
2018-03-01
A reliable and homogenized earthquake catalogue is essential for seismic hazard assessment in any area. This article describes the compilation and processing of an updated earthquake catalogue for Pakistan. The earthquake catalogue compiled in this study for the region (a quadrangle bounded by the geographical limits 40-83° E and 20-40° N) includes 36,563 earthquake events with moment magnitudes (Mw) of 4.0-8.3, spanning from 25 AD to 2016. Relationships are developed between the moment magnitude and the body- and surface-wave magnitude scales to unify the catalogue in terms of Mw. The catalogue includes earthquakes from Pakistan and neighbouring countries to minimize the effects of geopolitical boundaries in seismic hazard assessment studies. Earthquakes reported by local and international agencies as well as individual catalogues are included. The proposed catalogue is further used to obtain the magnitude of completeness after removal of dependent events by using four different algorithms. Finally, seismicity parameters of the seismic sources are reported, and recommendations are made for seismic hazard assessment studies in Pakistan.
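A hedged sketch of one common way to estimate the magnitude of completeness and the Gutenberg-Richter b-value for a declustered catalogue; the abstract does not state which estimators were used, so the maximum-curvature and Aki maximum-likelihood choices here are assumptions, not the study's actual algorithms.

```python
import numpy as np

def completeness_and_b(magnitudes, bin_width=0.1):
    """Maximum-curvature estimate of Mc and Aki (1965) maximum-likelihood b-value."""
    mags = np.asarray(magnitudes, dtype=float)
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    mc = edges[np.argmax(counts)]              # magnitude bin with the most events
    above = mags[mags >= mc]
    # Aki formula with the usual half-bin correction for binned magnitudes
    b = np.log10(np.e) / (above.mean() - (mc - bin_width / 2.0))
    return mc, b
```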
NASA Astrophysics Data System (ADS)
Yi, Lei; Xu, Caijun; Wen, Yangmao; Zhang, Xu; Jiang, Guoyan
2018-01-01
The 2016 Ecuador earthquake ruptured the Ecuador-Colombia subduction interface where several historic megathrust earthquakes had occurred. In order to determine a detailed rupture model, Interferometric Synthetic Aperture Radar (InSAR) images and teleseismic data sets were objectively weighted by using a modified Akaike's Bayesian Information Criterion (ABIC) method to jointly invert for the rupture process of the earthquake. In modeling the rupture process, a constrained waveform length method was used, unlike the traditional subjectively selected waveform length, since the lengths of the inverted waveforms were strictly constrained by the rupture velocity and rise time (the slip duration time). The optimal rupture velocity and rise time of the earthquake were estimated from a grid search to be 2.0 km/s and 20 s, respectively. The inverted model shows that the event is dominated by thrust movement and that the released moment is 5.75 × 10^20 N m (Mw 7.77). The slip distribution extends southward along the Ecuador coastline in an elongated stripe at depths between 10 and 25 km. The slip model is composed of two asperities with slip of over 4 m. The source time function is approximately 80 s long and is separated into two segments corresponding to the two asperities. The small slip in the updip section of the fault plane resulted in small tsunami waves, consistent with observations near the coast. We suggest that the rupture zone of the 2016 earthquake likely does not overlap with that of the 1942 earthquake.
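A hedged illustration of the idea behind a constrained waveform length: the source-related signal duration can be bounded by the time for the rupture front to cross the fault plus the rise time, instead of being picked subjectively. The formula and the fault dimensions below are assumptions for illustration only; the abstract does not give the authors' exact definition or the fault size used.

```python
import math

def constrained_waveform_length(fault_length_km, fault_width_km,
                                rupture_velocity_km_s, rise_time_s, margin_s=10.0):
    """Rough upper bound on source duration: rupture-front crossing time of the fault
    diagonal plus the slip duration (rise time), plus a small safety margin."""
    diagonal_km = math.hypot(fault_length_km, fault_width_km)
    return diagonal_km / rupture_velocity_km_s + rise_time_s + margin_s

# With the grid-search optimum (Vr = 2.0 km/s, rise time = 20 s) and a hypothetical
# fault of roughly 200 km by 100 km, this gives a window on the order of 140 s.
print(round(constrained_waveform_length(200.0, 100.0, 2.0, 20.0), 1))
```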
Intraplate earthquakes and the state of stress in oceanic lithosphere
NASA Technical Reports Server (NTRS)
Bergman, Eric A.
1986-01-01
The dominant sources of stress relieved in oceanic intraplate earthquakes are investigated to examine the usefulness of earthquakes as indicators of stress orientation. The primary data for this investigation are the detailed source studies of 58 of the largest of these events, performed with the body-waveform inversion technique of Nabelek (1984). The relationship between the earthquakes and the intraplate stress fields was investigated by studying the rate of seismic moment release as a function of age, the source mechanisms and tectonic associations of larger events, and the depth dependence of various source parameters. The results indicate that the earthquake focal mechanisms are empirically reliable indicators of stress, probably reflecting the fact that an earthquake will occur most readily on a fault plane oriented in such a way that the resolved shear stress is maximized while the normal stress across the fault is minimized.
NASA Astrophysics Data System (ADS)
Letort, J.; Guilhem Trilla, A.; Ford, S. R.; Sèbe, O.; Causse, M.; Cotton, F.; Campillo, M.; Letort, G.
2017-12-01
We constrain the source, depth, and rupture process of the Botswana earthquake of April 3, 2017, as well as its largest aftershock (5 April 2017, Mw 4.5). This earthquake is the largest recorded event (Mw 6.5) in the East African rift system since 1970, making it an important case study for better understanding source processes in stable continental regions. For the two events, an automatic cepstrum analysis (Letort et al., 2015) is first applied to 215 and 219 teleseismic records, respectively, in order to detect depth-phase arrivals (pP, sP) in the P coda. Coherent detections of depth phases at different azimuths allow us to estimate the hypocentral depths at 28 and 23 km, respectively, suggesting that the events are located in the lower crust. The same cepstrum analysis is conducted on five other earthquakes with mb > 4 in this area (from 2002 to 2017) and confirms a deep crustal seismicity cluster (around 20-30 km). The source mechanisms are then characterized using a joint inversion method by fitting both regional long-period surface waves and teleseismic high-frequency body waves. Combining regional and teleseismic data (as well as making systematic comparisons between theoretical and observed regional surface-wave dispersion curves prior to the inversion) allows us to decrease epistemic uncertainties due to the lack of regional data and poor knowledge of the local velocity structure. Both focal mechanisms are constrained as northwest-trending normal faulting, and the hypocentral depths are confirmed at 28 and 24 km. Finally, in order to study the mainshock rupture process, we apply a kymograph analysis method (an image-processing method commonly used in cell biology to identify the motion of molecular motors, e.g. Mangeol et al., 2016). Here, the kymograph allows us to better identify high-frequency teleseismic P arrivals inside the P coda by tracking both reflected depth-phase and direct P-wave arrivals radiated from secondary sources during the faulting process. Secondary P arrivals are thus identified with a significant azimuthal variation in their arrival times (of up to 4 s), allowing the localization of the source that generated these secondary waves. This analysis shows that the mainshock is probably a mix of at least two events, the second being 20-30 km further northwest along the fault.
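A hedged sketch of cepstral detection of the pP-P delay in a teleseismic P coda and its conversion to depth; the window handling, the near-vertical incidence approximation, and the 6.5 km/s near-source velocity are assumptions, and this is not the exact implementation of Letort et al. (2015).

```python
import numpy as np

def pP_delay_and_depth(p_coda, fs, v_p_km_s=6.5, max_delay_s=20.0):
    """Pick the dominant quefrency peak of the real cepstrum as the pP-P delay and
    convert it to depth assuming a near-vertical pP leg: h ~ v_p * dt / 2."""
    spectrum = np.fft.rfft(p_coda * np.hanning(len(p_coda)))
    log_amp = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_amp)
    quefrency = np.arange(len(cepstrum)) / fs
    mask = (quefrency > 1.0) & (quefrency < max_delay_s)  # ignore the low-quefrency trend
    dt = quefrency[mask][np.argmax(cepstrum[mask])]
    depth_km = v_p_km_s * dt / 2.0
    return dt, depth_km
```

For a 28 km deep source and a 6.5 km/s crust this corresponds to a pP-P delay of roughly 8-9 s, which is the kind of coherent peak sought across azimuths.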
An approach to detect afterslips in giant earthquakes in the normal-mode frequency band
NASA Astrophysics Data System (ADS)
Tanimoto, Toshiro; Ji, Chen; Igarashi, Mitsutsugu
2012-08-01
An approach to detect afterslips in the source process of giant earthquakes is presented in the normal-mode frequency band (0.3-2.0 mHz). The method is designed to avoid a potential systematic bias problem in the determination of earthquake moment by a typical normal-mode approach. The source of bias is the uncertainty in Q (the modal attenuation parameter), which varies by up to about ±10 per cent among published studies. A choice of Q values within this range affects amplitudes in synthetic seismograms significantly if a long time-series of about 5-7 d is used for analysis. We present an alternative time-domain approach that can reduce this problem by focusing on a shorter time span with a length of about 1 d. Application of this technique to four recent giant earthquakes is presented: (1) the Tohoku, Japan, earthquake of 2011 March 11, (2) the 2010 Maule, Chile earthquake, (3) the 2004 Sumatra-Andaman earthquake and (4) the Solomon earthquake of 2007 April 1. The Global Centroid Moment Tensor (GCMT) solution for the Tohoku earthquake explains the normal-mode frequency band quite well. The analysis for the 2010 Chile earthquake indicates that the moment is about 7-10 per cent higher than the moment determined by its GCMT solution, but further analysis shows that there is little evidence of afterslip; the deviation in moment can be explained by an increase of the dip angle from 18° in the GCMT solution to 19°. This may be a simple trade-off problem between the moment and dip angle, but it may also be due to a deeper centroid in the normal-mode frequency band data, as a deeper source could have a steeper dip angle due to changes in geometry of the Benioff zone. For the 2004 Sumatra-Andaman earthquake, the five point-source solution by Tsai et al. explains most of the signals, but a sixth point source with long duration improves the fit to the normal-mode frequency band data. The 2007 Solomon earthquake shows that the high-frequency part of our analysis (above 1 mHz) is compatible with the GCMT solution, but the low-frequency part requires afterslip to explain the increasing amplitude ratios towards lower frequency. The required slip has a moment of about 19 per cent of the GCMT solution and a rise time of 260 s. The total moments of these earthquakes are 5.31 × 10^22 N m (Tohoku), (1.86-1.96) × 10^22 N m (Chile), 1.33 × 10^23 N m (Sumatra) and 1.86 × 10^21 N m (Solomon). The moment magnitudes are 9.08, 8.78-8.79, 9.35 and 8.11, respectively, using Kanamori's original formula relating moment and moment magnitude. However, the trade-off problem between the moment and dip angle can modify these moment estimates by up to about 40-50 per cent and the corresponding magnitudes by about ±0.1.
A Closer Look at Recent Deep Mauna Loa Seismicity
NASA Astrophysics Data System (ADS)
Okubo, P. G.; Wolfe, C. J.; Nakata, J. S.; Koyanagi, S. K.; Uribe, J. O.
2005-12-01
In 2002, Mauna Loa Volcano showed signs of reawakening, some 18 years after its last eruption in 1984. First, in April, a brief flurry of microearthquakes occurred at cataloged depths from 25 to 55 km beneath Mauna Loa's summit caldera. Then in May 2002, after the microearthquake swarm had ended, geodetic monitors across Mauna Loa's summit caldera registered a change, from line-length shortening to extension, interpreted as reinflation of a magma body approximately 4 km beneath the volcano's summit. Accordingly, the Hawaiian Volcano Observatory issued advisories related to Mauna Loa's stirring. In July 2004, HVO began to record deep long-period (LP) earthquakes beneath Mauna Loa. Historically, interpretations of such seismicity patterns have associated LP source volumes with magma chambers and magma pathways. Over a few weeks, this seismicity dramatically jumped to levels of several dozen per day. Between the months of July and December 2004, nearly 2000 Mauna Loa LPs were located between roughly 25 km and greater than 60 km depths by HVO seismic analysts. In late December, these earthquakes rather abruptly ceased, and their levels have remained low ever since. We seek a more detailed understanding of how these earthquakes may factor into Mauna Loa's eruptive framework. Given that their first arrivals are typically emergent, hypocentral estimates using only P-wave first-arrival times of LP earthquakes are often marginally constrained. With such hypocentral estimates, it is difficult to establish clear relationships among the earthquake locations themselves, or between the earthquakes and other processes like crustal extension or magma accumulation or withdrawal. Building on earlier applications to deep earthquakes in Hawaii and LP earthquakes beneath Kilauea, we are reexamining this unprecedented Mauna Loa deep seismicity with waveform correlation and precise earthquake relocation techniques. Work to date reveals that, although the waveform correlation coefficients are low, a significant subset of the deep Mauna Loa LPs can be relocated to improve our understanding of the remarkable 2004 swarm. We are currently seeking stronger resolution to determine whether the waveform data are consistent with the vertically extended, conduit-like source distributions suggested by the catalog locations or, alternatively, whether the events are consistent with one or more narrowly extended point sources.
Iceberg capsize hydrodynamics and the source of glacial earthquakes
NASA Astrophysics Data System (ADS)
Kaluzienski, Lynn; Burton, Justin; Cathles, Mac
2014-03-01
Accelerated warming in the past few decades has led to an increase in dramatic, singular mass loss events from the Greenland and Antarctic ice sheets, such as the catastrophic collapse of ice shelves on the western Antarctic Peninsula, and the calving and subsequent capsize of cubic-kilometer-scale icebergs in Greenland's outlet glaciers. The latter has been identified as the source of long-period seismic events classified as glacial earthquakes, which occur most frequently in Greenland's summer months. The ability to partially monitor polar mass loss through the Global Seismographic Network is quite attractive, yet this goal necessitates an accurate model of a source mechanism for glacial earthquakes. In addition, the detailed relationship between iceberg mass, geometry, and the measured seismic signal is complicated by inherent difficulties in collecting field data from remote, ice-choked fjords. To address this, we use a laboratory-scale model to measure aspects of the post-fracture calving process not observable in nature. Our results show that the combination of mechanical contact forces and hydrodynamic pressure forces generated by the capsize of an iceberg adjacent to a glacier's terminus produces a dipolar strain which is reminiscent of a single-couple seismic source.
Shelly, David R.; Johnson, Kaj M.
2011-01-01
The 2003 magnitude 6.5 San Simeon and the 2004 magnitude 6.0 Parkfield earthquakes induced small, but significant, static stress changes in the lower crust on the central San Andreas fault, where recently detected tectonic tremor sources provide new constraints on deep fault creep processes. We find that these earthquakes affect tremor rates very differently, consistent with their differing transferred static shear stresses. The San Simeon event appears to have cast a "stress shadow" north of Parkfield, where tremor activity was stifled for 3-6 weeks. In contrast, the 2004 Parkfield earthquake dramatically increased tremor activity rates both north and south of Parkfield, allowing us to track deep postseismic slip. Following this event, rates initially increased by up to two orders of magnitude for the relatively shallow tremor sources closest to the rupture, with activity in some sources persisting above background rates for more than a year. We also observe strong depth dependence in tremor recurrence patterns, with shallower sources generally exhibiting larger, less-frequent bursts, possibly signaling a transition toward steady creep with increasing temperature and depth. Copyright 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.
2017-12-01
Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions at each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes. We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, then we examine how our stress drop estimates vary with inter-earthquake distance.
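The final step described above, converting a rupture-length estimate and a seismic moment into a stress drop, can be illustrated with a short sketch. The circular-crack constant (7/16) and the choice of radius as half the rupture length are standard assumptions, not necessarily the exact convention used by the authors; the magnitude and length in the example are illustrative only.

```python
def moment_from_mw(mw):
    """Scalar seismic moment (N·m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def stress_drop_circular(m0, rupture_length_m):
    """Eshelby/Brune stress drop (Pa) for a circular crack with radius = length / 2."""
    radius = rupture_length_m / 2.0          # assumed: end-to-end length -> radius
    return 7.0 * m0 / (16.0 * radius ** 3)

# Illustrative numbers only: an Mw 4.5 event with a 1.5 km rupture length
m0 = moment_from_mw(4.5)
print(f"stress drop ~ {stress_drop_circular(m0, 1500.0) / 1e6:.1f} MPa")   # ~7 MPa
```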
NASA Astrophysics Data System (ADS)
Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe
2017-01-01
A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.
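The consistency check described here, whether an event's rupture area falls within one standard deviation of a moment-area scaling line, can be sketched as follows. The coefficient, exponent, and example numbers are placeholders, not the published values of the scaling relationship; only the log-space comparison against σ = 0.14 mirrors the text.

```python
import numpy as np

# Hypothetical second-stage relation S [km^2] = A * (M0 [N·m]) ** B; the coefficient
# and exponent are placeholders, not the published values. Sigma is from the text.
A_COEF, B_EXP, SIGMA_LOG10 = 8.7e-11, 2.0 / 3.0, 0.14

def within_scaling(m0_nm, rupture_area_km2):
    """True if the event sits within one standard deviation of the scaling line
    in log10(area) space, plus the predicted area itself."""
    predicted = A_COEF * m0_nm ** B_EXP
    misfit = abs(np.log10(rupture_area_km2) - np.log10(predicted))
    return misfit <= SIGMA_LOG10, predicted

ok, pred = within_scaling(4.5e19, 1200.0)   # illustrative Mw ~7-class numbers
print(ok, f"predicted area ~ {pred:.0f} km^2")
```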
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
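A minimal sketch of the stack-and-residual idea behind such a spectral decomposition is given below, for a single frequency: log amplitudes are alternately averaged over events and stations until the event (source) and station (receiver) terms converge. This illustrates the general technique only, not the authors' implementation, and it omits the path term and the formal uncertainty estimation.

```python
import numpy as np

def decompose(log_spec, n_iter=20):
    """Iteratively split a matrix of log10 spectral amplitudes (events x stations,
    single frequency) into event and station terms; NaN marks missing records.
    Minimal sketch of the stack-and-residual idea, without path terms or errors."""
    ev = np.zeros(log_spec.shape[0])
    st = np.zeros(log_spec.shape[1])
    for _ in range(n_iter):
        st = np.nanmean(log_spec - ev[:, None], axis=0)   # station (receiver) terms
        ev = np.nanmean(log_spec - st[None, :], axis=1)   # event (source) terms
    shift = np.nanmean(st)          # resolve the constant trade-off between terms
    return ev + shift, st - shift

# Example: 100 events recorded on 20 stations, with 30% of the records missing
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 20))
data[rng.random(data.shape) < 0.3] = np.nan
source_terms, receiver_terms = decompose(data)
```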
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Yamashina, T.; Inoue, H.; Toda, S.
2007-12-01
On March 6, 2007, an earthquake doublet occurred around Lake Singkarak, central Sumatra, Indonesia. An earthquake with magnitude (Mw) 6.4 at 03:49 was followed two hours later (05:49) by a similar-size event (Mw 6.3). Lake Singkarak is located between the Sianok and Sumani fault segments of the Sumatran fault system, and is a pull-apart basin formed at the segment boundary. We investigate the source processes of the earthquakes using waveform data obtained from JISNET, a broad-band seismograph network in Indonesia. We first estimate the centroid source locations and focal mechanisms by waveform inversion carried out in the frequency domain. Since the stations are distributed almost linearly in the NW-SE direction, coincident with the strike of the Sumatran fault, the estimated centroid locations are not well resolved, especially in the direction orthogonal to the NW-SE trend. If we assume that these earthquakes occurred along the Sumatran fault, the first earthquake is located on the Sumani segment below Lake Singkarak and the second event is located a few tens of kilometers north of the first event, on the Sianok segment. The focal mechanisms of both events point to almost identical right-lateral strike-slip faulting on vertical planes, which is consistent with the geometry of the Sumatran fault system. We next investigate the rupture initiation points using the particle motions of the P-waves of these earthquakes observed at station PPI, which is located about 20 km north of Lake Singkarak. The initiation point of the first event is estimated to be north of the lake, which corresponds to the northern end of the Sumani segment. The initiation point of the second event is estimated at the southern end of the Sianok segment. The observed maximum amplitudes at stations located SE of the source region are larger for the first event than for the second one. On the other hand, the amplitudes at station BSI, located NW of the source region, are larger for the second event than for the first one. Since the magnitudes, focal mechanisms, and source locations are almost identical for the two events, the larger amplitudes for the second event at BSI may be due to the effect of rupture directivity. Accordingly, we obtain the following image of the source processes of the earthquake doublet: the first event initiated at the segment boundary and its rupture propagated along the Sumani segment in the SW direction. Then the second event, which may have been triggered by the first event, initiated at a location close to the hypocenter of the first event, but its rupture propagated along the Sianok segment in the NE direction, opposite to the first event. The previous significant seismic activity along the Sianok and Sumani segments occurred in 1926, and was also an earthquake doublet with magnitudes similar to those in 2007. If we assume that the time interval between the earthquake doublets in 1926 and 2007 represents the average recurrence interval and that the typical slip in individual earthquakes is 1 m, we obtain approximately 1 cm/year for the slip rate of the fault segments. Geological features indicate that Lake Singkarak is no more than a few million years old (Sieh and Natawidjaja, 2000, JGR). If the pull-apart basin has been created over the past few million years at the estimated slip rate of the segments, we obtain roughly 20 km of total offset on the Sianok and Sumani segments, which is consistent with the observed offset.
Our study supports the model of Sieh and Natawidjaja (2000) that the basin continues to be created by dextral slip on the en echelon Sumani and Sianok segments.
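The back-of-the-envelope numbers quoted above follow directly from the stated assumptions, as the short calculation below shows (the 2 Myr basin age is taken here as a representative value for "a few million years").

```python
slip_per_event_m = 1.0            # assumed characteristic slip per doublet event
recurrence_yr = 2007 - 1926       # interval between the two observed doublets
slip_rate_m_yr = slip_per_event_m / recurrence_yr      # ~0.012 m/yr, i.e. ~1 cm/yr
basin_age_yr = 2.0e6              # representative of "a few million years"
total_offset_km = slip_rate_m_yr * basin_age_yr / 1000.0   # ~25 km, of order 20 km
print(f"{slip_rate_m_yr * 100:.1f} cm/yr, ~{total_offset_km:.0f} km total offset")
```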
Monochromatic body waves excited by great subduction zone earthquakes
NASA Astrophysics Data System (ADS)
Ihmlé, Pierre F.; Madariaga, Raúl
Large quasi-monochromatic body waves were excited by the 1995 Chile Mw=8.1 and by the 1994 Kurile Mw=8.3 events. They are observed on vertical/radial component seismograms following the direct P and Pdiff arrivals, at all azimuths. We devise a slant stack algorithm to characterize the source of the oscillations. This technique aims at locating near-source isotropic scatterers using broadband data from global networks. For both events, we find that the oscillations emanate from the trench. We show that these monochromatic waves are due to localized oscillations of the water column. Their period corresponds to the gravest 1D mode of a water layer for vertically traveling compressional waves. We suggest that these monochromatic body waves may yield additional constraints on the source process of great subduction zone earthquakes.
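The period of the gravest 1D water-layer mode referred to here is the quarter-wavelength resonance of the water column, assuming a free sea surface and an approximately rigid seafloor; a short sketch with an illustrative trench-like depth follows.

```python
def water_column_period(depth_m, vp_water=1500.0):
    """Fundamental (gravest) 1D resonance period of a water layer for vertically
    travelling P waves: a quarter-wavelength mode, T = 4h / Vp, assuming a free
    sea surface and an approximately rigid seafloor."""
    return 4.0 * depth_m / vp_water

print(f"T ~ {water_column_period(5000.0):.1f} s for 5 km of water")   # ~13 s
```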
A rapid estimation of near field tsunami run-up
Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime
2015-01-01
Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.
Magma-Tectonic Interactions in the Main Ethiopian Rift; Insights into Rifting Processes
NASA Astrophysics Data System (ADS)
Greenfield, T.; Keir, D.; Tessema, T.; Lloyd, R.; Biggs, J.; Ayele, A.; Kendall, J. M.
2017-12-01
We report observations made around the Bora-Tulu Moye volcanic field, in the Main Ethiopian Rift (MER). A network of seismometers deployed around the volcano for one and a half years reveals the recent state of the volcano. Accurate earthquake locations and focal mechanisms are combined with surface deformation and mapping of faults, fissures and geothermally active areas to reveal the interaction between magmatism and intra-rift faulting. More than 1000 earthquakes are detected and located, making the Bora-Tulu Moye volcanic field one of the most seismically active regions of the MER. Earthquakes are located at depths of less than 5 km below the surface and range in magnitude between 1.5 and 3.5. Surface deformation of Bora-Tulu Moye is observed using satellite-based radar interferometry (InSAR) recorded before and during the seismic deployment. Since 2004, deformation has oscillated between uplift and subsidence centered at the same spatial location but different depths. We constrain the source of the uplift to be at 7 km depth, while the source of the subsidence is shallower. Micro-earthquake locations reveal that earthquakes are located around the edge of the observed deformation and record the activation of normal faults oriented at 025°. The spatial link between surface deformation and brittle failure suggests that significant hydrothermal circulation driven by an inflating shallow heat source is inducing brittle failure. Elsewhere, seismicity is focused in areas of significant surface alteration from hydrothermal processes. We use shear-wave splitting of local earthquakes to image the stress state of the volcano. A combination of rift-parallel and rift-oblique fast directions is observed, indicating the volcano has a significant influence on the crustal stresses. Volcanic activity around Bora-Tulu Moye has migrated eastwards over time, closer to the intra-rift fault system, the Wonji Fault Belt. How and why this occurs relates to changes in the melt supply to the upper crust from depth and has implications for the early stages of rift evolution and for volcanic and tectonic hazard in Ethiopia and rifts generally.
NASA Astrophysics Data System (ADS)
Suzuki, K.; Kamiya, S.; Takahashi, N.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) installed DONET (Dense Oceanfloor Network System for Earthquakes and Tsunamis) off the Kii Peninsula, southwest of Japan, to monitor earthquakes and tsunamis. The stations of DONET1, which are distributed in Kumano-nada, and DONET2, which are distributed off Muroto, were installed by August 2011 and April 2016, respectively. After the installation of all 51 stations, DONET was transferred to the National Research Institute for Earth Science and Disaster Resilience (NIED), and NIED and JAMSTEC have collaborated in the operation of DONET since April 2016. To investigate the seismicity around the source areas of the 1946 Nankai and the 1944 Tonankai earthquakes, we detected earthquakes from the records of the broadband seismometers installed in DONET. Because DONET stations are far from land stations, we can detect smaller earthquakes than is possible using land stations alone. Monitoring the spatio-temporal change in seismicity is important for understanding the stress state and the seismogenic mechanism. In this study we aim to evaluate the seismicity around the source areas of the Nankai and Tonankai earthquakes using our earthquake catalogue. The frequency-magnitude relationships of earthquakes in the DONET1 and DONET2 areas have an almost constant slope of about -1 for earthquakes of ML larger than 1.5 and 2.5, respectively, satisfying the Gutenberg-Richter law, while the slope for smaller earthquakes approaches 0, reflecting the detection limits. While most of the earthquakes occurred in the aftershock area of the 2004 off the Kii Peninsula earthquakes, very limited activity was detected in the source regions of the Nankai and Tonankai earthquakes, except for the large earthquake (MJMA = 6.5) on 1 April 2016 and its aftershocks. We will evaluate the detection limit of the catalogue in more detail and investigate the spatio-temporal change in seismicity as more data accumulate.
NASA Astrophysics Data System (ADS)
Pararas-Carayannis, George
2014-12-01
The great Tohoku-Oki earthquake of March 11, 2011 generated a very destructive and anomalously high tsunami. To understand its source mechanism, an examination was undertaken of the seismotectonics of the region and of the earthquake's focal mechanism, energy release, rupture patterns, and the spatial and temporal sequencing and clustering of major aftershocks. It was determined that the great tsunami resulted from a combination of crustal deformations of the ocean floor due to up-thrust tectonic motions, augmented by additional uplift due to the quake's slow and long rupturing process, as well as by large coseismic lateral movements which compressed and deformed the compacted sediments along the accretionary prism of the overriding plate. The deformation occurred randomly and non-uniformly along parallel normal faults and along faults oblique and en echelon to the earthquake's overall rupture direction, the latter failing in a sequential bookshelf manner with variable slip angles. As the 1992 Nicaragua and the 2004 Sumatra earthquakes demonstrated, such bookshelf failures of sedimentary layers can contribute to anomalously high tsunamis. As with the 1896 tsunami, additional ocean floor deformation and uplift of the sediments were responsible for the higher waves generated by the 2011 earthquake. The efficiency of tsunami generation was greater along the shallow eastern segment of the fault off Miyagi Prefecture, where most of the earthquake's energy release and deformation occurred, while the segment off Ibaraki Prefecture, where the rupture process was rapid, released less seismic energy, produced less compaction and deformation of sedimentary layers, and thus generated a tsunami of lesser offshore height. The greater tsunamigenic efficiency of the 2011 earthquake and the high degree of the tsunami's destructiveness along Honshu's coastlines resulted from vertical crustal displacements of more than 10 m due to up-thrust faulting and from lateral compression and folding of sedimentary layers in an east-southeast direction, which contributed additional uplift estimated at about 7 m, mainly along the leading segment of the accretionary prism of the overriding tectonic plate.
NASA Astrophysics Data System (ADS)
Li, Qi; Tan, Kai; Wang, Dong Zhen; Zhao, Bin; Zhang, Rui; Li, Yu; Qi, Yu Jie
2018-02-01
The spatio-temporal slip distribution of the earthquake that occurred on 8 August 2017 in Jiuzhaigou, China, was estimated from teleseismic body waves and near-field Global Navigation Satellite System (GNSS) data (coseismic displacements and high-rate GPS data) based on a finite fault model. Compared with the inversion results from the teleseismic body waves alone, the near-field GNSS data better constrain the rupture area, the maximum slip, the source time function, and the surface rupture. The results show that the maximum slip of the earthquake approaches 1.4 m, the scalar seismic moment is 8.0 × 10^18 N·m (Mw ≈ 6.5), and the centroid depth is 15 km. The slip is dominantly left-lateral strike-slip, and it is initially inferred that the seismogenic fault is the south branch of the Tazang fault or a previously undetected fault, a NW-trending left-lateral strike-slip fault that belongs to one of the tail structures at the easternmost end of the eastern Kunlun fault zone. The earthquake rupture is mainly concentrated at depths of 5-15 km, which results in the complete rupture of the seismic gap left by the previous four earthquakes with magnitudes > 6.0 in 1973 and 1976. Therefore, the possibility of a strong aftershock on the Huya fault is low. The source duration is 30 s and there are two major ruptures. The main rupture occurs within the first 10 s, about 4 s after the earthquake onset; the second rupture peak arrives at 17 s. In addition, a Coulomb stress analysis shows that the epicenter of the earthquake is located in an area where the static Coulomb stress change increased because of the 12 May 2008 Mw 7.9 Wenchuan, China, earthquake. Therefore, the Wenchuan earthquake promoted the occurrence of the 8 August 2017 Jiuzhaigou earthquake.
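The quoted moment magnitude can be checked directly from the scalar seismic moment using the standard Hanks-Kanamori relation, as in the short sketch below.

```python
import math

def mw_from_moment(m0_nm):
    """Moment magnitude from scalar seismic moment in N·m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(f"Mw = {mw_from_moment(8.0e18):.1f}")   # ~6.5, matching the quoted value
```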
Tilt precursors before earthquakes on the San Andreas fault, California
Johnston, M.J.S.; Mortensen, C.E.
1974-01-01
An array of 14 biaxial shallow-borehole tiltmeters (at 10^-7 radian sensitivity) has been installed along 85 kilometers of the San Andreas fault during the past year. Earthquake-related changes in tilt have been simultaneously observed on up to four independent instruments. At earthquake distances greater than 10 earthquake source dimensions, there are few clear indications of tilt change. For the four instruments with the longest records (>10 months), 26 earthquakes have occurred since July 1973 with at least one instrument closer than 10 source dimensions and 8 earthquakes with more than one instrument within that distance. Precursors in tilt direction have been observed before more than 10 earthquakes or groups of earthquakes, and no similar effect has yet been seen without the occurrence of an earthquake.
Earthquakes and strain in subhorizontal slabs
NASA Astrophysics Data System (ADS)
Brudzinski, Michael R.; Chen, Wang-Ping
2005-08-01
Using an extensive database of fault plane solutions and precise locations of hypocenters, we show that the classic patterns of downdip extension (DDE) or downdip compression (DDC) in subduction zones deteriorate when the dip of the slab is less than about 20°. This result is depth-independent, demonstrated by both intermediate-focus (depths from 70 to 300 km) and deep-focus (depths greater than 300 km) earthquakes. The absence of pattern in seismic strain in subhorizontal slabs also occurs locally over scales of about 10 km, as evident from a detailed analysis of a large (Mw 7.1) earthquake sequence beneath Fiji. Following the paradigm that a uniform strain of DDE/DDC results from sinking of the cold, dense slab as it encounters resistance from the highly viscous mantle at depth, breakdown of DDE/DDC in subhorizontal slabs reflects waning negative buoyancy ("slab pull") in the downdip direction. Our results place a constraint on the magnitude of slab pull that is required to dominate over localized sources of stress and to align seismic strain release in dipping slabs. Under the condition of a vanishing slab pull, eliminating the only obvious source of regional stress, the abundance of earthquakes in subhorizontal slabs indicates that a locally variable source of stress is both necessary and sufficient to sustain the accumulation of elastic strain required to generate intermediate- and deep-focus seismicity. Evidence is growing that the process of seismogenesis under high pressures, including localized sources of stress, is tied to the presence of petrologic anomalies.
Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.
2012-12-01
Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of several strong ground motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that the SMGA size follows an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be estimated from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the positioning of the SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, with the Nojima, Suma, and Suwayama segments each hosting one SMGA (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which demonstrates the applicability of that relationship. Two of the SMGAs are located in the Miyagi-Oki segment, and the other two are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all the SMGAs correspond to the historical source areas of the 1930s. These SMGAs do not overlap the huge-slip area in the shallower part of the source fault that was estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation in the 0.1-10 s period range. Information on fault segmentation in the subduction zone, or on historical earthquake source areas, is therefore also applicable to constructing SMGA settings for strong ground motion prediction of future earthquakes.
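As a rough illustration of how an SMGA area might be assigned from a moment-area scaling relation of the kind described here, the sketch below uses a placeholder coefficient and an assumed 2/3 exponent; it is not the published relation, only the general recipe of scaling a total SMGA area from the seismic moment and dividing it into patches.

```python
# Hypothetical scaling Sa [km^2] = C * (M0 [N·m]) ** (2/3); C is a placeholder,
# not the published coefficient for subduction plate-boundary earthquakes.
C_SMGA = 2.0e-11

def smga_patches(m0_nm, n_patches=4):
    """Total SMGA area from a placeholder moment-area scaling, split into
    equal-area patches (a toy version of the characterized-source recipe)."""
    total_km2 = C_SMGA * m0_nm ** (2.0 / 3.0)
    return total_km2, total_km2 / n_patches

total, patch = smga_patches(4.0e21)   # illustrative Mw ~8.3-class seismic moment
print(f"total SMGA ~ {total:.0f} km^2 in 4 patches of ~{patch:.0f} km^2 each")
```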
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr; Türker, Tuğba, E-mail: tturker@ktu.edu.tr
The aim of this study was to determine the earthquake hazard for different seismic sources in Ağrı and its vicinity using the exponential distribution method. A homogeneous earthquake catalog for the instrumental period 1900-2015, containing 456 earthquakes for Ağrı and vicinity, has been examined. The catalog was compiled from several sources, including Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and vicinity were divided into 7 different seismic source regions based on the epicentral distribution of earthquakes in the instrumental period, focal mechanism solutions, and existing tectonic structures. The average magnitude value was calculated for the specified magnitude ranges in each of the 7 seismic source regions. For each region, the largest difference between the observed and expected cumulative probabilities over the defined magnitude classes was determined. The recurrence periods and the annual numbers of earthquake occurrences were then estimated for earthquakes in Ağrı and vicinity. As a result, occurrence probabilities of earthquakes of magnitude 3.2 and above were determined for the 7 seismic source regions: greater than magnitude 6.7 for Region 1, greater than 4.7 for Region 2, greater than 5.2 for Region 3, greater than 6.2 for Region 4, greater than 5.7 for Region 5, greater than 7.2 for Region 6, and greater than 6.2 for Region 7. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is estimated at magnitude 7, in Region 6. For Region 6, the estimated recurrence intervals of future earthquakes are 158 years for magnitude 7.2, 70 years for 6.7, 31 years for 6.2, 13 years for 5.7, and 6 years for 5.2.
Geist, Eric L.
2014-01-01
Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
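For readers unfamiliar with ETAS, the conditional intensity that underlies this kind of branching-process fit can be written down in a few lines; the sketch below uses illustrative parameter values, not the ones estimated from the tsunami or CMT catalogs.

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu, K, alpha, c, p, m_min):
    """ETAS conditional intensity at time t (events per day): a background rate mu
    plus modified-Omori aftershock terms triggered by every earlier event.
    Parameter values in the example are illustrative, not fitted ones."""
    rate = mu
    for ti, mi in zip(event_times, event_mags):
        if ti < t:
            rate += K * np.exp(alpha * (mi - m_min)) * (t - ti + c) ** (-p)
    return rate

# Rate one day after a single M 8.0 causative event, with completeness M_min = 7.0
print(etas_rate(1.0, [0.0], [8.0], mu=0.02, K=0.05, alpha=1.0, c=0.01, p=1.1))
```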
NASA Astrophysics Data System (ADS)
Bai, L.; Mori, J. J.
2016-12-01
The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences located in the western, central, and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma Arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.
NASA Astrophysics Data System (ADS)
Gümüş, Ayla; Yalım, Hüseyin Ali
2018-02-01
Radon emanation occurs in all rocks and soil containing uranium. Anomalies in radon concentration before earthquakes have been observed along fault lines and at geothermal sources, uranium deposits, and areas of volcanic activity. The aim of this study is to investigate the relationship between radon anomalies in water resources and the radial distances of those sources from the earthquake epicenter. For this purpose, radon concentrations of 9 different deep water sources near the Akşehir fault line were determined by taking monthly samples for two years. The relationship between the radon anomalies and the radial distance of each source from the earthquake epicenter was then obtained.
Characteristics of broadband slow earthquakes explained by a Brownian model
NASA Astrophysics Data System (ADS)
Ide, S.; Takeo, A.
2017-12-01
The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of a broadband slow earthquake may not yet be widely accepted, most recent observations are consistent with it. Here we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, perturbed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why we can obtain VLF signals by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as its variance is finite, as supported by the central limit theorem. Recent observations suggest that tremors and LFEs are spatially characteristic, rather than random (Rubin and Armbruster, 2013; Bostock et al., 2015). Since even spatially characteristic sources must be activated randomly in time, the moment release from these sources is compatible with the fluctuation in the BSE model. Therefore, the BSE model contains, as a special case, the model of Gomberg et al. (2016), which suggests that clusters of LFEs produce the VLF signals.
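A toy realization of the random-walk idea behind the BSE model is sketched below: a characteristic source size performs a Gaussian random walk, reflected at zero and capped at a maximum size that stands in for the spatial limit of the tremor/SSE zone. Taking the moment rate as proportional to the square of that size is an illustrative assumption here, not the exact prescription of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bse(n_steps=10_000, dt=1.0, sigma=1.0, l_max=100.0):
    """Toy Brownian slow-earthquake run: a characteristic source size l performs a
    Gaussian random walk, reflected at zero and capped at l_max (standing in for
    the spatial limit of the tremor/SSE zone). The moment rate is taken to be
    proportional to l**2, an illustrative assumption rather than the published
    prescription."""
    l, moment_rate = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        l = min(l_max, abs(l + sigma * np.sqrt(dt) * rng.standard_normal()))
        moment_rate[i] = l ** 2
    return moment_rate

rates = simulate_bse()
print(rates.mean())   # long runs tend toward a roughly constant mean moment rate
```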
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Glennie, C. L.; Brooks, B. A.; Hauser, D. L.; Ericksen, T.; Boatwright, J.; Rosinski, A.; Dawson, T. E.; Mccrink, T. P.; Mardock, D. K.; Hoirup, D. F., Jr.; Bray, J.
2014-12-01
Pre-earthquake airborne LiDAR coverage exists for the area impacted by the M 6.0 South Napa earthquake. The Napa watershed data set was acquired in 2003, and data sets were acquired in other portions of the impacted area in 2007, 2010 and 2014. The pre-earthquake data are being assessed and are of variable quality and point density. Following the earthquake, a coalition was formed to enable rapid acquisition of post-earthquake LiDAR. Coordination of this coalition took place through the California Earthquake Clearinghouse; consequently, a commercial contract was organized by Department of Water Resources that allowed for the main fault rupture and damaged Browns Valley area to be covered 16 days after the earthquake at a density of 20 points per square meter over a 20 square kilometer area. Along with the airborne LiDAR, aerial imagery was acquired and will be processed to form an orthomosaic using the LiDAR-derived DEM. The 'Phase I' airborne data were acquired using an Optech Orion M300 scanner, an Applanix 200 GPS-IMU, and a DiMac ultralight medium format camera by Towill. These new data, once delivered, will be differenced against the pre-earthquake data sets using a newly developed algorithm for point cloud matching, which is improved over prior methods by accounting for scan geometry error sources. Proposed additional 'Phase II' coverage would allow repeat-pass, post-earthquake coverage of the same area of interest as in Phase I, as well as an addition of up to 4,150 square kilometers that would potentially allow for differential LiDAR assessment of levee and bridge impacts at a greater distance from the earthquake source. Levee damage was reported up to 30 km away from the epicenter, and proposed LiDAR coverage would extend up to 50 km away and cover important critical lifeline infrastructure in the western Sacramento River delta, as well as providing full post-earthquake repeat-pass coverage of the Napa watershed to study transient deformation.
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
High Attenuation Rate for Shallow, Small Earthquakes in Japan
NASA Astrophysics Data System (ADS)
Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe
2017-09-01
We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.
NASA Astrophysics Data System (ADS)
Yin, Lucy; Andrews, Jennifer; Heaton, Thomas
2018-05-01
Earthquake parameter estimation using nearest neighbor searches over a large database of observations can yield reliable predictions. However, in real-time Earthquake Early Warning (EEW) applications, the accuracy gained from a large database is penalized by a significant processing delay. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion quantities, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
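The speed-up described here comes from replacing an exhaustive scan of the feature database with a KD-tree query; a minimal sketch using scipy's cKDTree is shown below. The feature dimensionality, database size, and the k-nearest-neighbour averaging are illustrative choices, not the Gutenberg Algorithm's actual configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical database: each row is a feature vector of filter-bank amplitudes
# for one past record; pga holds the corresponding observed peak acceleration.
rng = np.random.default_rng(1)
features = rng.normal(size=(50_000, 9))
pga = np.abs(rng.normal(size=50_000))

tree = cKDTree(features)                      # built once, offline

def predict_pga(query_features, k=30):
    """Nearest-neighbour PGA estimate: average the k closest database records.
    Querying the KD tree is ~O(log N) per lookup versus O(N) for a brute scan."""
    _, idx = tree.query(query_features, k=k)
    return pga[idx].mean()

print(predict_pga(rng.normal(size=9)))
```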
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
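To give a flavour of the pymc3-based model fitting that BEAT builds on, the toy below infers a single slip value from synthetic static displacements through a known linear Green's function. This is a generic hedged sketch, not BEAT's API: the Green's-function row, priors, and noise level are all made up for illustration.

```python
import numpy as np
import pymc3 as pm

# Toy problem (not BEAT's API): recover a single slip value from noisy synthetic
# static displacements, given a known (made-up) linear Green's-function row g.
g = np.array([0.8, 0.5, 0.3, 0.2])
obs = 1.2 * g + np.random.default_rng(2).normal(0.0, 0.05, size=4)  # true slip 1.2 m

with pm.Model():
    slip = pm.Uniform("slip", lower=0.0, upper=5.0)            # prior on slip (m)
    sigma = pm.HalfNormal("sigma", sigma=0.1)                   # data-error scale
    pm.Normal("d", mu=slip * g, sigma=sigma, observed=obs)      # likelihood
    trace = pm.sample(1000, tune=1000, cores=1, progressbar=False,
                      return_inferencedata=False)

print(trace["slip"].mean())   # posterior mean should be close to 1.2 m
```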
Properties of the seismic nucleation phase
Beroza, G.C.; Ellsworth, W.L.
1996-01-01
Near-source observations show that earthquakes begin abruptly at the P-wave arrival, but that this beginning is weak, with a low moment rate relative to the rest of the main shock. We term this initial phase of low moment rate the seismic nucleation phase. We have observed the seismic nucleation phase for a set of 48 earthquakes ranging in magnitude from 1.1-8.1. The size and duration of the seismic nucleation phase scale with the total seismic moment of the earthquake, suggesting that the process responsible for the seismic nucleation phase carries information about the eventual size of the earthquake. The seismic nucleation phase is characteristically followed by quadratic growth in the moment rate, consistent with self-similar rupture at constant stress drop. In this paper we quantify the properties of the seismic nucleation phase and offer several possible explanations for it.
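The link between quadratic moment-rate growth and self-similar, constant-stress-drop rupture can be stated in one line (constants of order unity omitted):

```latex
% Circular rupture expanding at constant velocity v_r with constant stress drop:
% slip scales with the current radius, so
M_0(t) \;\propto\; \Delta\sigma\,(v_r t)^3
\quad\Longrightarrow\quad
\dot{M}_0(t) \;\propto\; \Delta\sigma\, v_r^3\, t^2 .
```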
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Yagi, Y.
2017-12-01
A seismic source model for the Mw 8.1 2017 Chiapas, Mexico, earthquake was constructed by kinematic waveform inversion using globally observed teleseismic waveforms, suggesting that the earthquake was a normal-faulting event on a steeply dipping plane, with the major slip concentrated around a relatively shallow depth of 28 km. The modeled rupture evolution showed unilateral, downdip propagation northwestward from the hypocenter, and the downdip width of the main rupture was restricted to less than 30 km below the slab interface, suggesting that the downdip extensional stresses due to the slab bending were the primary cause of the earthquake. The rupture front abruptly decelerated at the northwestern end of the main rupture where it intersected the subducting Tehuantepec Fracture Zone, suggesting that the fracture zone may have inhibited further rupture propagation.
Revisiting Notable Earthquakes and Seismic Patterns of the Past Decade in Alaska
NASA Astrophysics Data System (ADS)
Ruppert, N. A.; Macpherson, K. A.; Holtkamp, S. G.
2015-12-01
Alaska, the most seismically active region of the United States, has produced five earthquakes with magnitudes greater than seven since 2005. The 2007 M7.2 and 2013 M7.0 Andreanof Islands earthquakes were representative of the most common source of significant seismic activity in the region, the Alaska-Aleutian megathrust. The 2013 M7.5 Craig earthquake, a strike-slip event on the Queen-Charlotte fault, occurred along the transform plate boundary in southeast Alaska. The largest earthquake of the past decade, the 2014 M7.9 Little Sitkin event in the western Aleutians, occurred at an intermediate depth and ruptured along a gently dipping fault through nearly the entire thickness of the subducted Pacific plate. Along with these major earthquakes, the Alaska Earthquake Center reported over 250,000 seismic events in the state over the last decade, and its earthquake catalog surpassed 500,000 events in mid-2015. Improvements in monitoring networks and processing techniques allowed an unprecedented glimpse into earthquake patterns in Alaska. Some notable recent earthquake sequences include the 2008 Kasatochi eruption, the 2006-2008 M6+ crustal earthquakes in the central and western Aleutians, the 2010 and 2015 Bering Sea earthquakes, the 2014 Noatak swarm, and the 2014 Minto earthquake sequence. In 2013, the Earthscope USArray project made its way into Alaska. There are now almost 40 new Transportable Array stations in Alaska along with over 20 upgraded sites. This project is changing the earthquake-monitoring scene in Alaska, lowering magnitude of completeness across large, newly instrumented parts of the state.
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Faults are among the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. According to the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak and Semarang faults are the three closest earthquake sources surrounding Semarang. The ground motion from those three earthquake sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, at least 40 meters high, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis focused on predicting the deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analyses of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw. Modified time histories were used because recorded time-history data from those three fault sources are inadequate. The sensitivity of a building to an earthquake can be assessed by comparing the surface response spectra calculated using the seismic code with those calculated from acceleration time histories of a specific earthquake event. If the code-based surface response spectra are greater than those calculated from the acceleration time histories, the structure will be stable enough to resist the earthquake forces.
Earthquake Forecasting System in Italy
NASA Astrophysics Data System (ADS)
Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.
2017-12-01
In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP), an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments. In the OEF system, the two most popular short-term models are used: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
NASA Astrophysics Data System (ADS)
Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang
2017-05-01
In this study, an empirical transfer function (ETF), defined as the difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained with a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The baseline stochastic point-source simulations are treated as a reference rock-site condition so that site effects can be isolated. The parameters of the stochastic point-source approach related to source and path effects are taken from previous, well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for the synthetic motions remains widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF are similar to those found in previous studies, which indicates that the base synthetic model is suitable as the reference rock condition for the Taipei Basin. The dominant-frequency contour corresponds to the shape of the bottom of the Sungshan formation, i.e., the top of the Tertiary basement. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by the amplification contour at 0.5 Hz, whereas the high-amplification area shifts to the basin's edge at 2.0 Hz. Three target earthquakes with different source conditions relative to the ETF database, namely shallow small-magnitude, shallow and relatively large-magnitude, and deep small-magnitude events, are tested to verify the site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with larger magnitudes, but is not suitable for deep earthquakes. Finally, one of the most significant shallow large-magnitude earthquakes, the 1999 Chi-Chi, Taiwan, earthquake, is examined. A finite-fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process of the Chi-Chi earthquake, and the ETF-based site-correction function is applied to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction agrees well with the observations in both the time and frequency domains, and the prediction level matches that of the site-corrected ground motion prediction equation.
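As a rough illustration of how such an empirical transfer function can be formed, the sketch below computes a smoothed ratio of Fourier amplitude spectra between an observed and a synthetic record (the logarithm of this ratio is the log-spectral difference). The function name, smoothing choice, and division guard are assumptions for illustration, not the processing used in the study.

```python
import numpy as np

def empirical_transfer_function(obs, syn, dt, smooth_win=5):
    """Empirical transfer function as a smoothed ratio of Fourier amplitude spectra.

    obs, syn   : observed and synthetic acceleration traces (equal length, same dt)
    dt         : sample interval (s)
    smooth_win : moving-average smoother length in samples (assumed value)
    Returns (frequencies, amplitude ratio); np.log10 of the ratio gives the spectral difference.
    """
    n = len(obs)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec_obs = np.abs(np.fft.rfft(obs))
    spec_syn = np.abs(np.fft.rfft(syn))
    kernel = np.ones(smooth_win) / smooth_win
    spec_obs = np.convolve(spec_obs, kernel, mode="same")
    spec_syn = np.convolve(spec_syn, kernel, mode="same")
    etf = spec_obs / np.maximum(spec_syn, 1e-12)  # guard against division by zero
    return freqs, etf
```

Multiplying a reference-rock synthetic spectrum by this function (or adding the log difference) is the site-correction step described above.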
Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction
NASA Astrophysics Data System (ADS)
Allen, A. A.; Chen, X.
2017-12-01
The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observation of rupture properties is difficult because most of these earthquakes are small and offshore instrumentation is sparse. The Cascadia Initiative (CI) project provides an opportunity to examine these earthquakes in detail. Here we focus on the transform plate boundary fault located in the MTJ and measure source parameters of Mw ≥ 4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from the apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity from the stacked spectral ratios. EGFs are selected based on their location relative to the mainshock, as well as the magnitude difference from the mainshock. For the transform fault, we first examine the largest earthquake recorded during the Year 4 CI array, a Mw 5.72 event that occurred in January 2015, and select two EGFs, a Mw 1.75 and a Mw 1.73 event located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 s and a rupture length of about 2.78 km. The earthquake ruptured towards the west along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event deviates from a pure Brune source model following the definition of Uchide and Imanishi [2016], likely due to near-field recordings of rupture complexity. We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and will analyze three other Mw ≥ 4 earthquakes within the array.
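A common way to extract corner frequencies from a stacked EGF spectral ratio is to fit it with the ratio of two Brune omega-square spectra. The sketch below, assuming SciPy is available, shows such a fit; the function names and starting values are illustrative, and the study's actual fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectral_ratio(f, moment_ratio, fc_main, fc_egf):
    """Ratio of two Brune omega-square displacement spectra (mainshock / EGF)."""
    return moment_ratio * (1.0 + (f / fc_egf) ** 2) / (1.0 + (f / fc_main) ** 2)

def fit_corner_frequencies(freqs, ratio):
    """Fit the moment ratio and the two corner frequencies to an observed spectral ratio."""
    p0 = [ratio[0], 1.0, 10.0]  # illustrative starting values (ratio, fc_main Hz, fc_egf Hz)
    popt, _ = curve_fit(brune_spectral_ratio, freqs, ratio, p0=p0, maxfev=10000)
    return dict(moment_ratio=popt[0], fc_main=popt[1], fc_egf=popt[2])
```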
Overview of seismic potential in the central and eastern United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweig, E.S.
1995-12-31
The seismic potential of any region can be framed in terms of the locations of source zones, the frequency of earthquake occurrence for each source, and the maximum size of earthquake that can be expected from each source. As delineated by modern and historical seismicity, the most important seismic source zones affecting the eastern United States include the New Madrid and Wabash Valley seismic zones of the central U.S., the southern Appalachians and Charleston, South Carolina, areas in the southeast, and the northern Appalachians and Adirondacks in the northeast. The most prominent of these in terms of current seismicity and historical seismic moment release is the New Madrid seismic zone, which produced three earthquakes of moment magnitude ≥ 8 in 1811 and 1812. The frequency of earthquake recurrence can be examined using the instrumental record, the historical record, and the geological record. Each record covers a unique time period and has a different scale of temporal resolution and completeness. The Wabash Valley is an example where the long-term geological record indicates a greater potential than the instrumental and historical records, which points to the need to examine all of the evidence in any region in order to obtain credible estimates of earthquake hazards. Although earthquake hazards may be dominated by mid-magnitude-6 earthquakes within the mapped seismic source zones, the 1994 Northridge, California, earthquake is only the most recent reminder of how destructive such an earthquake can be and of the danger of assuming that future events will occur only on faults known to have ruptured in the past.
NASA Astrophysics Data System (ADS)
Asano, K.
2017-12-01
An MJMA 6.5 earthquake occurred offshore of the Kii peninsula, southwest Japan, on April 1, 2016. This event has been interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. A significant point regarding seismic observation is that this event occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region over the last half century, this is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected as the empirical Green's functions. The source parameters of the SMGA were optimized by waveform modeling in the frequency range 0.4-10 Hz. The best estimate of the SMGA size is 19.4 km2, and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan Trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai trough differ from those in the Japan Trench, which could be important information for considering regional variation in ground motion prediction.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake was one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies of the underlying source physics to better inform earthquake hazard models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still serving the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions from these point sources, as well as from simple finite faults determined from source scaling studies, and validate against recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward-model the near-field maximum expected tsunami wave heights for comparison with available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement attributable to local site effects rather than earthquake source complexity. Similarly, the near-field maximum tsunami amplitude predictions match the tide gauge records well. We conclude that even though our models of the Kaikoura earthquake are devoid of rich source complexity, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
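Peak-ground-displacement scaling inverts a regression of the form log10(PGD) = A + B*Mw + C*Mw*log10(R) for magnitude at each GPS station. The sketch below illustrates the idea with placeholder coefficients; the coefficient values, station data, and function name are assumptions and are not the values implemented in G-FAST.

```python
import numpy as np

# Illustrative coefficients of the form log10(PGD, cm) = A + B*Mw + C*Mw*log10(R, km).
# These are placeholders for demonstration, not the coefficients used operationally.
A, B, C = -5.0, 1.0, -0.15

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert the PGD scaling law for moment magnitude at one or more stations."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(dist_km))

# Example: average the per-station estimates from several hypothetical GPS sites
pgd = np.array([12.0, 8.5, 4.0])     # peak ground displacement, cm (hypothetical)
r = np.array([40.0, 70.0, 120.0])    # hypocentral distance, km (hypothetical)
print(np.mean(magnitude_from_pgd(pgd, r)))
```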
Systematic observations of the slip pulse properties of large earthquake ruptures
Melgar, Diego; Hayes, Gavin
2017-01-01
In earthquake dynamics there are two end-member models of rupture: propagating cracks and self-healing pulses. These arise from different fault properties and have implications for seismic hazard, because the rupture mode controls near-field strong ground motions. Past studies favor the pulse-like mode of rupture; however, due to a variety of limitations, it has proven difficult to establish their kinematic properties systematically. Here we synthesize observations from a database of >150 rupture models of earthquakes spanning M7-M9, processed in a uniform manner, and show that the magnitude scaling properties of these slip pulses indicate self-similarity. Further, we find that large and very large events are statistically distinguishable relatively early (at ~15 s) in the rupture process. This suggests that, with dense regional geophysical networks, strong ground motions from a large rupture can be identified before their onset across the source region.
NASA Astrophysics Data System (ADS)
Aochi, Hideo
2014-05-01
The Marmara region (Turkey), along the North Anatolian fault, is known to have a high potential for large earthquakes in the coming decades. For the purpose of seismic hazard and risk evaluation, kinematic and dynamic source models have been proposed (e.g., Oglesby and Mai, GJI, 2012). In general, simulated earthquake scenarios depend on the underlying hypotheses and cannot be verified before the expected earthquake occurs. We therefore take a probabilistic approach to the initial and boundary conditions so that the simulated scenarios can be analyzed statistically. We prepare different fault geometry models, tectonic loading conditions, and hypocenter locations. We keep the same simulation framework used for the dynamic rupture process of the adjacent 1999 Izmit earthquake (Aochi and Madariaga, BSSA, 2003), as those previous models were able to reproduce the seismological and geodetic aspects of that event. Irregularities in fault geometry play a significant role in controlling rupture progress, and a relatively large change in geometry may act as a barrier. The range of simulated earthquake scenarios should be useful for estimating the range of expected ground motion.
Analysis of post-earthquake landslide activity and geo-environmental effects
NASA Astrophysics Data System (ADS)
Tang, Chenxiao; van Westen, Cees; Jetten, Victor
2014-05-01
Large earthquakes can cause huge losses to human society through ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas affected by such large earthquakes, the landslide threat continues after the earthquake, as co-seismic landslides may be reactivated by high-intensity rainfall events. Huge amounts of landslide material remain on the slopes after an earthquake, leading to a high frequency of landslides and debris flows that threaten lives and create great difficulties for post-seismic reconstruction in the earthquake-hit regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation works are difficult. The area hit by the Mw 7.9 Wenchuan earthquake of 2008, Sichuan province, China, shows some typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas that have been hit by a major earthquake, with the area affected by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as through numerical simulation of their initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will make use of high-resolution stereo satellite images, UAV imagery, and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor changes in loose sediments and post-seismic landslide activity. A debris flow initiation model will be developed that incorporates the volume of source materials, vegetation regrowth, and the intensity-duration of the triggering precipitation, and that evaluates different initiation mechanisms such as erosion and landslide reactivation. This initiation model will be integrated with a run-out model to simulate the dynamic process of post-earthquake debris flows in the study area over a future period and to predict the decay of landslide activity in the future.
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant
2012-01-01
QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle but important features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system and are accessible by users or by various model applications. UAVSAR repeat-pass interferometry data products are added to the QuakeTables database and are available through a browsable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models. Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and of data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. The tools give science users the flexibility to explore data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.
Earthquake-Induced Building Damage Assessment Based on SAR Correlation and Texture
NASA Astrophysics Data System (ADS)
Gong, Lixia; Li, Qiang; Zhang, Jingfa
2016-08-01
Compared with optical remote sensing, Synthetic Aperture Radar (SAR) has unique advantages when applied to seismic hazard monitoring and evaluation. SAR can be helpful throughout the post-earthquake process, which can be divided into three stages. In the first stage, pre-disaster imagery provides historical information about the affected area. In the mid-term stage, up-to-date thematic maps are provided for disaster relief. In the later stage, information is provided to assist secondary-disaster monitoring, post-disaster assessment, and reconstruction. In recent years, SAR has become an important data source for earthquake damage analysis and evaluation. The correlation between pre- and post-event SAR images is considered to be related to building damage: correlation decreases when buildings collapse during a shock. However, a decrease in correlation does not definitely indicate building changes, because correlation is also affected by the perpendicular baseline, ground cover type, atmospheric change and other natural conditions, data processing, and other factors. Building samples from the earthquake are used to discriminate the relation between damage degree and SAR correlation.
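The correlation (coherence) between co-registered pre- and post-event images is typically estimated over a small moving window of complex single-look pixels. A minimal sketch is shown below, assuming NumPy/SciPy and a boxcar estimation window; the array names and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sar_coherence(slc_pre, slc_post, win=5):
    """Windowed coherence magnitude between two co-registered SLC images.

    slc_pre, slc_post : 2-D complex arrays (single-look complex images)
    win               : boxcar estimation window in pixels (assumed value)
    Low coherence over built-up areas flags possible building damage.
    """
    cross = slc_pre * np.conj(slc_post)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc_pre) ** 2, win) *
                  uniform_filter(np.abs(slc_post) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```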
Earthquake processes in the Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, region 1954-1959
NASA Astrophysics Data System (ADS)
Doser, Diane I.
1986-11-01
The 1954 Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, sequence produced the most extensive pattern of surface faults in the intermountain region in historic time. Five earthquakes of M>6.0 occurred during the first 6 months of the sequence, including the December 16, 1954, Fairview Peak (M = 7.1) and Dixie Valley (M = 6.8) earthquakes. Three 5.5≤M≤6.5 earthquakes occurred in the region in 1959, but none exhibited surface faulting. The results of the modeling suggest that the M>6.5 earthquakes of this sequence are complex events best fit by multiple source-time functions. Although the observed surface displacements for the July and August 1954 events showed only dip-slip motion, the fault plane solutions and waveform modeling suggest the earthquakes had significant components of right-lateral strike-slip motion (rakes of -135° to -145°). All of the earthquakes occurred along high-angle faults with dips of 40° to 70°. Seismic moments for individual subevents of the sequence range from 8.0 × 10^17 to 2.5 × 10^19 N m. Stress drops for the subevents, including the Fairview Peak subevents, were between 0.7 and 6.0 MPa.
NASA Astrophysics Data System (ADS)
Indah, F. P.; Syafriani, S.; Andiyansyah, Z. S.
2018-04-01
Sumatra lies in an active subduction zone between the Indo-Australian and Eurasian plates and is also cut by the Sumatran fault, so the island is vulnerable to earthquakes. One way to determine the cause of an earthquake is to identify the type of causative fault from the earthquake focal mechanism. The data used to identify the fault types are moment tensor data from the Global CMT catalogue for the period 1976-2016, restricted to earthquakes with magnitude M ≥ 6. This research uses GMT (Generic Mapping Tools) software to plot the fault geometries. The results show the characteristics of the fault planes formed in each region of Sumatra, based on the processed data and the 1976-2016 earthquake history: the fault type along the Sumatran fault is strike-slip, the fault type along the Mentawai fault is reverse (thrust) and dip-slip, while the fault type in the subduction zone is dip-slip.
Pre-earthquake magnetic pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J.; Freund, F.
2015-08-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
NASA Astrophysics Data System (ADS)
Puangjaktha, P.; Pailoplee, S.
2018-04-01
In order to examine the precursory seismic quiescence of upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders were analyzed using the Region-Time-Length (RTL) algorithm, a statistical technique. The earthquake data were obtained from the International Seismological Centre, after which the homogeneity and completeness of the catalogue were improved. After iterative tests with different values of the r0 and t0 parameters, values of r0 = 120 km and t0 = 2 yr yielded reasonable estimates of the anomalous RTL scores, in both temporal variation and spatial distribution, appearing a few years prior to five of the eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the stochastic behavior of the RTL indicated that the RTL scores obtained here are not artificial or random phenomena. Therefore, the prospective earthquake sources identified here should be recognized and effective mitigation plans should be provided.
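For orientation, the RTL score at a grid node is the product of three weighted sums over prior earthquakes: an epicentral-distance factor (scale r0), an elapsed-time factor (scale t0), and a rupture-length factor. The sketch below implements the raw product under these standard weighting forms; the background (trend) removal and normalization used to expose quiescence anomalies, as well as the magnitude-to-rupture-length relation, are simplified assumptions.

```python
import numpy as np

def rtl_score(node_xyz, t_now, eq_xyz, eq_times, eq_mags, r0=120.0, t0=2.0):
    """Raw RTL product at one grid node and time (background removal omitted).

    node_xyz : (x, y, z) of the grid node, km
    t_now    : evaluation time, years
    eq_xyz   : (N, 3) earthquake coordinates, km
    eq_times : (N,) origin times, years (only events before t_now contribute)
    eq_mags  : (N,) magnitudes
    r0, t0   : characteristic distance (km) and time (yr), as in the study
    """
    eq_xyz, eq_times, eq_mags = map(np.asarray, (eq_xyz, eq_times, eq_mags))
    prior = eq_times < t_now
    r = np.linalg.norm(eq_xyz[prior] - np.asarray(node_xyz), axis=1)
    l = 10 ** (0.5 * eq_mags[prior] - 1.8)   # illustrative rupture-dimension proxy, km
    R = np.sum(np.exp(-r / r0))              # distance factor
    T = np.sum(np.exp(-(t_now - eq_times[prior]) / t0))  # time factor
    L = np.sum(l / np.maximum(r, 1.0))       # rupture-length factor
    return R * T * L  # quiescence appears as a negative anomaly after detrending/normalization
```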
Detection of postseismic fault-zone collapse following the Landers earthquake
Massonnet, D.; Thatcher, W.; Vadon, H.
1996-01-01
Stress changes caused by fault movement in an earthquake induce transient aseismic crustal movements in the earthquake source region that continue for months to decades following large events. These motions reflect aseismic adjustments of the fault zone and/or bulk deformation of the surroundings in response to applied stresses, and supply information regarding the inelastic behaviour of the Earth's crust. These processes are imperfectly understood because it is difficult to infer what occurs at depth using only surface measurements, which are in general poorly sampled. Here we push satellite radar interferometry to near its typical artefact level, to obtain a map of the postseismic deformation field in the three years following the 28 June 1992 Landers, California earthquake. From the map, we deduce two distinct types of deformation: afterslip at depth on the fault that ruptured in the earthquake, and shortening normal to the fault zone. The latter movement may reflect the closure of dilatant cracks and fluid expulsion from a transiently over-pressured fault zone.
Seismic Window Selection and Misfit Measurements for Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Bozdag, E.; Lefebvre, M.; Podhorszki, N.; Smith, J. A.; Tromp, J.
2013-12-01
Global adjoint tomography requires fast parallel processing of large datasets. After obtaining the preprocessed observed and synthetic seismograms, we use the open-source software packages FLEXWIN (Maggi et al. 2007) to select time windows and MEASURE_ADJ to make measurements. These measurements define adjoint sources for data assimilation. Previous versions of these tools work on a pair of SAC files (observed and synthetic seismic data for the same component and station) and loop over all seismic records associated with one earthquake. Given the large number of stations and earthquakes, the frequent read and write operations create severe I/O bottlenecks on modern computing platforms. We present new versions of these tools utilizing a new seismic data format, the Adaptive Seismic Data Format (ASDF). This new format shows superior scalability for applications on high-performance computers and accommodates various types of data, including earthquake, industry, and seismic interferometry datasets. ASDF also provides user-friendly APIs, which can be easily integrated into the adjoint tomography workflow and combined with other data processing tools. In addition to solving the I/O bottleneck, we are making several improvements to these tools. For example, FLEXWIN is tuned to select windows for different types of earthquakes. To capture their distinct features, we categorize earthquakes by their depths and frequency bands. Moreover, instead of only picking phases between the first P arrival and the surface-wave arrivals, we aim to select and assimilate many other later prominent phases in adjoint tomography. For example, in the body-wave band (17 s - 60 s) we include SKS, sSKS and their multiples, while in the surface-wave band (60 s - 120 s) we incorporate major-arc surface waves.
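One of the simplest measurements made in each selected window is a cross-correlation traveltime shift between observed and synthetic seismograms. The sketch below is a minimal NumPy version of that measurement for a single window; it is an illustration only, not the MEASURE_ADJ implementation.

```python
import numpy as np

def cc_traveltime_shift(obs, syn, dt):
    """Cross-correlation traveltime shift (s) between an observed and a synthetic window.

    A positive shift means the observed arrival is later than the synthetic one.
    obs, syn : 1-D arrays of equal length (windowed seismograms)
    dt       : sample interval (s)
    """
    obs = np.asarray(obs, float)
    syn = np.asarray(syn, float)
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)
    return lag * dt

# Example with a Gaussian pulse delayed by 0.5 s
dt = 0.05
t = np.arange(0, 20, dt)
syn = np.exp(-((t - 10.0) / 0.5) ** 2)
obs = np.exp(-((t - 10.5) / 0.5) ** 2)
print(cc_traveltime_shift(obs, syn, dt))  # approximately 0.5
```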
Ruppert, Natalia G.; Prejean, Stephanie G.; Hansen, Roger A.
2011-01-01
An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August 2008. In retrospect, the first earthquakes in the swarm were detected about one month prior to the eruption onset. Activity in the swarm intensified quickly less than 48 h prior to the first large explosion and subsequently subsided with the decline of eruptive activity. The largest earthquake had a moment magnitude of 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear-failure (tectonic) features included a b value of 0.9, impulsive P and S arrivals and higher-frequency content for most earthquakes, and faulting parameters consistent with regional tectonic stresses. Its volcanic, fluid-influenced features included volcanic tremor, large CLVD components in moment tensor solutions, and magnitudes increasing with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the N-S direction, either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region, with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust: although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field.
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term average / long-term average (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (down to <1), large noise bursts, and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal to noise, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method best satisfies all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
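As a point of reference for one of the pre-processing options, a non-recursive STA/LTA characteristic function can be computed from running energy averages, as in the sketch below; the window lengths are illustrative values, not those used for the AIDA data.

```python
import numpy as np

def sta_lta(trace, dt, sta_win=0.5, lta_win=10.0):
    """Simple STA/LTA characteristic function on a squared-amplitude trace.

    trace            : 1-D seismogram
    dt               : sample interval (s)
    sta_win, lta_win : short- and long-term window lengths (s), illustrative values
    """
    nsta = max(1, int(sta_win / dt))
    nlta = max(1, int(lta_win / dt))
    energy = np.asarray(trace, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # running short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # running long-term average
    n = min(len(sta), len(lta))
    # align the trailing ends of the two moving averages and guard against division by zero
    return sta[-n:] / np.maximum(lta[-n:], 1e-20)
```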
Mexican Earthquakes and Tsunamis Catalog Reviewed
NASA Astrophysics Data System (ADS)
Ramirez-Herrera, M. T.; Castillo-Aja, R.
2015-12-01
Today the availability of information on the internet makes online catalogs easy to access by both scholars and the general public. The catalog in the "Significant Earthquake Database", managed by the National Centers for Environmental Information (NCEI, formerly NCDC), NOAA, provides tabular and cartographic access to the earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in the places where a disaster occurred and on the descriptions being preserved in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the progress we have made in upgrading and refining data for the earthquake and tsunami catalog of Mexico from 1500 CE to today, presented as a table and a map. Data analysis allowed us to identify the following sources of error in the location of epicenters in existing catalogs: incorrect coordinate entry; erroneous or mistaken place names; data too general to locate the epicenter, mainly for older earthquakes; and inconsistency between the earthquake and the tsunami occurrence, such as an epicenter located too far inland being reported as tsunamigenic. The process of completing the catalogs depends directly on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present the new earthquake and tsunami findings we have achieved so far.
Local observations of the onset of a large earthquake: 28 June 1992 Landers, California
Abercrombie, Rachel; Mori, Jim
1994-01-01
The Landers earthquake (MW 7.3) of 28 June 1992 had a very emergent onset. The first large amplitude arrivals are delayed by about 3 sec with respect to the origin time, and are preceded by smaller-scale slip. Other large earthquakes have been observed to have similar emergent onsets, but the Landers event is one of the first to be well recorded on nearby stations. We used these recordings to investigate the spatial relationship between the hypocenter and the onset of the large energy release, and to determine the slip function of the 3-sec nucleation process. Relative location of the onset of the large energy release with respect to the initial hypocenter indicates its source was between 1 and 4 km north of the hypocenter and delayed by approximately 2.5 sec. Three-station array analysis of the P wave shows that the large amplitude onset arrives with a faster apparent velocity compared to the first arrivals, indicating that the large amplitude source was several kilometers deeper than the initial onset. An ML 2.8 foreshock, located close to the hypocenter, was used as an empirical Green's function to correct for path and site effects from the first 3 sec of the mainshock seismogram. The resultant deconvolution produced a slip function that showed two subevents preceding the main energy release, an MW 4.4 followed by an MW 5.6. These subevents do not appear anomalous in comparison to simple moderate-sized earthquakes, suggesting that they were normal events which just triggered or grew into a much larger earthquake. If small and moderate-sized earthquakes commonly “detonate” much larger events, this implies that the dynamic stresses during earthquake rupture are at least as important as long-term static stresses in causing earthquakes, and the prospects of reliable earthquake prediction from premonitory phenomena are not improved.
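The empirical Green's function correction described here amounts to a spectral division of the mainshock record by the foreshock record, stabilized (for example) with a water level. The sketch below shows that operation; the water-level choice and function name are assumptions, not the exact procedure of the study.

```python
import numpy as np

def egf_deconvolve(mainshock, egf, dt, water_level=0.05):
    """Water-level spectral division: mainshock / EGF -> relative source time function.

    mainshock, egf : 1-D seismograms from the same station and component, equal length
    dt             : sample interval (s)
    water_level    : fraction of the peak EGF spectral amplitude (assumed value)
    """
    n = len(mainshock)
    M = np.fft.rfft(mainshock)
    G = np.fft.rfft(egf)
    floor = water_level * np.abs(G).max()
    # replace small spectral amplitudes of the EGF by the water level, keeping their phase
    G_reg = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    rstf = np.fft.irfft(M / G_reg, n=n)
    return np.arange(n) * dt, rstf
```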
The Near-Source Intensity Distribution for the August 24, 2014, South Napa Earthquake
NASA Astrophysics Data System (ADS)
Boatwright, J.; Pickering, A.; Blair, J. L.
2016-12-01
The 2014 Mw=6.0 South Napa earthquake was the largest and most damaging earthquake to occur in the Bay Area since the 1989 Mw=6.9 Loma Prieta earthquake. The City of Napa estimated that the earthquake caused $300 million in damage to homes and commercial properties and $58 million in damage to public infrastructure. Over 41,000 reports were entered on the USGS "Did You Feel It?" (DYFI) website; 730 of these reports were located within 15 km of the rupture. Unfortunately, very few geocoded intensities were obtained immediately west and north of the rupture area. In the weeks following the earthquake, we conducted an intensity survey focused on areas poorly sampled by the DYFI reports; 75 sites were surveyed within 15 km of the earthquake rupture. In addition, we checked and manually geocoded many of the DYFI reports, locating 245 reports within 15 km of the rupture that the automated DYFI processing had failed to geocode. We combine the survey sites and the newly geocoded DYFI reports with the original geocoded DYFI reports to map and contour the near-source shaking intensity. In addition to imaging the strong shaking (MMI 7.0-8.0) in the City of Napa, we find an area of very strong shaking (MMI 7.5-8.0) to the northwest of the earthquake rupture. This area, marked by ground cracks, damage to modern wood-frame buildings, and reports of people knocked down, coincides with the directivity expected for rupture to the northwest and up dip. The intensities from the survey sites are consistent with the intensities from the DYFI reports, but are much less variable. For DYFI intensities MMI 4-6, part of this variability could derive from the 3:20 AM occurrence of the earthquake: some of the effects that the DYFI questionnaire uses to assign these intensities (objects swaying, bushes and trees shaken) cannot be observed in the dark.
NASA Astrophysics Data System (ADS)
Obana, Koichiro; Nakamura, Yasuyuki; Fujie, Gou; Kodaira, Shuichi; Kaiho, Yuka; Yamamoto, Yojiro; Miura, Seiichi
2018-03-01
In the northern part of the Japan Trench, the 1933 Showa-Sanriku earthquake (Mw 8.4), an outer-trench, normal-faulting earthquake, occurred 37 yr after the 1896 Meiji-Sanriku tsunami earthquake (Mw 8.0), a shallow, near-trench, plate-interface rupture. Tsunamis generated by both earthquakes caused severe damage along the Sanriku coast. Precise locations of earthquakes in the source areas of the 1896 and 1933 earthquakes have not previously been obtained because they occurred at considerable distances from the coast in deep water beyond the maximum operational depth of conventional ocean bottom seismographs (OBSs). In 2015, we incorporated OBSs designed for operation in deep water (ultradeep OBSs) in an OBS array during two months of seismic observations in the source areas of the 1896 and 1933 Sanriku earthquakes to investigate the relationship of seismicity there to outer-rise normal-faulting earthquakes and near-trench tsunami earthquakes. Our analysis showed that seismicity during our observation period occurred along three roughly linear trench-parallel trends in the outer-trench region. Seismic activity along these trends likely corresponds to aftershocks of the 1933 Showa-Sanriku earthquake and the Mw 7.4 normal-faulting earthquake that occurred 40 min after the 2011 Tohoku-Oki earthquake. Furthermore, changes of the clarity of reflections from the oceanic Moho on seismic reflection profiles and low-velocity anomalies within the oceanic mantle were observed near the linear trends of the seismicity. The focal mechanisms we determined indicate that an extensional stress regime extends to about 40 km depth, below which the stress regime is compressional. These observations suggest that rupture during the 1933 Showa-Sanriku earthquake did not extend to the base of the oceanic lithosphere and that compound rupture of multiple or segmented faults is a more plausible explanation for that earthquake. The source area of the 1896 Meiji-Sanriku tsunami earthquake is characterized by an aseismic region landward of the trench axis. Spatial heterogeneity of seismicity and crustal structure might indicate the near-trench faults that could lead to future hazardous events such as the 1896 and 1933 Sanriku earthquakes, and should be taken into account in assessment of tsunami hazards related to large near-trench earthquakes.
NASA Astrophysics Data System (ADS)
Di Giacomo, Domenico; Harris, James; Villaseñor, Antonio; Storchak, Dmitry A.; Engdahl, E. Robert; Lee, William H. K.
2015-02-01
In order to produce a new global reference earthquake catalogue based on instrumental data covering the last 100+ years of global earthquakes, we collected, digitized and processed an unprecedented amount of printed early instrumental seismological bulletins with fundamental parametric data for relocating and reassessing the magnitude of earthquakes that occurred in the period between 1904 and 1970. This effort was necessary in order to produce an earthquake catalogue with locations and magnitudes as homogeneous as possible. The parametric data obtained and processed during this work fills a large gap in electronic bulletin data availability. This new dataset complements the data publicly available in the International Seismological Centre (ISC) Bulletin starting in 1964. With respect to the amplitude-period data necessary to re-compute magnitude, we searched through the global collection of printed bulletins stored at the ISC and entered relevant station parametric data into the database. As a result, over 110,000 surface and body-wave amplitude-period pairs for re-computing standard magnitudes MS and mb were added to the ISC database. To facilitate earthquake relocation, different sources have been used to retrieve body-wave arrival times. These were entered into the database using optical character recognition methods (International Seismological Summary, 1918-1959) or manually (e.g., British Association for the Advancement of Science, 1913-1917). In total, ∼1,000,000 phase arrival times were added to the ISC database for large earthquakes that occurred in the time interval 1904-1970. The selection of earthquakes for which data was added depends on time period and magnitude: for the early years of last century (until 1917) only very large earthquakes were selected for processing (M ⩾ 7.5), whereas in the periods 1918-1959 and 1960-2009 the magnitude thresholds are 6.25 and 5.5, respectively. Such a selection was mainly dictated by limitations in time and funding. Although the newly available parametric data is only a subset of the station data available in the printed bulletins, its electronic availability will be important for any future study of earthquakes that occurred during the early instrumental period.
Leveraging geodetic data to reduce losses from earthquakes
Murray, Jessica R.; Roeloffs, Evelyn A.; Brooks, Benjamin A.; Langbein, John O.; Leith, William S.; Minson, Sarah E.; Svarc, Jerry L.; Thatcher, Wayne R.
2018-04-23
Seismic hazard assessments that are based on a variety of data and the best available science, coupled with rapid synthesis of real-time information from continuous monitoring networks to guide post-earthquake response, form a solid foundation for effective earthquake loss reduction. With this in mind, the Earthquake Hazards Program (EHP) of the U.S. Geological Survey (USGS) Natural Hazards Mission Area (NHMA) engages in a variety of undertakings, both established and emergent, in order to provide high quality products that enable stakeholders to take action in advance of and in response to earthquakes. Examples include the National Seismic Hazard Model (NSHM), development of tools for improved situational awareness such as earthquake early warning (EEW) and operational earthquake forecasting (OEF), research about induced seismicity, and new efforts to advance comprehensive subduction zone science and monitoring. Geodetic observations provide unique and complementary information directly relevant to advancing many aspects of these efforts (fig. 1). EHP scientists have long leveraged geodetic data for a range of influential studies, and they continue to develop innovative observation and analysis methods that push the boundaries of the field of geodesy as applied to natural hazards research. Given the ongoing, rapid improvement in availability, variety, and precision of geodetic measurements, considering ways to fully utilize this observational resource for earthquake loss reduction is timely and essential. This report presents strategies, and the underlying scientific rationale, by which the EHP could achieve the following outcomes:
• The EHP is an authoritative source for the interpretation of geodetic data and its use for earthquake loss reduction throughout the United States and its territories.
• The USGS consistently provides timely, high quality geodetic data to stakeholders.
• Significant earthquakes are better characterized by incorporating geodetic data into USGS event response products and by expanded use of geodetic imaging data to assess fault rupture and source parameters.
• Uncertainties in the NSHM, and in regional earthquake models, are reduced by fully incorporating geodetic data into earthquake probability calculations.
• Geodetic networks and data are integrated into the operations and earthquake information products of the Advanced National Seismic System (ANSS).
• Earthquake early warnings are improved by more rapidly assessing ground displacement and the dynamic faulting process for the largest earthquakes using real-time geodetic data.
• Methodology for probabilistic earthquake forecasting is refined by including geodetic data when calculating evolving moment release during aftershock sequences and by better understanding the implications of transient deformation for earthquake likelihood.
• A geodesy program that encompasses a balanced mix of activities to sustain mission-critical capabilities, grows new competencies through the continuum of fundamental to applied research, and ensures sufficient resources for these endeavors provides a foundation by which the EHP can be a leader in the application of geodesy to earthquake science.
With this in mind, the following objectives provide a framework to guide EHP efforts:
• Fully utilize geodetic information to improve key products, such as the NSHM and EEW, and to address new ventures like the USGS Subduction Zone Science Plan.
• Expand the variety, accuracy, and timeliness of post-earthquake information products, such as PAGER (Prompt Assessment of Global Earthquakes for Response), through incorporation of geodetic observations.
• Determine if geodetic measurements of transient deformation can significantly improve estimates of earthquake probability.
• Maintain an observational strategy aligned with the target outcomes of this document that includes continuous monitoring, recording of ephemeral observations, focused data collection for use in research, and application-driven data processing and analysis systems.
• Collaborate on research, development, and operation of affordable, high-precision seafloor geodetic methods that improve earthquake forecasting and event response.
• Advance computational techniques and instrumentation to enable use of strategies like repeat-pass imagery and low-cost geodetic sensors for earthquake response, monitoring, and research.
• Engage stakeholders and collaborate with partner institutions to foster operational and research objectives and to safeguard the continued health of geodetic infrastructure upon which we mutually depend.
Maintaining a vibrant internal research program provides the foundation by which the EHP can remain an effective and trusted source for earthquake science. Exploiting abundant new data sources, evaluating and assimilating the latest science, and pursuing novel avenues of investigation are means to fulfilling the EHP's core responsibilities and realizing the important scientific advances envisioned by its scientists. Central to the success of such a research program is engaging personnel with a breadth of competencies and a willingness and ability to adapt these to the program's evolving priorities, enabling current staff to expand their skills and responsibilities, and planning holistically to meet shared workforce needs. In parallel, collaboration with external partners to support scientific investigations that complement ongoing internal research enables the EHP to strengthen earthquake information products by incorporating alternative perspectives and approaches and to study topics and geographic regions that cannot be adequately covered internally. With commensurate support from technical staff who possess diverse skills, including engineering, information technology, and proficiency in quantitative analysis combined with basic geophysical knowledge, the EHP can achieve the geodetic outcomes identified in this document.
Aftershocks halted by static stress shadows
Toda, Shinji; Stein, Ross S.; Beroza, Gregory C.; Marsan, David
2012-01-01
Earthquakes impart static and dynamic stress changes to the surrounding crust. Sudden fault slip causes small but permanent (static) stress changes, and passing seismic waves cause large, but brief and oscillatory (dynamic) stress changes. Because both static and dynamic stresses can trigger earthquakes within several rupture dimensions of a mainshock, it has proven difficult to disentangle their contributions to the triggering process [1-3]. However, only dynamic stress can trigger earthquakes far from the source [4,5], and only static stress can create stress shadows, where the stress and thus the seismicity rate in the shadow area drop following an earthquake [6-9]. Here we calculate the stress imparted by the magnitude 6.1 Joshua Tree and nearby magnitude 7.3 Landers earthquakes that occurred in California in April and June 1992, respectively, and measure seismicity through time. We show that, where the aftershock zone of the first earthquake was subjected to a static stress increase from the second, the seismicity rate jumped. In contrast, where the aftershock zone of the first earthquake fell under the stress shadow of the second and static stress dropped, seismicity shut down. The arrest of seismicity implies that static stress is a requisite element of spatial clustering of large earthquakes and should be a constituent of hazard assessment.
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
NASA Astrophysics Data System (ADS)
Qin, W.; Yin, J.; Yao, H.
2013-12-01
On May 24, 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is rare for a mega-earthquake to occur at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega-earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method that more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds of about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20 and 35 s is approximately 5 km/s, lower than the shear-wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega-earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China and from global stations for further analysis in order to study the rupture process of this deep mega-earthquake more comprehensively. References: [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for the 2011 Tohoku Mw 9.0 earthquake. Geophysical Journal International, 2012, 190(2): 1152-1168. [2] Yao H, Gerstoft P, Shearer P M, et al. Compressive sensing of the Tohoku-Oki Mw 9.0 earthquake: Frequency-dependent rupture modes. Geophysical Research Letters, 2011, 38(20).
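At its core, backprojection shifts each array trace by a predicted traveltime for a trial source point and stacks the aligned energy; the brightest trial points map the rupture. The sketch below is a minimal shift-and-stack version of that idea, assuming precomputed traveltimes; it omits the iterative refinement of IBP and the sparsity constraints of CS.

```python
import numpy as np

def backproject(traces, dt, travel_times, stack_win=10.0):
    """Shift-and-stack back-projection power for a set of trial source points.

    traces       : (n_sta, n_samp) array of aligned, normalized P waveforms
    dt           : sample interval (s)
    travel_times : (n_pts, n_sta) predicted P travel times (s) for each trial point,
                   assumed precomputed from any 1-D or 3-D velocity model
    stack_win    : length of the stacking window (s)
    Returns (n_pts,) stacked beam power; the maximum marks the brightest subevent.
    """
    n_sta, n_samp = traces.shape
    nwin = int(stack_win / dt)
    power = np.zeros(travel_times.shape[0])
    for ip, tt in enumerate(travel_times):
        shifts = np.round(tt / dt).astype(int)
        beam = np.zeros(nwin)
        for ista in range(n_sta):
            seg = traces[ista, shifts[ista]:shifts[ista] + nwin]
            beam[:len(seg)] += seg      # align each trace on the predicted arrival and stack
        power[ip] = np.sum(beam ** 2)
    return power
```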
Near Space Tracking of the EM Phenomena Associated with the Main Earthquakes
NASA Technical Reports Server (NTRS)
Ouzounov, Dimitar; Taylor, Patrick; Bryant, Nevin; Pulinets, Sergey; Liu, Jann-Yenq; Yang, Kwang-Su
2004-01-01
Searching for electromagnetic (EM) phenomena originating in the Earth's crust prior to major earthquakes (M>5) is the object of this exploratory study. We present the idea of a possible relationship between (1) electro-chemical and thermodynamic processes in the Earth's crust and (2) ionic enhancement of the atmosphere/ionosphere associated with tectonic stress and earthquake activity. The major source of these signals is proposed to be the electromagnetic phenomena responsible for the observed pre-seismic processes, such as enhanced IR emission (also known as thermal anomalies), generation of long-wave radiation, light emission caused by ground-to-air electric discharges, Total Electron Content (TEC) ionospheric anomalies, and ionospheric plasma variations. The sources of these data will include: (i) ionospheric plasma perturbation data from the recently launched DEMETER mission and currently available TEC/GPS network data; (ii) geomagnetic data from ORSTED and CHAMP; (iii) thermal infrared (TIR) transients mapped by polar-orbiting satellites (NOAA/AVHRR, MODIS); and (iv) measurements from the GOES and METEOSAT geosynchronous weather satellites. This approach requires continuous observations and data collection, in addition to both ground- and space-based monitoring over selected regions, in order to investigate the various techniques for recording possible anomalies. During the space campaign, emphasis will be on IR emission obtained from TIR satellites, which records land/sea surface temperature anomalies, and on changes in the plasma and total electron content (TEC) of the ionosphere that occur over areas of potential earthquake activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Paul A.
Nonlinear dynamics induced by seismic sources and seismic waves are common in the Earth. Observations include seismic strong ground motion (the most damaging aspect of earthquakes), intense near-source effects, and distant nonlinear effects from the source that have important consequences. The distant effects include dynamic earthquake triggering, one of the most fascinating topics in seismology today, which may be driven by elastic nonlinearity. Dynamic earthquake triggering is the phenomenon whereby seismic waves generated by one earthquake trigger slip events on a nearby or distant fault. Dynamic triggering may take place at distances of thousands of kilometers from the triggering earthquake, and includes triggering of the entire spectrum of slip behaviors currently identified. These include triggered earthquakes and triggered slow, silent slip during which little seismic energy is radiated. It appears that the elasticity of the fault gouge (the granular material located between the fault blocks) is key to the triggering phenomenon.
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, S.; Brietzke, G.; Igel, H.; Larmat, C.; Fichtner, A.; Johnson, P. A.; Huang, L.
2008-12-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a priori information (other than the Earth model and the data). It consists of injecting time-reversed records from seismic stations within the model to create an approximate reverse movie of wave propagation, from which the location of the source point and other information might be inferred. In this study, the backward propagation is performed numerically using a spectral element code. We investigate the potential of time reversal to recover finite source characteristics (e.g., size of the ruptured area, location of asperities, rupture velocity, etc.). We use synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss how relaxing the assumption of no prior source information (e.g., origin time, hypocenter, fault location, etc.) influences the results of the time reversal process.
Tectonic Tremor and the Collective Behavior of Low-Frequency Earthquakes
NASA Astrophysics Data System (ADS)
Frank, W.; Shapiro, N.; Husker, A. L.; Kostoglodov, V.; Campillo, M.; Gusev, A. A.
2015-12-01
Tectonic tremor, a long-duration, emergent seismic signal observed along the deep roots of plate interfaces, is thought to be the superposition of repetitive shear events called low-frequency earthquakes (LFEs) [e.g. Shelly et al., Nature, 2007]. We use a catalog of more than 1.8 million LFEs grouped into more than 1000 families observed over 2 years in the Guerrero subduction zone in Mexico, considering each family as an individual repetitive source or asperity. We develop a statistical analysis to determine whether the subcatalogs corresponding to different sources represent random Poisson processes or whether they exhibit scale-invariant clustering in time, which we interpret as a manifestation of collective behavior. For each individual LFE source, we compare its level of collective behavior during two time periods: during the six-month-long 2006 Mw 7.5 slow-slip event and during a calm period with no observed slow slip. We find that the collective behavior of LFEs depends on distance from the trench and increases when the subduction interface is slowly slipping. Our results suggest that the occurrence of strong episodes of tectonic tremor cannot be simply explained by increased rates of low-frequency earthquakes at every individual LFE source but corresponds to an enhanced collective behavior of the ensemble of LFE asperities.
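One elementary diagnostic for the Poisson-versus-clustered question posed here is the coefficient of variation (CV) of inter-event times, which is close to 1 for a Poisson process and exceeds 1 for temporally clustered activity. The snippet below is only a schematic illustration with synthetic catalogs; the study's actual statistical analysis is more elaborate.

```python
import numpy as np

def coefficient_of_variation(event_times):
    """CV of inter-event times: ~1 for a Poisson process, >1 for clustering."""
    t = np.sort(np.asarray(event_times, dtype=float))
    dt = np.diff(t)
    return np.std(dt) / np.mean(dt)

# Synthetic example: a Poisson-like LFE family versus a bursty (clustered) one.
rng = np.random.default_rng(0)
poisson_like = np.cumsum(rng.exponential(60.0, size=5000))
clustered = np.cumsum(rng.exponential(60.0, size=5000) *
                      rng.choice([0.1, 3.0], size=5000))
print(coefficient_of_variation(poisson_like))  # close to 1
print(coefficient_of_variation(clustered))     # noticeably larger than 1
```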
NASA Astrophysics Data System (ADS)
Zheng, A.; Zhang, W.
2016-12-01
On 15 April 2016, a large earthquake with moment magnitude Mw 7.1 occurred in Kumamoto Prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, with fault strike, dip, and rake angles of N226°E, 84°, and -142°, respectively. The epicenter distribution and focal mechanisms of aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, so a single focal mechanism is not enough to explain the observed data adequately. In this study, based on the inversion of GNSS and InSAR surface deformation and with active structures for reference, we construct a finite fault model with focal mechanism changes and derive the source rupture process by a multi-time-window linear waveform inversion method using strong-motion data (0.05-1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone and that the seismogenic fault is divided into a northern segment and a southern one. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern segment, they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane is mainly concentrated on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault continues for approximately 18 s, and the total seismic moment released is 5.47×10^19 N·m (Mw 7.1). In addition, the essential features of the distribution of PGV and PGA synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.
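In a multi-time-window formulation, each observed seismogram is written as a linear combination of subfault Green's functions lagged by successive time windows, and the slip weights are found by (typically non-negative) least squares. The sketch below illustrates only that bookkeeping for a single station, with hypothetical array sizes and without the rupture-front delays, smoothing, or multi-station assembly of a real inversion.

```python
import numpy as np
from scipy.optimize import nnls

def build_design_matrix(green, n_windows, lag_samples):
    """Assemble G for a multi-time-window slip inversion at one station.

    green       : (n_subfaults, n_samples) unit-slip Green's functions
    n_windows   : number of time windows per subfault
    lag_samples : sample lag between successive windows
    Returns G with shape (n_samples, n_subfaults * n_windows).
    """
    n_sub, n_samp = green.shape
    G = np.zeros((n_samp, n_sub * n_windows))
    for k in range(n_sub):
        for w in range(n_windows):
            lag = w * lag_samples
            G[lag:, k * n_windows + w] = green[k, :n_samp - lag]
    return G

# Hypothetical sizes; slip weights are kept non-negative (positivity constraint).
green = np.random.default_rng(0).standard_normal((10, 400))
d = np.random.default_rng(1).standard_normal(400)
G = build_design_matrix(green, n_windows=5, lag_samples=20)
m, misfit = nnls(G, d)
```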
NASA Astrophysics Data System (ADS)
Liu, B.; Shi, B.
2010-12-01
An earthquake of ML 4.1 occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9 ≤ ML ≤ 4.0 (Chen et al., 2005). According to ZÚÑIGA (1993), for the 1995 ML 4.1 Shacheng earthquake sequence the main shock corresponds to undershoot, while the aftershocks should match overshoot. This suggests that the dynamic rupture processes of the overshoot aftershocks could be related to crack (sub-fault) extension inside the main fault. After the main shock, local stress concentration inside the fault may play a dominant role in sustaining crack extension. Therefore, the main energy dissipation mechanism should be the aftershock fracturing process associated with crack extension. Following the variational principle (Kanamori and Rivera, 2004), we derived a minimum radiation energy criterion (MREC), (ES/M0')min ≥ [3M0/(επμR^3)](v/β)^3, where ES and M0' are the radiated energy and seismic moment obtained from observation, μ is the rigidity of the fault, ε = M0'/M0 with M0 the seismic moment, R is the rupture size on the fault, and v and β are the rupture speed and S-wave speed. From mode II and mode III crack extension models, we attempt to reconcile a uniform expression for calculating the seismic radiation efficiency ηG, which constrains its upper limit and avoids the unphysical result of a radiation efficiency larger than 1. In the ML 4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of the S-wave speed β according to the MREC, close to the Rayleigh-wave speed, while the rupture speeds of the remaining 28 aftershocks ranged from 0.05β to 0.55β. Using the mode II and III crack extension models, the main-shock rupture speed was 0.9β and most aftershock rupture speeds were no more than 0.35β. In addition, the seismic radiation efficiencies for most aftershocks were less than 10%, indicating low seismic efficiency, whereas the radiation efficiency of the main shock was 78%. This essential difference in earthquake energy partition for the aftershock source dynamics indicates that fracture energy dissipation cannot be ignored in source parameter estimation for earthquake faulting, especially for small earthquakes; otherwise, the radiated seismic energy can be overestimated or underestimated.
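For illustration, the MREC inequality quoted above can be evaluated directly as a lower bound on the scaled radiated energy; the following is a plain transcription of the formula with hypothetical input values, not the authors' processing code.

```python
import numpy as np

def mrec_lower_bound(M0, R, v, beta, eps=1.0, mu=3.0e10):
    """Lower bound on Es/M0' from the minimum radiation energy criterion:
    (Es/M0')_min >= [3*M0 / (eps*pi*mu*R^3)] * (v/beta)^3
    with M0 in N*m, R in m, v and beta in m/s, and mu in Pa."""
    return 3.0 * M0 / (eps * np.pi * mu * R ** 3) * (v / beta) ** 3

# Hypothetical ML ~ 4 event: M0 ~ 1.3e15 N*m, R ~ 300 m, v = 0.86 * beta.
print(mrec_lower_bound(M0=1.3e15, R=300.0, v=0.86 * 3500.0, beta=3500.0))
```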
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meaning of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of this effect is necessary when using source imaging methods other than waveform inversion. Moreover, the PSF for source imaging is found to have a link with seismic interferometry through the source-receiver reciprocity of Green's functions. In particular, the PSF can be related to the Green's function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
Amending and complicating Chile’s seismic catalog with the Santiago earthquake of 7 August 1580
NASA Astrophysics Data System (ADS)
Cisternas, Marco; Torrejón, Fernando; Gorigoitia, Nicolás
2012-02-01
Historical earthquakes of Chile's metropolitan region include a previously uncatalogued earthquake that occurred on 7 August 1580 in the Julian calendar. We found an authoritative account of this earthquake in a letter written four days later in Santiago and now archived in Spain. The letter tells of a destructive earthquake that struck Santiago and its environs. In its reported effects it surpassed the earthquake in the same city in 1575, until now presumed to be the only earthquake in the first century of central Chile's written history. It is not yet possible to identify the source of the 1580 earthquake, but viable candidates include both the plate boundary and Andean faults at shallow depths around Santiago. By occurring just five years after another large earthquake, the 1580 earthquake casts doubt on the completeness of the region's historical earthquake catalog and the periodicity of its large earthquakes. That catalog, based on eyewitness accounts compiled mainly by Alexander Perrey and Fernand Montessus de Ballore, tells of large earthquakes in Chile's metropolitan region in 1575, 1647, 1730, 1822, 1906, and 1985. The addition of a large earthquake in 1580 implies greater variability in recurrence intervals and may also mean greater variety in earthquake sources.
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Huang, Y.
2017-12-01
A clear understanding of the source physics of induced seismicity is key to effective seismic hazard mitigation. In particular, resolving rupture processes can shed light on the stress state prior to the main shock, as well as the ground motion response. Recent numerical models suggest that, compared to their tectonic counterparts, induced earthquake ruptures are more prone to propagate unilaterally toward the injection well, where fluid pressure is high. However, this also depends on the location of the injection relative to the fault and has yet to be compared with field data. In this study, we utilize the rich pool of seismic data in the central US to constrain the rupture processes of major induced earthquakes. Implementing a forward-modeling method, we take smaller earthquake recordings as empirical Green's functions (eGf) to simulate the rupture direction of the beginning motion generated by large events. One advantage of the empirical approach is that it bypasses the fundamental difficulty in resolving path and site effects. We select eGf events that are close to the target events both in space and time. For example, we use a Mw 3.6 aftershock approximately 3 km from the 2011 Mw 5.7 earthquake in Prague, OK, as its eGf event. Preliminary results indicate a southwest rupture for the Prague main shock, which possibly implies a higher fluid-pressure concentration on the northeast end of the fault prior to the rupture. We will present further results for other Mw > 4.5 earthquakes in the states of Oklahoma and Kansas. With additional seismic stations installed in the past few years, events such as the 2014 Mw 4.9 Milan earthquake and the 2016 Mw 5.8 Pawnee earthquake are potential candidates with useful eGfs, as they both have good data coverage and a substantial number of aftershocks nearby. We will discuss the implications of our findings for the causative relationships between injection operations and the induced rupture process.
Seismicity pattern: an indicator of source region of volcanism at convergent plate margins
NASA Astrophysics Data System (ADS)
Špičák, Aleš; Hanuš, Václav; Vaněk, Jiří
2004-04-01
The results of a detailed investigation into the geometry of the distribution of earthquakes around and below the volcanoes Korovin, Cleveland, Makushin, Yake-Dake, Oshima, Lewotobi, Fuego, Sangay, Nisyros and Montagne Pelée at convergent plate margins are presented. The ISC hypocentral determinations for the period 1964-1999, based on data of the global seismic network and relocated by Engdahl, van der Hilst and Buland, have been used. The aim of this study has been to contribute to the solution of the problem of the location of source regions of primary magma for calc-alkaline volcanoes spatially and genetically related to the process of subduction. Several specific features of the seismicity pattern were revealed in this context. (i) A clear occurrence of the intermediate-depth aseismic gap (IDAG) in the Wadati-Benioff zone (WBZ) below all investigated active volcanoes. We interpret this part of the subducted slab, which does not contain any teleseismically recorded earthquake with magnitude greater than 4.0, as a partially melted domain of oceanic lithosphere and as a possible source of primary magma for calc-alkaline volcanoes. (ii) A set of earthquakes in the shape of a seismically active column (SAC) seems to exist in the continental wedge below the volcanoes Korovin, Makushin and Sangay. The seismically active columns probably reach from the Earth's surface down to the aseismic gap in the Wadati-Benioff zone. This points to the possibility that the upper mantle overlying the subducted slab does not contain large melted domains, displays intense fracturing, and is not likely to represent the site of magma generation. (iii) In the continental wedge below the volcanoes Cleveland, Fuego, Nisyros, Yake-Dake, Oshima and Lewotobi, shallow seismicity occurs down to a depth of 50 km. The domain without any earthquakes between the shallow seismically active column and the aseismic gap in the Wadati-Benioff zone in the depth range of 50-100 km does not exclude melting of the mantle above the slab as well. (iv) No earthquakes occur in the lithospheric wedge below the volcano Montagne Pelée. The source of primary magma could be located in the subducted slab as well as in the overlying mantle wedge. (v) Frequent aftershock sequences accompanying stronger earthquakes in the seismically active columns indicate high fracturing of the wedge below active volcanoes. (vi) The elongated shape of clusters of epicentres of earthquakes of the seismically active columns, as well as stable parameters of the available fault plane solutions, seem to reflect the existence of dominant, deeply rooted fracture zones below volcanoes. These facts also favour the location of primary magma in the subducting slab rather than in the overlying wedge. We suppose that melts advancing from the slab toward the Earth's surface may trigger the observed earthquakes in the continental wedge, which is critically pre-stressed by the process of subduction. However, for definitive conclusions it will be necessary to explain the occurrence of earthquake clusters below some volcanoes and the lack of seismicity below others, taking into account the uncertainty of focal depth determination from global seismological data in some regions.
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie
2017-04-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman earthquake, the 1994 Northridge earthquake, and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
NASA Astrophysics Data System (ADS)
Vater, Stefan; Behrens, Jörn
2017-04-01
Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Other data, e.g. from ocean buoys, are sometimes also included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the actual earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time allowing for high local resolution and geometric accuracy. The results are compared to measured data and to results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
Broadband Rupture Process of the 2001 Kunlun Fault (Mw 7.8) Earthquake
NASA Astrophysics Data System (ADS)
Antolik, M.; Abercrombie, R.; Ekstrom, G.
2003-04-01
We model the source process of the 14 November 2001 Kunlun fault earthquake using broadband body waves (P, SH) from the Global Digital Seismographic Network and both point-source and distributed-slip techniques. The point-source technique is a non-linear iterative inversion that solves for focal mechanism, moment rate function, depth, and rupture directivity. The P waves reveal a complex rupture process for the first 30 s, with smooth unilateral rupture toward the east along the Kunlun fault accounting for the remainder of the 120-s-long rupture. The focal mechanism obtained for the main portion of the rupture is (strike = 96°, dip = 83°, rake = -8°), which is consistent with both the Harvard CMT solution and observations of the surface rupture. The seismic moment is 5.29×10^20 N·m and the average rupture velocity is ~3.5 km/s. However, the initial portion of the P waves cannot be fit at all with this mechanism. A strong pulse visible in the first 20 s can only be matched with an oblique-slip subevent (Mw ~6.8-7.0) involving a substantial normal-faulting component, but the nodal planes of this mechanism are not well constrained. The first-motion polarities of the P waves clearly require a strike-slip mechanism with an orientation similar to that of the Kunlun fault. Field observations of the surface rupture (Xu et al., SRL, 73, No. 6) reveal a small 26-km-long strike-slip rupture at the far western end (90.5°E) with a 45-km-long gap and extensional step-over between this rupture and the main Kunlun fault rupture. We hypothesize that the initial fault break occurred on this segment, with release of the normal-faulting energy as a continuous rupture through the extensional step, enabling transfer of the slip to the main Kunlun fault. This process is similar to that which occurred during the 2002 Denali fault (Mw 7.9) earthquake sequence, except that 11 days elapsed between the October 23 (Mw 6.7) foreshock and the initial break of the Denali earthquake along a thrust fault.
Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering
Geist, Eric L.
2012-01-01
Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R² ≈ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distributions of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
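As a minimal illustration of estimating a power-law exponent from tsunami amplitudes recorded at a single station, the standard maximum-likelihood (Hill-type) estimator for a simple Pareto tail can be used; this is only a sketch with synthetic data, not the modified Pareto distribution fitted in the chapter.

```python
import numpy as np

def pareto_exponent_mle(amplitudes, a_min):
    """Maximum-likelihood estimate of beta in P(A > a) ~ (a / a_min)^(-beta),
    using only amplitudes at or above the threshold a_min."""
    a = np.asarray(amplitudes, dtype=float)
    a = a[a >= a_min]
    return a.size / np.sum(np.log(a / a_min))

# Synthetic tide-gauge amplitudes (meters) with a true exponent of about 1.2.
rng = np.random.default_rng(1)
a_min = 0.05
amps = (rng.pareto(1.2, size=300) + 1.0) * a_min
print(pareto_exponent_mle(amps, a_min))
```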
Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes
NASA Astrophysics Data System (ADS)
Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.
2015-12-01
Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Typically, because stress drop increases with depth, ground motion prediction equations model shallower events as having smaller ground motions at the same absolute hypocentral distance from the station. Induced earthquakes tend to occur at shallower depths than natural eastern US earthquakes and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes simply be understood by scaling known source-ground motion relations to account for the shallow depth or potentially smaller stress drops of these events, or is there an inherently different mechanism in play? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in the NGA-West2 database (primarily from California) as well as NGA-East, which covers the central and eastern United States and Canada. Preliminary analysis indicates that the induced ground motions appear similar to those from the NGA-West2 database. However, upon consideration of their shallower depths, ground motion behavior from induced events seems to fall in between the West data and that of NGA-East, so we explore the control of stress drop and depth on ground motion in more detail.
Modeling the Fluid Withdraw and Injection Induced Earthquakes
NASA Astrophysics Data System (ADS)
Meng, C.
2016-12-01
We present an open-source numerical code, Defmod, that allows one to model induced seismicity in an efficient and standalone manner. Fluid-withdrawal- and injection-induced earthquakes have been a great concern to industries including oil and gas production, wastewater disposal, and CO2 sequestration, and the ability to numerically model induced seismicity has long been desired. To do so, one has to consider at least two processes: a steady process that describes the inducing and aseismic stages before and between the seismic events, and an abrupt process that describes the dynamic fault rupture accompanied by seismic energy radiation during the events. The steady process can be adequately modeled by a quasi-static model, while the abrupt process has to be modeled by a dynamic model. In most published modeling works, only one of these processes is considered. Geomechanicists and reservoir engineers focus more on quasi-static modeling, whereas geophysicists and seismologists focus more on dynamic modeling. The finite element code Defmod combines these two models into a hybrid model that uses failure criteria and frictional laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loading, e.g. due to reservoir fluid withdrawal and/or injection, and by dynamic loading, e.g. due to preceding earthquakes. We demonstrate a case study for the 2013 Azle earthquake.
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
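A minimal sketch of the spectral division described here, stabilized with a simple water-level regularization (the variable names and the water-level fraction are illustrative assumptions, not Mueller's exact procedure):

```python
import numpy as np

def egf_deconvolve(main, egf, water_level=0.01):
    """Estimate the apparent source-time function of a larger event by spectral
    division with a smaller-event (empirical Green's function) record.

    main, egf   : time series of the large and small earthquake at one station
    water_level : fraction of the peak eGf spectral power used to fill spectral
                  notches and stabilize the division
    """
    n = max(len(main), len(egf))
    M = np.fft.rfft(main, n)
    E = np.fft.rfft(egf, n)
    power = np.abs(E) ** 2
    floor = water_level * power.max()
    quotient = M * np.conj(E) / np.maximum(power, floor)
    return np.fft.irfft(quotient, n)   # apparent (relative) source-time function
```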
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies lie within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee, and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground-motion and spectral-ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source-parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (eGf) analysis to validate the source parameter estimates.
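The corner-frequency/kappa trade-off mentioned here can be seen in the common far-field spectral model, in which an omega-square (Brune) source spectrum is multiplied by a site attenuation term exp(-pi*kappa*f). The following sketch fits both parameters jointly to one synthetic spectrum, purely to illustrate the trade-off; the study instead breaks the trade-off by stacking many events.

```python
import numpy as np
from scipy.optimize import curve_fit

def model_spectrum(f, omega0, fc, kappa):
    """Brune omega-square source spectrum attenuated by a kappa site term."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

# Synthetic "observed" spectrum with fc = 8 Hz and kappa = 0.04 s, plus noise.
rng = np.random.default_rng(3)
f = np.linspace(0.5, 40.0, 200)
obs = model_spectrum(f, 1.0, 8.0, 0.04) * (1.0 + 0.05 * rng.standard_normal(200))

popt, _ = curve_fit(model_spectrum, f, obs, p0=[1.0, 5.0, 0.02])
print(popt)   # recovered (omega0, fc, kappa); note how fc and kappa trade off
```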
A Bayesian approach to earthquake source studies
NASA Astrophysics Data System (ADS)
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
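To give a concrete sense of the sampling involved, the toy example below runs a bare-bones random-walk Metropolis sampler on a two-parameter linear problem with a positivity prior. It is not CATMIP (no tempering, no parallel chains, no resampling), and all numbers are hypothetical.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, seed=0):
    """Random-walk Metropolis sampling of an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    out = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy "slip inversion": two parameters, Gaussian misfit, positivity prior.
G = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8]])
d = G @ np.array([1.0, 2.0]) + 0.05 * np.random.default_rng(1).standard_normal(3)

def log_post(m):
    if np.any(m < 0.0):
        return -np.inf
    r = d - G @ m
    return -0.5 * np.sum(r ** 2) / 0.05 ** 2

samples = metropolis(log_post, x0=[0.5, 0.5], step=0.05, n_samples=20000)
print(samples[5000:].mean(axis=0), samples[5000:].std(axis=0))
```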
Earthquake and submarine landslide tsunamis: how can we tell the difference? (Invited)
NASA Astrophysics Data System (ADS)
Tappin, D. R.; Grilli, S. T.; Harris, J.; Geller, R. J.; Masterlark, T.; Kirby, J. T.; Ma, G.; Shi, F.
2013-12-01
Several major recent events have shown the tsunami hazard from submarine mass failures (SMF), i.e., submarine landslides. In 1992 a small earthquake-triggered landslide generated a tsunami over 25 meters high on Flores Island. In 1998 another small-earthquake-triggered sediment slump generated a tsunami up to 15 meters high that devastated the local coast of Papua New Guinea, killing 2,200 people. It was this event that led to the recognition of the importance of marine geophysical data in mapping the architecture of seabed sediment failures, which could then be used in modeling and validating the tsunami generating mechanism. Seabed mapping of the 2004 Indian Ocean earthquake rupture zone demonstrated, however, that large, if not great, earthquakes do not necessarily cause major seabed failures, but that along some convergent margins frequent earthquakes result in smaller sediment failures that are not tsunamigenic. Older events, such as Messina 1908, Makran 1945, Alaska 1946, and Java 2006, all have the characteristics of SMF tsunamis, but for these an SMF source has not been proven. When the 2011 tsunami struck Japan, it was generally assumed that it was directly generated by the earthquake. The earthquake has some unusual characteristics, such as a shallow rupture that is somewhat slow, but it is not a 'tsunami earthquake.' A number of simulations of the tsunami based on an earthquake source have been published, but in general the best results are obtained by adjusting fault rupture models with tsunami wave gauge or other data; to the extent that they can model the recorded tsunami data, this demonstrates self-consistency rather than validation. Here we consider some of the existing source models of the 2011 Japan event and present new tsunami simulations based on a combination of an earthquake source and an SMF mapped from offshore data. We show that the multi-source tsunami agrees well with available tide gauge data, field observations, and wave data from offshore buoys, and that the SMF generated the large runups in the Sanriku region (northern Tohoku). Our new results for the 2011 Tohoku event suggest that care is required in using tsunami wave and tide gauge data to both model and validate earthquake tsunami sources. They also suggest a potential pitfall in the use of tsunami waveform inversion from tide gauges and buoys to estimate the size and spatial characteristics of earthquake rupture: if the tsunami source has a significant SMF component, such studies may overestimate earthquake magnitude. Our seabed mapping identifies other large SMFs off Sanriku that have the potential to generate significant tsunamis and which should be considered in future analyses of the tsunami hazard in Japan. The identification of two major SMF-generated tsunamis (PNG and Tohoku), especially one associated with an M9 earthquake, is important in guiding future efforts at forecasting and mitigating the tsunami hazard from large megathrust-plus-SMF events both in Japan and globally.
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.
Aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of the underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate, specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched field processing.
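A toy version of the grid-search association step might look like the function below: for each trial hypocenter and origin time, count the station picks that match predicted P arrivals within a tolerance, and declare an event hypothesis only where the count is conservatively high. The homogeneous velocity, array shapes, and threshold logic are all hypothetical simplifications of the array-based system described here.

```python
import numpy as np

def count_matching_picks(grid_points, origin_times, station_xyz, picks,
                         vp=6.0, tol=1.5):
    """Count picks consistent with predicted P arrivals for each trial source.

    grid_points  : (n_grid, 3) candidate hypocenters (km)
    origin_times : (n_t,) candidate origin times (s)
    station_xyz  : (n_sta, 3) station coordinates (km)
    picks        : list of per-station numpy arrays of pick times (s)
    """
    counts = np.zeros((len(grid_points), len(origin_times)), dtype=int)
    for gi, gp in enumerate(grid_points):
        tt = np.linalg.norm(station_xyz - gp, axis=1) / vp  # predicted travel times
        for ti, t0 in enumerate(origin_times):
            pred = t0 + tt
            n_match = 0
            for s, p in enumerate(picks):
                if p.size and np.min(np.abs(p - pred[s])) <= tol:
                    n_match += 1
            counts[gi, ti] = n_match
    return counts

# Picks associated with high-count (trial location, time) pairs can then be
# removed from the detection lists before re-running the global association.
```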
NASA Astrophysics Data System (ADS)
Klyuchevskii, A. V.; Dem'yanovich, V. M.
2006-05-01
Investigation and understanding of the present-day geodynamic situation are of key importance for elucidating the laws and evolution of the seismic process in a seismically active region. In this work, seismic moments of nearly 26000 earthquakes with Kp ≥ 7 (MLH ≥ 2) that occurred in the southern Baikal region and northern Mongolia (SBNM) (48°-54°N, 96°-108°E) from 1968 through 1994 are determined from amplitudes and periods of maximum displacements in transverse body waves. The resulting set of seismic moments is used for a spatial-temporal analysis of the stress-strain state of the SBNM lithosphere. The stress fields of the Baikal rift and the India-Asia collision zone are thought to interact in the region studied. Since the seismic moment of a tectonic earthquake depends on the type of motion in the source, seismic moments and focal mechanisms of earthquakes belonging to four long-term aftershock and swarm clusters of shocks in the Baikal region were used to "calibrate" average seismic moments in accordance with the source faulting type. The study showed that the stress-strain state of the SBNM lithosphere is spatially inhomogeneous and nonstationary. A space-time discrepancy is observed in the formation of faulting types in the sources of weak (Kp = 7 and 8) and stronger (Kp ≥ 9) earthquakes. This discrepancy is interpreted in terms of rock fracture at various hierarchical levels of ruptures on differently oriented general, regional, and local faults. A gradual increase and an abrupt, nearly pulsed, decrease in the vertical component of the stress field Sv is a characteristic feature of the time variations. The zones where the stress Sv prevails are localized at "singular points" of the lithosphere. Shocks of various energy classes in these zones are dominated by the normal-fault slip mechanism. For earthquakes with Kp = 9, the source faulting changes with depth from the strike-slip type to the normal-strike-slip and normal types, suggesting an increase in Sv. On the whole, the results of this study are consistent with the synergism of open, unstable dissipative systems and can be used to interpret the main observed variations in the stress-strain state of the lithosphere in terms of spatiotemporal variations in the vertical component of the stress field Sv. This suggests the influence of rifting on the present-day geodynamic processes in the SBNM lithosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzooqi, Y A; Abou Elenean, K M; Megahed, A S
2008-02-29
On March 10 and September 13, 2007, two felt earthquakes with moment magnitudes 3.66 and 3.94 occurred in the eastern part of the United Arab Emirates (UAE). The two events were accompanied by a few smaller events. Being well recorded by the UAE and Oman digital broadband stations, they provide us an excellent opportunity to study the tectonic process and the present-day stress field acting in this area. In this study, we determined the focal mechanisms of the two main shocks by two methods (P-wave polarities and regional waveform inversion). Our results indicate a normal-faulting mechanism with a slight strike-slip component for the two studied events along a fault plane trending NNE-SSW, consistent with a suggested fault along the extension of the faults bounding the Bani Hamid area. The seismicity distribution between the two earthquake sequences reveals a noticeable gap that may be the site of a future event. The source parameters (seismic moment, moment magnitude, fault radius, stress drop, and displacement across the fault) were also estimated based on the far-field displacement spectra and interpreted in the context of the tectonic setting.
NASA Astrophysics Data System (ADS)
Major, J. R.; Liu, Z.; Harris, R. A.; Fisher, T. L.
2011-12-01
Using Dutch records of geophysical events in Indonesia over the past 400 years, together with tsunami modeling, we identify tsunami sources that have caused severe devastation in the past and are likely to recur in the near future. The earthquake history of western Indonesia has received much attention since the 2004 Sumatra earthquakes and subsequent events. However, strain rates along a variety of plate boundary segments are just as high in eastern Indonesia, where the earthquake history has not been investigated. Due to the rapid population growth in this region, it is essential and urgent to evaluate its earthquake and tsunami hazards. Arthur Wichmann's 'Earthquakes of the Indian Archipelago' shows that there were 30 significant earthquakes and 29 tsunamis between 1629 and 1877. One of the largest and best documented is the great earthquake and tsunami affecting the Banda Islands on 1 August 1629. It caused severe damage from a 15 m tsunami that arrived at the Banda Islands about half an hour after the earthquake. The earthquake was also recorded 230 km away in Ambon, but no tsunami is mentioned. This event was followed by at least 9 years of aftershocks. The combination of these observations indicates that the earthquake was most likely a mega-thrust event. We use a numerical simulation of the tsunami to locate the potential sources of the 1629 mega-thrust event and evaluate the tsunami hazard in eastern Indonesia. The numerical simulation was tested to establish the tsunami run-up amplification factor for this region by simulating the 1992 Flores Island (Hidayat et al., 1995) and 2006 Java (Kato et al., 2007) earthquake events. The results yield tsunami run-up amplification factors of 1.5 and 3, respectively. However, the Java earthquake is a unique case of slow rupture that was hardly felt. The fault parameters of recent earthquakes in the Banda region are used for the models. The modeling narrows the possible sources of a mega-thrust event the size of the one in 1629 to the Seram and Timor Troughs. For the Seram Trough source, a Mw 8.8 event produces run-up heights in the Banda Islands of 15.5 m with an arrival time of 17 minutes. For a Timor Trough earthquake near the Tanimbar Islands, a Mw 9.2 event is needed to produce a 15 m run-up height with an arrival time of 25 minutes. The main problem with the Timor Trough source is that it predicts run-up heights in Ambon of 10 m, which would likely have been recorded. Therefore, we conclude that the most likely source of the 1629 mega-thrust earthquake is the Seram Trough. No large earthquakes have been reported along the Seram Trough for over 200 years, although high rates of strain are measured across it. This study suggests that earthquakes generated along this fault zone could be extremely devastating to eastern Indonesia. We strive to raise awareness among local governments not to underestimate the natural hazard of this region, based on lessons learned from the 2004 Sumatra and 2011 Tohoku tsunamigenic mega-thrust earthquakes.
NASA Astrophysics Data System (ADS)
Ding, R.; He, T.
2017-12-01
With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau, which often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system and more accurately process and assess issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of a particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture consisting of data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event, and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system, which geographically covers the entire region of China and combines the population heatmap with data from the Earthquake Catalogs database, has been developed. Preliminary results indicate that the generation of dynamic population density heatmaps based on the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts, as well as helping responders and decision makers to evaluate and assess earthquake damage. Correlation analyses revealed that the aggregation and movement of people depended on various factors, including earthquake occurrence time and the location of the epicenter. This research hopes to continue to build upon the success of the prototype system in order to improve and extend the system to support the analysis of earthquakes and other types of natural hazard events.
Toward Broadband Source Modeling for the Himalayan Collision Zone
NASA Astrophysics Data System (ADS)
Miyake, H.; Koketsu, K.; Kobayashi, H.; Sharma, B.; Mishra, O. P.; Yokoi, T.; Hayashida, T.; Bhattarai, M.; Sapkota, S. N.
2017-12-01
The Himalayan collision zone is characterized by a distinctive tectonic setting. There are earthquakes with low-angle thrust faulting as well as continental outer-rise earthquakes. Recently, several historical earthquakes have been identified by active fault surveys [e.g., Sapkota et al., 2013]. Here we investigate source scaling for the Himalayan collision zone as a fundamental ingredient for constructing source models for seismic hazard assessment. Regarding source scaling for collision zones, Yen and Ma [2011] reported subduction-zone source scaling in Taiwan and pointed out non-self-similar scaling due to the finite crustal thickness. On the other hand, current global analyses of stress drop do not show anomalous values for continental collision zones [e.g., Allmann and Shearer, 2009]. Based on compiled profiles of the finite crustal thickness and dip-angle variations, we discuss whether a bend exists in the Himalayan source scaling and its implications for stress drop, which will control strong ground motions. Due to their quite low-angle dip faulting, recent earthquakes in the Himalayan collision zone lie at the upper bound of the current source scaling of rupture area versus seismic moment (< Mw 8.0) and do not show significant bending of the source scaling. Toward broadband source modeling for ground motion prediction, we perform empirical Green's function simulations for the 2009 Bhutan and 2015 Gorkha earthquake sequences to quantify both long- and short-period source spectral levels.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Dunajecka, M. A.
2007-02-01
The recent development of the Lithosphere-Atmosphere-Ionosphere (LAI) coupling model and experimental data from remote sensing satellites on thermal anomalies before major strong earthquakes have demonstrated that radon emanations in the area of earthquake preparation can produce variations of the air temperature and relative humidity. A specific repeating pattern of humidity and air temperature variations was revealed through analysis of meteorological data for several tens of strong earthquakes all over the world. The main physical process responsible for the observed variations is the latent heat released due to water vapor condensation on ions produced by air ionization from energetic α-particles emitted by 222Rn. The high effectiveness of this process was confirmed by laboratory and field experiments; hence the specific variations of air humidity and temperature can be used as an indicator of radon variations before earthquakes. We analyzed historical meteorological data from all over Mexico around the time of one of the most destructive earthquakes (the M8.1 Michoacan earthquake) that affected Mexico City on September 19, 1985. Several distinct zones of specific variations of the air temperature and relative humidity were revealed, which may indicate a different character of radon variations in different parts of Mexico before the Michoacan earthquake. The most interesting result on specific variations of atmospheric parameters was obtained in the Baja California region, close to the border of the Cocos and Rivera tectonic plates. This result demonstrates the possibility of increased radon variations not only in the vicinity of the earthquake source but also at the border of interacting tectonic plates. Recent results on Thermal InfraRed (TIR) anomalies registered by Meteosat 5 before the M7.9 Gujarat earthquake of 26 January 2001 support the idea of possible thermal effects at the border of interacting tectonic plates.
NASA Astrophysics Data System (ADS)
Blank, D. G.; Morgan, J.
2017-12-01
Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, demonstrating both their complexity and our limited understanding of fault processes and their controls. Numerical modeling provides a useful tool for simulating earthquakes and related slip events, and for making direct observations and correlations among the properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes and of what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn induces slip along the fault. A wide range of slip behaviors is observed, ranging from creep to stick-slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events in both space and time, for comparison with slip processes on natural faults.
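As a one-element analog of the stick-slip end member explored in these simulations, a single spring-slider with static and dynamic friction reproduces the basic load-fail-drop cycle. This is a deliberately simplified stand-in, not the discrete element model itself, and all parameter values are hypothetical and in arbitrary units.

```python
import numpy as np

def spring_slider(n_steps=20000, dt=1.0, k=50.0, v_load=1e-3,
                  mu_d=0.45, normal=100.0, seed=0):
    """Quasi-static spring-slider: the load point advances steadily, elastic
    stress builds until a (randomly perturbed) static friction level is
    exceeded, the block slips, and stress drops to the dynamic level.
    Returns a list of (time, stress_drop) slip events."""
    rng = np.random.default_rng(seed)
    stress = 0.0
    mu_s = 0.6 + 0.05 * rng.random()           # heterogeneous static friction
    events = []
    for i in range(n_steps):
        stress += k * v_load * dt              # tectonic-like loading
        if stress >= mu_s * normal:            # failure criterion
            events.append((i * dt, stress - mu_d * normal))
            stress = mu_d * normal             # stress drop of the slip event
            mu_s = 0.6 + 0.05 * rng.random()   # new strength for the next cycle
    return events

events = spring_slider()
print(len(events), "slip events; first stress drop:", round(events[0][1], 2))
```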
NASA Astrophysics Data System (ADS)
Cocco, M.; Feuillet, N.; Nostro, C.; Musumeci, C.
2003-04-01
We investigate the mechanical interactions between tectonic faults and volcanic sources through elastic stress transfer and discuss the results of several applications to Italian active volcanoes. We first present stress modeling results that point to a two-way coupling between Vesuvius eruptions and historical earthquakes in the Southern Apennines, which allows us to provide a physical interpretation of their statistical correlation. We then explore the elastic stress interaction between historical eruptions of the Etna volcano and the largest earthquakes in eastern Sicily and Calabria. We show that the large 1693 seismic event caused an increase of compressive stress along the rift zone, which can be associated with the lack of flank eruptions of the Etna volcano for about 70 years after the earthquake. Moreover, the largest Etna eruptions preceded the large 1693 seismic event by a few decades. Our modeling results clearly suggest that all these catastrophic events are tectonically coupled. We also investigate the effect of elastic stress perturbations on instrumental seismicity caused by magma inflation at depth, both at Etna and at the Alban Hills volcano. In particular, we model the seismicity pattern at the Alban Hills volcano (central Italy) during a seismic swarm that occurred in 1989-90 and interpret it in terms of Coulomb stress changes caused by magmatic processes in an extensional tectonic stress field. We verify that the earthquakes occur in areas of Coulomb stress increase and that their faulting mechanisms are consistent with the stress perturbation induced by the volcanic source. Our results suggest a link between faults and volcanic sources, which we interpret as a tectonic coupling explaining the seismicity in a large area surrounding the volcanoes.
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process and that the spatial distribution of epicentres can be represented by a set of polygonal source zones within which seismicity is uniform. Building on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Starting with the Amatrice earthquake of 24 August 2016, the subsequent damaging earthquakes in Central Italy over the following months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task; it also has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.
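As a rough illustration of the zone-free approach described above, the sketch below smooths a set of epicentres with an isotropic Gaussian kernel to obtain an annual activity-rate density on a grid. This is only a toy version under stated assumptions: operational kernel-smoothing hazard models typically use magnitude-dependent bandwidths and anisotropic kernels, and the epicentres here are synthetic.

```python
import numpy as np

# Toy spatial kernel smoothing of epicentres, the basic idea behind zone-free
# seismic hazard source models. A fixed-bandwidth Gaussian kernel is used here
# for simplicity only.

def smoothed_rate(grid_xy, epicentres_xy, catalogue_years, bandwidth_km=30.0):
    """Annual rate density (events / km^2 / yr) at each grid node."""
    rate = np.zeros(len(grid_xy))
    norm = 1.0 / (2.0 * np.pi * bandwidth_km**2 * catalogue_years)
    for ex, ey in epicentres_xy:
        d2 = (grid_xy[:, 0] - ex) ** 2 + (grid_xy[:, 1] - ey) ** 2
        rate += norm * np.exp(-0.5 * d2 / bandwidth_km**2)
    return rate

# Illustrative use: random epicentres in a 200 km x 200 km region, 100-yr catalogue
rng = np.random.default_rng(0)
epis = rng.uniform(0, 200, size=(50, 2))
xx, yy = np.meshgrid(np.linspace(0, 200, 41), np.linspace(0, 200, 41))
grid = np.column_stack([xx.ravel(), yy.ravel()])
print(f"peak rate density: {smoothed_rate(grid, epis, catalogue_years=100.0).max():.2e}")
```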
STSHV: a teleinformatic system for historical seismology in Venezuela
NASA Astrophysics Data System (ADS)
Choy, J. E.; Palme, C.; Altez, R.; Aranguren, R.; Guada, C.; Silva, J.
2013-05-01
From 1997 onwards, when the first "Jornadas Venezolanas de Sismicidad Historica" took place, considerable interest arose in Venezuela in organizing the available information related to historical earthquakes. At that time only one published historical earthquake catalogue existed, that of Centeno Grau, first published in 1949. That catalogue contained no references to its sources of information. Other catalogues existed, but they were internal reports for petroleum companies and therefore difficult to access. In 2000 Grases et al. re-edited the Centeno-Grau catalogue, producing a new, very complete catalogue with all sources well referenced and updated. The next step in organizing historical seismicity data was, from 2004 to 2008, the creation of the STSHV (Sistema de teleinformacion de Sismologia Historica Venezolana, http://sismicidad.hacer.ula.ve ). The idea was to bring together all information about destructive historical earthquakes in Venezuela in one place on the internet so it could be accessed easily by a broad public. There are two ways to access the system: the first by selecting an earthquake or a list of earthquakes, and the second by selecting an information source or a list of sources. For each earthquake there is a summary of general information and additional materials: a list of the source parameters published by different authors, a list of intensities assessed by different authors, a list of information sources, a short text summarizing the historical situation at the time of the earthquake, and a list of pictures if available. There are search facilities for the seismic events, and dynamic maps can be created. The information sources are classified as: books, handwritten documents, transcriptions of handwritten documents, documents published in books, journals and congress proceedings, newspapers, seismological catalogues, and electronic sources. There are facilities to find specific documents or lists of documents with common characteristics. For each document, general information is displayed together with an extract of the information relating to the earthquake. If the complete document was available and there was no problem with the publisher's rights, a PDF copy of the document was included. We found this system extremely useful for studying historical earthquakes, as one can immediately access previous research on an earthquake, and it allows the historical information to be checked easily and the intensity data to be validated. So far, the intensity data have not been completed for earthquakes after 2000. This information would be important for improving intensity-magnitude calibrations of historical events, and is a work in progress. On the other hand, it is important to mention that "El Catálogo Sismológico Venezolano del siglo XX" (The Venezuelan Seismological Catalogue of the 20th Century), published in 2012, updates seismic information up to 2007, and that the STSHV was one of its primary sources of information.
Numerical and laboratory simulation of fault motion and earthquake occurrence
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1978-01-01
Simple linear rheologies were used, with elastic forces driving the main events and viscoelastic forces being important for aftershock and creep occurrence. Friction, and its dependence on velocity, stress, and displacement, also plays a key role in determining how, when, and where fault motion occurs. The discussion of the qualitative behavior of the simulators focuses on the manner in which energy was stored in the system and released by the unstable and stable sliding processes. The numerical results emphasize the statistics of earthquake occurrence and the correlations among source parameters.
Tsunami Source Estimate for the 1960 Chilean Earthquake from Near- and Far-Field Observations
NASA Astrophysics Data System (ADS)
Ho, T.; Satake, K.; Watada, S.; Fujii, Y.
2017-12-01
The tsunami source of the 1960 Chilean earthquake was estimated from near- and far-field tsunami data. The 1960 Chilean earthquake is the largest earthquake ever instrumentally recorded. It caused a large tsunami that was recorded by 13 near-field tide gauges in South America and 84 far-field stations around the Pacific Ocean on the coasts of North America, Asia, and Oceania. The near-field stations had previously been used to estimate the tsunami source [Fujii and Satake, Pageoph, 2013]. However, far-field tsunami waveforms had not been utilized because of the discrepancy between observed and simulated waveforms: the observed waveforms at far-field stations arrive systematically later than the simulated waveforms. This phenomenon has also been observed in the tsunamis of the 2004 Sumatra earthquake, the 2010 Chilean earthquake, and the 2011 Tohoku earthquake. Recently, the factors responsible for the travel time delay have been explained [Watada et al., JGR, 2014; Allgeyer and Cummins, GRL, 2014], so the far-field data are now usable for tsunami source estimation. The phase correction method [Watada et al., JGR, 2014] converts tsunami waveforms computed with the linear long wave approximation into dispersive waveforms that account for the effects of the elasticity of the Earth and ocean, ocean density stratification, and the gravitational potential change associated with tsunami propagation. We apply this method to correct the computed waveforms. For the preliminary initial sea surface height inversion, we use 12 near-field stations and 63 far-field stations located in South and North America, on islands in the Pacific Ocean, and in Oceania. The tsunami source estimated from near-field stations is compared with the result from both near- and far-field stations. The two estimated sources show a similar pattern: a large sea surface displacement concentrated south of the epicenter, close to the coast, and extending further south. However, the source estimated from near-field stations alone shows larger displacement than the one estimated from the combined dataset.
Tectonic tremor activity associated with teleseismic and nearby earthquakes
NASA Astrophysics Data System (ADS)
Chao, K.; Obara, K.; Peng, Z.; Pu, H. C.; Frank, W.; Prieto, G. A.; Wech, A.; Hsu, Y. J.; Yu, C.; Van der Lee, S.; Apley, D. W.
2016-12-01
Tectonic tremor is an extremely stress-sensitive seismic phenomenon located in the brittle-ductile transition section of a fault. To better understand the stress interaction between tremor and earthquakes, we conduct the following studies: (1) search for triggered tremor globally, (2) examine ambient tremor activity associated with distant earthquakes, and (3) quantify the temporal variation of ambient tremor activity before and after nearby earthquakes. First, we developed a Matlab toolbox to facilitate the global search for triggered tremor. We have discovered new tremor sources on inland faults in Kyushu, Kanto, and Hokkaido in Japan; in southern Chile, Ecuador, and central Colombia in South America; and in southern Italy. Our findings suggest that tremor is more common than previously believed and indicate the potential existence of ambient tremor in regions where triggered tremor is active. Second, we adapt statistical analyses to examine whether the long-term ambient tremor rate may be affected by the dynamic stress of teleseismic earthquakes. We analyzed data in Nankai, Hokkaido, Cascadia, and Taiwan. Our preliminary results did not show an apparent increase in the ambient tremor rate after the passage of surface waves. Third, we quantify temporal changes in ambient tremor activity before and after the occurrence of local earthquakes with magnitudes >=5.5 under the southern Central Range of Taiwan from 2004 to 2016. For one particular case, we found a temporal variation of the tremor rate before and after the 2010/03/04 Mw6.3 earthquake, located about 20 km away from the active tremor source. The long-term increase in the tremor rate after the earthquake could have been caused by an increase in static stress following the mainshock. For comparison, clear evidence from seismic and GPS observations indicates a short-term increase in the tremor rate a few weeks before the mainshock. The increase in the tremor rate before the mainshock could correlate with stress changes in the earthquake rupture zone. Our study provides direct observations implying that stress-sensitive tectonic tremor may reflect stress variation during the nucleation process of a nearby earthquake.
Ruppert, N.A.; Prejean, S.; Hansen, R.A.
2011-01-01
An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August of 2008. In retrospect, the first earthquakes in the swarm were detected about 1 month prior to the eruption onset. Activity in the swarm quickly intensified less than 48 h prior to the first large explosion and subsequently subsided with the decline of eruptive activity. The largest earthquake measured moment magnitude 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear-failure earthquake features were a b value of 0.9, most earthquakes having impulsive P and S arrivals and higher-frequency content, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and increasing magnitudes with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the N-S direction, either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust. Although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field. Copyright © 2011 by the American Geophysical Union.
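The b value quoted for the swarm can be estimated from a catalogue with the standard Aki (1965) maximum-likelihood formula. The short sketch below shows that calculation on synthetic magnitudes; the completeness magnitude, binning width and catalogue are illustrative assumptions, not the Kasatochi data.

```python
import numpy as np

def b_value_ml(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b value for events at or above completeness mc."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, b / np.sqrt(len(m))        # estimate and first-order standard error

# Synthetic Gutenberg-Richter magnitudes with a true b of about 0.9
rng = np.random.default_rng(1)
mc = 1.5
mags = np.round(mc + rng.exponential(scale=1.0 / (0.9 * np.log(10)), size=500), 1)
b, err = b_value_ml(mags, mc)
print(f"b = {b:.2f} +/- {err:.2f}")
```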
NASA Astrophysics Data System (ADS)
Earnest, A.; Sunil, T. C.
2014-12-01
A recent Mw 6.9 earthquake occurred on September 18, 2011 in the Sikkim-Nepal border region. The hypocenter parameters determined by the Indian Meteorological Department place the epicentre at 27.7°N, 88.2°E with a focal depth of 58 km, located close to the north-western terminus of the Tista lineament. The reported aftershocks are linearly distributed between the Tista and Golapara lineaments. The microscopic and geomorphologic studies infer dextral strike-slip faulting, possibly along a NW-SE oriented fault. Landslides caused by this earthquake are distributed along the Tista lineament. On the basis of the aftershock distribution, Kumar et al. (2012) have suggested a possible NW orientation of the causative fault plane. The epicentral region of Sikkim, bordered by Nepal, Bhutan and Tibet, comprises a segment of relatively lower-level seismicity in the 2500 km stretch of the active Himalayan Belt. The north Sikkim earthquake was felt in most parts of Sikkim and eastern Nepal; it killed more than 100 people and caused damage to buildings, roads and communication infrastructure. In this study we focus on the earthquake source parameters and the kinematic rupture process of this particular event. We used teleseismic body waveforms to determine the rupture pattern of the earthquake. Seismic rupture patterns are generally complex, and the result can be interpreted in terms of a distribution of asperities and barriers on the fault plane (Kikuchi and Kanamori, 1991). The methodology we adopted is based on the teleseismic body wave inversion method of Kikuchi and Kanamori (1982, 1986 and 1991). We used teleseismic P-wave records observed at distances between 50° and 90° with a good signal-to-noise ratio, in order to avoid upper mantle and core triplications and to limit the path length within the crust. Synthetic waveforms were generated using a triangular source time function in order to determine the components of the moment tensor and the focal depth of the main shock. We will discuss the average stress drop and the possible mechanisms behind the depth of the event, in a region well known for events beyond the Moho transition zone.
Hanson, Stanley L.; Perkins, David M.
1995-01-01
The construction of a probabilistic ground-motion hazard map for a region follows a sequence of analyses beginning with the selection of an earthquake catalog and ending with the mapping of calculated probabilistic ground-motion values (Hanson and others, 1992). An integral part of this process is the creation of sources used for the calculation of earthquake recurrence rates and ground motions. These sources consist of areas and lines that are representative of geologic or tectonic features and faults. After the design of the sources, it is necessary to arrange the coordinate points in a particular order compatible with the input format for the SEISRISK-III program (Bender and Perkins, 1987). Source zones are usually modeled as point-rupture sources. Where applicable, linear rupture sources are modeled with articulated lines, representing known faults, or a field of parallel lines, representing a generalized distribution of hypothetical faults. Based on the distribution of earthquakes throughout the individual source zones (or a collection of several sources), earthquake recurrence rates are computed for each of the sources, and minimum and maximum magnitudes are assigned. Over the period from 1978 to 1980, several conferences were held by the USGS to solicit information on regions of the United States for the purpose of creating source zones for computation of probabilistic ground motions (Thenhaus, 1983). As a result of these regional meetings and previous work in the Pacific Northwest (Perkins and others, 1980), the California continental shelf (Thenhaus and others, 1980), and the Eastern outer continental shelf (Perkins and others, 1979), a consensus set of source zones was agreed upon and subsequently used to produce a national ground motion hazard map for the United States (Algermissen and others, 1982). In this report and on the accompanying disk we provide a complete list of source areas and line sources as used for the 1982 and later 1990 seismic hazard maps for the conterminous U.S. and Alaska. These source zones are represented in the input form required for the hazard program SEISRISK-III, and they include the attenuation table and several other input parameter lines normally found at the beginning of an input data set for SEISRISK-III.
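For readers unfamiliar with how recurrence rates enter such source models, the sketch below turns Gutenberg-Richter a and b values for a zone into annual rates per magnitude bin between an assigned minimum and maximum magnitude. The numbers are illustrative, and the output is not the actual SEISRISK-III input format.

```python
import numpy as np

# Toy derivation of annual recurrence rates per magnitude bin for one source
# zone from Gutenberg-Richter a and b values; a, b and the magnitude limits
# below are illustrative placeholders.

def bin_rates(a, b, m_min, m_max, dm=0.5):
    """Annual number of events in each magnitude bin [m, m + dm)."""
    edges = np.arange(m_min, m_max + 1e-9, dm)
    cum = 10.0 ** (a - b * edges)            # cumulative annual rate N(>= m)
    return edges[:-1], cum[:-1] - cum[1:]    # incremental rate per bin

mags, rates = bin_rates(a=3.2, b=0.9, m_min=5.0, m_max=7.5)
for m, r in zip(mags, rates):
    print(f"M {m:.1f}-{m + 0.5:.1f}: {r:.4f} events/yr")
```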
NASA Astrophysics Data System (ADS)
De Novellis, V.; Carlino, S.; Castaldo, R.; Tramelli, A.; De Luca, C.; Pino, N. A.; Pepe, S.; Convertito, V.; Zinno, I.; De Martino, P.; Bonano, M.; Giudicepietro, F.; Casu, F.; Macedonio, G.; Manunta, M.; Cardaci, C.; Manzo, M.; Di Bucci, D.; Solaro, G.; Zeni, G.; Lanari, R.; Bianco, F.; Tizzani, P.
2018-03-01
The causative source of the first damaging earthquake instrumentally recorded on the Island of Ischia, which occurred on 21 August 2017, has been studied through a multiparametric geophysical approach. In order to investigate the source geometry and kinematics, we exploit seismological, Global Positioning System, and Sentinel-1 and COSMO-SkyMed differential interferometric synthetic aperture radar coseismic measurements. Our results indicate that the solutions retrieved from the geodetic data modeling and from the seismological data are plausible; in particular, the best-fit solution consists of an E-W striking, south-dipping normal fault with its center located at a depth of 800 m. Moreover, the retrieved causative fault is consistent with the rheological stratification of the crust in this zone. This study allows us to improve the knowledge of the volcano-tectonic processes occurring on the Island, which is crucial for a better assessment of the seismic risk in the area.
NASA Astrophysics Data System (ADS)
De Vecchi, Daniele; Harb, Mostapha; Dell'Acqua, Fabio; Aurelio Galeazzo, Daniel
2015-04-01
Aim: The paper introduces an integrated set of open-source tools designed to process medium- and high-resolution imagery with the aim of extracting vulnerability indicators [1]. Problem: In the context of risk monitoring [2], a series of vulnerability proxies can be defined, such as the extension of a built-up area or building regularity [3]. Different open-source C and Python libraries are already available for image processing and geospatial information handling (e.g. OrfeoToolbox, OpenCV and GDAL). They include basic processing tools but not vulnerability-oriented workflows. Therefore, it is of significant importance to provide end-users with a set of tools capable of returning information at a higher level. Solution: The proposed set of Python algorithms is a combination of low-level image processing and geospatial information handling tools along with high-level workflows. In particular, two main products are released under the GPL license: the source code, oriented towards developers, and a QGIS plugin. These tools were produced within the SENSUM project framework (ended December 2014), where the main focus was on earthquake and landslide risk. Further development and maintenance is guaranteed by the decision to include them in the platform designed within the FP7 RASOR project. Conclusion: In the absence of a unified software suite for vulnerability indicator extraction, the proposed solution can provide inputs for already available models such as the Global Earthquake Model. The inclusion of the proposed set of algorithms within the RASOR platform can guarantee support and enlarge the community of end-users. Keywords: Vulnerability monitoring, remote sensing, optical imagery, open-source software tools References [1] M. Harb, D. De Vecchi, F. Dell'Acqua, "Remote sensing-based vulnerability proxies in the EU FP7 project SENSUM", Symposium on earthquake and landslide risk in Central Asia and Caucasus: exploiting remote sensing and geo-spatial information management, 29-30th January 2014, Bishkek, Kyrgyz Republic. [2] UNISDR, "Living with Risk", Geneva, Switzerland, 2004. [3] P. Bisch, E. Carvalho, H. Degree, P. Fajfar, M. Fardis, P. Franchin, M. Kreslin, A. Pecker, "Eurocode 8: Seismic Design of Buildings", Lisbon, 2011. (SENSUM: www.sensum-project.eu, grant number: 312972 ) (RASOR: www.rasor-project.eu, grant number: 606888 )
Quantitative Earthquake Prediction on Global and Regional Scales
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir G.
2006-03-01
The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at the cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology designed for prediction of M8.0+ earthquakes can be rescaled for prediction both of smaller-magnitude earthquakes (e.g., down to M5.5+ in Italy) and of mega-earthquakes of M9.0+. Monitoring at regional scales may require application of a recently proposed scheme for the spatial stabilization of the intermediate-term middle-range predictions. The scheme guarantees a more objective and reliable diagnosis of times of increased probability and is less restrictive on input seismic data. It makes feasible the re-establishment of seismic monitoring aimed at prediction of large-magnitude earthquakes in the Caucasus and Central Asia, which, to our regret, was discontinued in 1991. The first results of that monitoring (1986-1990) were encouraging, at least for M6.5+.
NASA Astrophysics Data System (ADS)
Carvajal, M.; Cisternas, M.; Catalán, P. A.
2017-05-01
Historical records of an earthquake that occurred in 1730 and affected Metropolitan Chile provide essential clues about the source characteristics of future earthquakes in the region. The earthquake and tsunami of 1730 have been recognized as the largest to occur in Metropolitan Chile since the beginning of written history. The earthquake destroyed buildings along >1000 km of the coast and produced a large tsunami that caused damage as far away as Japan. Here its source characteristics are inferred by comparing local tsunami inundations computed from hypothetical earthquakes of varying magnitude and depth with those inferred from historical observations. It is found that a 600-800 km long rupture involving average slip of 10-14 m (Mw 9.1-9.3) best explains the observed tsunami heights and inundations. This large earthquake magnitude is supported by the 1730 tsunami heights inferred in Japan. The inundation results combined with local uplift reports suggest a southward increase of the slip depth along the rupture zone of the 1730 earthquake. While shallow slip in the area north of the 2010 earthquake rupture zone is required to explain the reported inundation, only deeper slip in this area can explain the coastal uplift reports. Since the later earthquakes of the region involved little or no slip at shallow depths, near-future earthquakes in Metropolitan Chile could release the shallow slip accumulated since 1730 and thus lead to strong tsunami excitation. Moderate shaking from a shallow earthquake could delay tsunami evacuation for the most populated coastal region of Chile.
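The link between the quoted rupture dimensions, slip and magnitude is standard seismic-moment arithmetic, sketched below. The rigidity and the rupture width used here are assumptions for illustration, not values taken from the study, so the resulting magnitudes only roughly reproduce the quoted Mw 9.1-9.3 range.

```python
import numpy as np

# Toy moment-magnitude calculation: M0 = rigidity * length * width * slip,
# Mw = (2/3) * (log10(M0[N m]) - 9.1). Width and rigidity are assumed values.

def moment_magnitude(length_km, width_km, slip_m, rigidity_pa=4.0e10):
    m0 = rigidity_pa * (length_km * 1e3) * (width_km * 1e3) * slip_m  # N m
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

for L, D in [(600, 10), (800, 14)]:
    print(f"L = {L} km, slip = {D} m -> Mw ~ {moment_magnitude(L, 150, D):.2f}")
```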
NASA Astrophysics Data System (ADS)
Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.
2015-12-01
On the Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low-frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance between the data and model is low. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method in which variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies between rupture patches and rupture barriers are not discernible. Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones, respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along-strike fault zone properties on the Gofar transform fault.
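The conversion from a corner frequency and seismic moment to a stress drop under an omega-squared (Brune-type) source model is sketched below. The corner frequency, magnitude, shear-wave speed and the k coefficient are illustrative assumptions rather than values measured on Gofar.

```python
import numpy as np

# Toy Brune-type stress-drop calculation for a circular source of radius
# r = k * beta / fc, with stress drop = 7 * M0 / (16 * r^3). All inputs are
# illustrative placeholders.

def brune_stress_drop(m0_nm, fc_hz, beta_ms, k=0.37):
    """Stress drop in MPa from seismic moment (N m) and corner frequency (Hz)."""
    r = k * beta_ms / fc_hz                      # source radius, m
    return 7.0 * m0_nm / (16.0 * r**3) / 1e6     # Pa -> MPa

m0 = 10 ** (1.5 * 3.8 + 9.1)                     # moment of an Mw 3.8 event, N m
print(f"stress drop ~ {brune_stress_drop(m0, fc_hz=4.0, beta_ms=3500.0):.1f} MPa")
```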
NASA Astrophysics Data System (ADS)
Nakahara, H.
2003-12-01
The 2003 Miyagi-Oki earthquake (M 7.0) took place on May 26, 2003 in the subducting Pacific plate beneath northeastern Japan. The focal depth is around 70 km. The focal mechanism is of reverse type on a fault plane dipping steeply to the west. Fortunately, there were no fatalities; however, this earthquake caused more than 100 injuries, about 2000 damaged houses, and other losses. About 50 km to the south of this focal area, an interplate earthquake of M 7.5, the Miyagi-Ken-Oki earthquake, is expected to occur in the near future, so the relation between this earthquake and the expected Miyagi-Ken-Oki earthquake attracts public attention. Seismic-energy distributions on earthquake fault planes estimated by envelope inversion analyses can contribute to a better understanding of the earthquake source process. For moderate to large earthquakes, seismic energy at frequencies higher than 1 Hz is sometimes much larger than the level expected from the omega-squared model with source parameters estimated by lower-frequency analyses. Therefore, an accurate estimation of seismic energy at such high frequencies is important for the estimation of dynamic source parameters such as the seismic energy or the apparent stress. In this study, we carry out an envelope inversion analysis based on the method of Nakahara et al. (1998) and clarify the spatial distribution of high-frequency seismic energy radiation on the fault plane of this earthquake. For the envelope inversion analysis we use the three-component sum of mean squared velocity seismograms multiplied by the density of the earth medium, here called envelopes. Four frequency bands of 1-2, 2-4, 4-8, and 8-16 Hz are adopted. We use envelopes in the time window from the onset of S waves to a lapse time of 51.2 s. Green's functions of envelopes, representing the energy propagation process through a scattering medium, are calculated based on radiative transfer theory and are characterized by parameters of scattering attenuation and intrinsic absorption. We use the values obtained for northeastern Japan (Sakurai, 1995). We assume the fault plane as follows: strike = 193°, dip = 69°, rake = 87°, length = 30 km, width = 25 km, with reference to a low-frequency waveform inversion analysis (e.g. Yagi, 2003). We divide this fault plane into 25 subfaults, each of which is a 5 km x 5 km square. Rupture velocity is assumed to be constant. Seismic energy is radiated from a point source as soon as the rupture front passes the center of each subfault. The time function of energy radiation is assumed to be a box-car function. The amount of seismic energy from all the subfaults and the site amplification factors for all the stations are estimated by the envelope inversion method; the rupture velocity and the duration of the box-car function are estimated by a grid search. Theoretical envelopes calculated with the best-fit parameters generally fit the observed ones. The rupture velocity and duration were estimated as 3.8 km/s and 1.6 s, respectively. The high-frequency seismic energy was found to be radiated mainly from two spots on the fault plane: the first is around the initial rupture point and the second is in the northern part of the fault plane. These two spots correspond to two observed peaks on the envelopes. The amount of seismic energy increases with increasing frequency in the 1-16 Hz band, which contradicts the expectation from the omega-squared model. Therefore, strong radiation of higher-frequency seismic energy is a prominent characteristic of this earthquake.
Acknowledgements: We used strong-motion seismograms recorded by the K-NET and KiK-net of NIED, JAPAN.
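A minimal sketch of how such envelopes can be formed from three-component velocity records is given below: band-pass filter in octave bands, square, sum the components, multiply by an assumed density, and smooth. The sampling rate, density, smoothing window and synthetic input are assumptions for illustration, not the K-NET/KiK-net processing used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy construction of mean-squared velocity envelopes (an energy-density proxy)
# in octave frequency bands from a three-component velocity record.

def ms_envelope(vel_3c, fs, band, rho=2700.0, smooth_s=1.0):
    """Band-limited, density-scaled, smoothed sum of squared velocities."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, vel_3c, axis=1)            # shape (3, nsamples)
    energy = rho * np.sum(filtered**2, axis=0)
    win = np.ones(int(smooth_s * fs)) / (smooth_s * fs)  # running-mean smoothing
    return np.convolve(energy, win, mode="same")

fs = 100.0
t = np.arange(0, 60, 1 / fs)
vel = np.random.default_rng(2).normal(size=(3, t.size)) * np.exp(-t / 20)  # synthetic record
for band in [(1, 2), (2, 4), (4, 8), (8, 16)]:
    env = ms_envelope(vel, fs, band)
    print(band, f"peak envelope = {env.max():.2e}")
```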
An updated stress map of the continental United States reveals heterogeneous intraplate stress
NASA Astrophysics Data System (ADS)
Levandowski, Will; Herrmann, Robert B.; Briggs, Rich; Boyd, Oliver; Gold, Ryan
2018-06-01
Knowledge of the state of stress in Earth's crust is key to understanding the forces and processes responsible for earthquakes. Historically, low rates of natural seismicity in the central and eastern United States have complicated efforts to understand intraplate stress, but recent improvements in seismic networks and the spread of human-induced seismicity have greatly improved data coverage. Here, we compile a nationwide stress map based on formal inversions of focal mechanisms that challenges the idea that deformation in continental interiors is driven primarily by broad, uniform stress fields derived from distant plate boundaries. Despite plate-boundary compression, extension dominates roughly half of the continent, and second-order forces related to lithospheric structure appear to control extension directions. We also show that the states of stress in several active eastern United States seismic zones differ significantly from those of surrounding areas and that these anomalies cannot be explained by transient processes, suggesting that earthquakes are focused by persistent, locally derived sources of stress. Such spatially variable intraplate stress appears to justify the current, spatially variable estimates of seismic hazard. Future work to quantify sources of stress, stressing-rate magnitudes and their relationship with strain and earthquake rates could allow prospective mapping of intraplate hazard.
Earthquake nucleation on faults with rate- and state-dependent strength
Dieterich, J.H.
1992-01-01
Dieterich, J.H., 1992. Earthquake nucleation on faults with rate- and state-dependent strength. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 115-134. Faults with rate- and state-dependent constitutive properties reproduce a range of observed fault slip phenomena, including spontaneous nucleation of slip instabilities at stresses above some critical stress level and recovery of strength following slip instability. Calculations with a plane-strain fault model with spatially varying properties demonstrate that accelerating slip precedes instability and becomes localized to a fault patch. The dimensions of the fault patch follow scaling relations for the minimum critical length for unstable fault slip. The critical length is a function of normal stress, loading conditions and constitutive parameters, which include Dc, the characteristic slip distance. If slip starts on a patch that exceeds the critical size, the length of the rapidly accelerating zone tends to shrink to the characteristic size as the time of instability approaches. Solutions have been obtained for a uniform, fixed-patch model that are in good agreement with results from the plane-strain model. Over a wide range of conditions, above the steady-state stress, the logarithm of the time to instability decreases linearly as the initial stress increases. Because nucleation patch length and premonitory displacement are proportional to Dc, the moment of premonitory slip scales as Dc^3. The scaling of Dc is currently an open question. Unless Dc for earthquake faults is significantly greater than that observed on laboratory faults, premonitory strain arising from the nucleation process for earthquakes may be too small to detect using current observation methods. Excluding the possibility that Dc in the nucleation zone controls the magnitude of the subsequent earthquake, the source dimensions of the smallest earthquakes in a region provide an upper limit for the size of the nucleation patch. © 1992.
e-Science on Earthquake Disaster Mitigation by EUAsiaGrid
NASA Astrophysics Data System (ADS)
Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong
2010-05-01
Although earthquakes are not predictable at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center and seismic wave propagation analysis over the gLite e-Science infrastructure, we can gain much better knowledge of the impact and vulnerability associated with potential earthquake hazards. This application also demonstrates an e-Science way to investigate unknown earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails consolidation and sharing of seismology and geology knowledge. Capability building in seismic wave propagation analysis implies the predictability of potential hazard impacts. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies and to support seismology research through e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge on seismic wave analysis among 5 European and Asian partners and the data challenge for data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of seismograms at any sensor point for a specific epicenter. To ease access to all the services based on users' workflows and retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflows, services and user communities will be implemented based on typical use cases. In the future, extension of the earthquake wave propagation work to tsunami mitigation would be feasible once the user community support is in place.
NASA Astrophysics Data System (ADS)
Ichinose, Gene Aaron
The source parameters for eastern California and western Nevada earthquakes are estimated from regionally recorded seismograms using a moment tensor inversion. We use the point source approximation and fit the seismograms at long periods. We generated a moment tensor catalog for Mw > 4.0 since 1997 and Mw > 5.0 since 1990. The catalog includes centroid depths, seismic moments, and focal mechanisms. The regions with the most moderate-sized earthquakes in the last decade were aftershock zones located in Eureka Valley, Double Spring Flat, Coso, Ridgecrest, Fish Lake Valley, and Scotty's Junction. The remaining moderate-sized earthquakes were distributed across the region. The 1993 (Mw 6.0) Eureka Valley earthquake occurred in the Eastern California Shear Zone. Careful aftershock relocations were used to resolve structure from aftershock clusters. The mainshock appears to rupture along the western side of the Last Chance Range along a 30° to 60° west-dipping fault plane, consistent with previous geodetic modeling. We estimate the source parameters for aftershocks at source-receiver distances less than 20 km using waveform modeling. The relocated aftershocks and waveform modeling results do not indicate any significant evidence of low-angle faulting (dips > 30°). The results did reveal deformation along vertical faults within the hanging-wall block, consistent with observed surface rupture along the Saline Range above the dipping fault plane. The 1994 (Mw 5.8) Double Spring Flat earthquake occurred along the eastern Sierra Nevada between overlapping normal faults. Aftershock migration and cross-fault triggering occurred in the following two years, producing seventeen Mw > 4 aftershocks. The source parameters for the largest aftershocks were estimated from regionally recorded seismograms using moment tensor inversion. We also estimate the source parameters for two moderate-sized earthquakes which occurred near Reno, Nevada: the 1995 (Mw 4.4) Border Town and the 1998 (Mw 4.7) Incline Village earthquakes. We test how such stress interactions affected a cluster of six large earthquakes (Mw 6.6 to 7.5) between 1915 and 1954 within the Central Nevada Seismic Belt. We compute the static stress changes for these earthquakes using dislocation models based on the location and amount of surface rupture. (Abstract shortened by UMI.)
Landslide maps and seismic noise: Rockmass weakening caused by shallow earthquakes
NASA Astrophysics Data System (ADS)
Uchida, Tara; Marc, Odin; Sens-Schönfelder, Christoph; Sawazaki, Kaoru; Hobiger, Manuel; Hovius, Niels
2015-04-01
Some studies have suggested that the shaking and deformation associated with earthquakes result in a temporary increase in hillslope erodibility. However, very few data have been able to clarify this effect. We present integrated geomorphic data constraining an elevated landslide rate following 4 continental shallow earthquakes: the Mw 6.9 Finisterre (1993), the Mw 7.6 ChiChi (1999), the Mw 6.6 Niigata (2004) and the Mw 6.8 Iwate-Miyagi (2008) earthquakes. We constrained the magnitude, the recovery time and, to some extent, the mechanism at the source of this elevated landslide rate. We provide evidence excluding aftershocks and rain forcing intensity as possible mechanisms, leaving subsurface weakening as the most likely. The landslide data suggest that this ground strength weakening is not limited to the soil cover but also affects the shallow bedrock. Additionally, we used ambient noise autocorrelation techniques to monitor shallow subsurface seismic velocity within the epicentral areas of three of those earthquakes. For most stations we observe a velocity drop followed by a recovery process lasting several years, in fair agreement with the recovery time estimated from the landslide observations. Thus a common process could alter the strength of the uppermost 10 m of soil and rock and simultaneously drive the landslide rate increase and the seismic velocity drop. Firmly demonstrating this link requires additional constraints on the interpretation of the seismic signal, but it would provide a very useful tool for post-earthquake risk management.
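One common way to extract a relative velocity change (dv/v) from repeated noise correlation functions is the stretching technique, sketched below on synthetic waveforms: the current trace is stretched or compressed in time until it best matches a reference, and the best stretch factor gives -dv/v. This is a toy illustration under stated assumptions, not the processing used in the study.

```python
import numpy as np

# Toy stretching-method measurement of dv/v between a reference and a current
# correlation function; the synthetic waveforms and grid settings are
# illustrative only.

def stretching_dvv(ref, cur, t, eps_max=0.01, n_eps=201):
    """Return dv/v (positive = velocity increase) and the best correlation."""
    best_cc, best_eps = -1.0, 0.0
    for eps in np.linspace(-eps_max, eps_max, n_eps):
        stretched = np.interp(t * (1.0 + eps), t, cur)   # cur evaluated at t*(1+eps)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_cc, best_eps = cc, eps
    return -best_eps, best_cc          # dv/v = -epsilon at the best match

t = np.linspace(0, 20, 2001)
ref = np.sin(2 * np.pi * 1.0 * t) * np.exp(-t / 10)
cur = np.sin(2 * np.pi * 1.0 * t * (1 - 0.003)) * np.exp(-t / 10)  # 0.3% slower medium
dvv, cc = stretching_dvv(ref, cur, t)
print(f"dv/v = {dvv:+.4f} (correlation {cc:.3f})")
```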
Source characteristics of the Nicaraguan tsunami earthquake of September 2, 1992
NASA Astrophysics Data System (ADS)
Ide, Satoshi; Imamura, Fumihiko; Yoshida, Yasuhiro; Abe, Katsuyuki
1993-05-01
The source mechanism of the Nicaraguan tsunami earthquake of September 2, 1992 is studied via waveforms of body waves and surface waves recorded on global broadband seismographs. The possibility of a single force is ruled out from radiation patterns and the amplitude ratio of Rayleigh and Love waves. The main shock is interpreted as a low-angle thrust fault with strike of 302 deg, dip of 16 deg, and slip of 87 deg, with the Cocos plate underthrusting beneath the Caribbean plate. The seismic moment from surface wave analysis is 3.0 x 10^20 N m. The source dimension is estimated to be 200 x 100 km from the aftershock area. The inversion results of body waves suggest bilateral rupture with a rupture velocity as low as 1.5 km/s and a duration of about 100 s. The source process time is unusually long, from which it is inferred that the associated crustal deformation has a long time constant.
Feasibility Study of Earthquake Early Warning in Hawai`i For the Mauna Kea Thirty Meter Telescope
NASA Astrophysics Data System (ADS)
Okubo, P.; Hotovec-Ellis, A. J.; Thelen, W. A.; Bodin, P.; Vidale, J. E.
2014-12-01
Earthquakes, including large damaging events, are as central to the geologic evolution of the Island of Hawai`i as its more famous volcanic eruptions and lava flows. Increasing and expanding development of facilities and infrastructure on the island continues to increase exposure and risk associated with strong ground shaking resulting from future large local earthquakes. Damaging earthquakes over the last fifty years have shaken the most heavily developed areas and critical infrastructure of the island to levels corresponding to at least Modified Mercalli Intensity VII. Hawai`i's most recent damaging earthquakes, the M6.7 Kiholo Bay and M6.0 Mahukona earthquakes, struck within seven minutes of one another off of the northwest coast of the island in October 2006. These earthquakes resulted in damage at all thirteen of the telescopes near the summit of Mauna Kea that led to gaps in telescope operations ranging from days up to four months. With the experiences of 2006 and Hawai`i's history of damaging earthquakes, we have begun a study to explore the feasibility of implementing earthquake early warning systems to provide advanced warnings to the Thirty Meter Telescope of imminent strong ground shaking from future local earthquakes. One of the major challenges for earthquake early warning in Hawai`i is the variety of earthquake sources, from shallow crustal faults to deeper mantle sources, including the basal decollement separating the volcanic pile from the ancient oceanic crust. Infrastructure on the Island of Hawai`i may only be tens of kilometers from these sources, allowing warning times of only 20 s or less. We assess the capability of the current seismic network to produce alerts for major historic earthquakes, and we will provide recommendations for upgrades to improve performance.
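The very short warning times mentioned above follow from simple travel-time arithmetic: the available warning is roughly the S-wave travel time to the site minus the time needed to detect the P wave at nearby stations and issue an alert. The sketch below makes that estimate with assumed velocities, depths and latencies; none of the numbers come from the Hawai`i network.

```python
import numpy as np

# Back-of-the-envelope earthquake early warning time: S arrival at the site
# minus (P travel time to the nearest detecting station + processing latency).
# All parameters are illustrative assumptions.

def warning_time_s(epi_dist_km, depth_km, detect_dist_km=20.0,
                   vp=6.5, vs=3.7, latency_s=3.0):
    """Seconds of warning at a site; negative means no useful warning."""
    hyp_site = np.hypot(epi_dist_km, depth_km)   # hypocentral distance to the site
    hyp_net = np.hypot(detect_dist_km, depth_km) # to the nearest detecting station
    s_arrival = hyp_site / vs                    # S wave reaches the site
    alert_time = hyp_net / vp + latency_s        # P detection + processing/alerting
    return s_arrival - alert_time

for d in [30, 60, 100]:
    print(f"epicentral distance {d:>3d} km: ~{warning_time_s(d, depth_km=10):.0f} s warning")
```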
New Methodologies Applied to Seismic Hazard Assessment in Southern Calabria (Italy)
NASA Astrophysics Data System (ADS)
Console, R.; Chiappini, M.; Speranza, F.; Carluccio, R.; Greco, M.
2016-12-01
Although it is generally recognized that the M7+ 1783 and 1908 Calabria earthquakes were caused by normal faults rupturing the upper crust of the southern Calabria-Peloritani area, no consensus exists on seismogenic source location and orientation. A recent high-resolution, low-altitude aeromagnetic survey of southern Calabria and the Messina straits suggested that the sources of the 1783 and 1908 earthquakes are en echelon faults belonging to the same NW-dipping normal fault system straddling the whole of southern Calabria. The application of a newly developed physics-based earthquake simulator to the active fault system, modeled using the data obtained from the aeromagnetic survey and other recent geological studies, has allowed the production of catalogs lasting 100,000 years and containing more than 25,000 events of magnitude ≥ 4.0. The algorithm on which this simulator is based is constrained by several physical elements: (a) an average slip rate due to tectonic loading for every single segment in the investigated fault system, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small-magnitude events. Events nucleated in one segment are allowed to expand into neighboring segments if they are separated by less than a given maximum distance. The application of our simulation algorithm to the Calabria region reproduces typical features of the seismicity in time, space and magnitude, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation. These maps can be compared with the existing hazard maps that are presently used in the national seismic building regulations.
The effects of core-reflected waves on finite fault inversions with teleseismic body wave data
NASA Astrophysics Data System (ADS)
Qian, Yunyi; Ni, Sidao; Wei, Shengji; Almeida, Rafael; Zhang, Han
2017-11-01
Teleseismic body waves are essential for imaging rupture processes of large earthquakes. Earthquake source parameters are usually characterized by waveform analyses such as finite fault inversions using only turning (direct) P and SH waves without considering the reflected phases from the core-mantle boundary (CMB). However, core-reflected waves such as ScS usually have amplitudes comparable to direct S waves due to the total reflection from the CMB and might interfere with the S waves used for inversion, especially at large epicentral distances for long duration earthquakes. In order to understand how core-reflected waves affect teleseismic body wave inversion results, we develop a procedure named Multitel3 to compute Green's functions that contain turning waves (direct P, pP, sP, direct S, sS and reverberations in the crust) and core-reflected waves (PcP, pPcP, sPcP, ScS, sScS and associated reflected phases from the CMB). This ray-based method can efficiently generate synthetic seismograms for turning and core-reflected waves independently, with the flexibility to take into account the 3-D Earth structure effect on the timing between these phases. The performance of this approach is assessed through a series of numerical inversion tests on synthetic waveforms of the 2008 Mw7.9 Wenchuan earthquake and the 2015 Mw7.8 Nepal earthquake. We also compare this improved method with the turning-wave only inversions and explore the stability of the new procedure when there are uncertainties in a priori information (such as fault geometry and epicentre location) or arrival time of core-reflected phases. Finally, a finite fault inversion of the 2005 Mw8.7 Nias-Simeulue earthquake is carried out using the improved Green's functions. Using enhanced Green's functions yields better inversion results as expected. While the finite source inversion with conventional P and SH waves is able to recover large-scale characteristics of the earthquake source, by adding PcP and ScS phases, the inverted slip model and moment rate function better match previous results incorporating field observations, geodetic and seismic data.
Monitoring Seismic Velocity Change to Explore the Earthquake Seismogenic Structures
NASA Astrophysics Data System (ADS)
Liao, C. F.; Wen, S.; Chen, C.
2017-12-01
Studying spatial-temporal variations of subsurface velocity structures remains challenging, but it can provide important information not only on the geometry of a fault, but also on rheological changes induced by a strong earthquake. In 1999, the disastrous Chi-Chi earthquake (Mw 7.6; Chi-Chi EQ) occurred in central Taiwan and had a great impact on Taiwan's society. Therefore, the major objective of this research is to investigate whether rheological changes on the fault can be associated with the seismogenic process before a strong earthquake. In addition, whether the subsurface velocity structure returned to its steady state after the Chi-Chi EQ is another issue addressed in this study. For these purposes, we have applied a 3D tomographic technique to obtain P- and S-wave velocity structures in central Taiwan using travel time data provided by the Central Weather Bureau (CWB). One major advantage of this method is that we can include out-of-network data to improve the resolution of velocity structures at greater depths in our study area. The results show that the temporal variations of Vp are less significant than those of Vs (or the Vp/Vs ratio), and Vp is not noticeably perturbed before and after the occurrence of the Chi-Chi EQ. However, the Vs (or Vp/Vs ratio) structure in the source area shows significant spatial-temporal differences before and after the mainshock. Before the mainshock, Vs began to decrease (and the Vp/Vs ratio to increase) on the hanging wall of the Chelungpu fault, which may have been induced by an increasing density of microcracks and fluids. But in the vicinity of the Chi-Chi earthquake's source area, Vs was increasing (and the Vp/Vs ratio decreasing). This phenomenon may be owing to the closing of cracks or the migration of fluids. Due to the different physical characteristics around the source area, a strong earthquake may be easily nucleated at the junction zone. Our findings suggest that continuously monitoring the Vp and Vs (or Vp/Vs ratio) structures in zones of high seismic potential is an important task which can help reduce the seismic hazard from a future large earthquake.
Near-Source Shaking and Dynamic Rupture in Plastic Media
NASA Astrophysics Data System (ADS)
Gabriel, A.; Mai, P. M.; Dalguer, L. A.; Ampuero, J. P.
2012-12-01
Recent well-recorded earthquakes show a high degree of complexity at the source level that severely affects the resulting ground motion in near- and far-field seismic data. In our study, we focus on investigating source-dominated near-field ground motion features from numerical dynamic rupture simulations in an elasto-visco-plastic bulk. Our aim is to contribute to a more direct connection from theoretical and computational results to field and seismological observations. Previous work showed that a diversity of rupture styles emerges from simulations on faults governed by velocity- and state-dependent friction with rapid velocity weakening at high slip rate. For instance, growing pulses lead to re-activation of slip due to gradual stress build-up near the hypocenter, as inferred in some source studies of the 2011 Tohoku-Oki earthquake. Moreover, off-fault energy dissipation implied physical limits on extreme ground motion by limiting peak slip rate and rupture velocity. We investigate characteristic features in near-field strong ground motion generated by dynamic in-plane rupture simulations. We present effects of plasticity on source process signatures, off-fault damage patterns and ground shaking. Independent of rupture style, asymmetric damage patterns across the fault are produced that contribute to the total seismic moment, and dominantly so at high angles between the fault and the maximum principal background stress. The off-fault plastic strain fields induced by transitions between rupture styles reveal characteristic signatures of the mechanical source processes during the transition. Comparing different rupture styles in elastic and elasto-visco-plastic media to identify signatures of off-fault plasticity, we find varying degrees of alteration of near-field radiation due to plastic energy dissipation. Subshear pulses suffer more peak particle velocity reduction due to plasticity than cracks; supershear ruptures are affected even more. The occurrence of multiple rupture fronts affects the seismic potency release rate, amplitude spectra, peak particle velocity distributions and near-field seismograms. Our simulations enable us to trace features of source processes in synthetic seismograms, for example a re-activation of slip. Such physical models may provide starting points for future investigations of field properties of earthquake source mechanisms and natural fault conditions. In the long term, our findings may be helpful for seismic hazard analysis and the improvement of seismic source models.
NASA Astrophysics Data System (ADS)
Stevens, Victoria
2017-04-01
The 2015 Gorkha-Nepal M7.8 earthquake (hereafter the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow for good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal that are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found and the maximum magnitude earthquake is estimated, using a combination of earthquake catalogs, moment conservation principles and similarities to other tectonic regions. The MHT and Karakoram fault are described as fault sources, whereas four other sources - normal faulting in N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2 and 10% chance of exceedance in 50 years is found for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare ground shaking predicted from various fault geometries suggested for the Gorkha earthquake with each other, and with a simple model of a flat fault. We also show the results of incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture. No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare the results of standard GMPEs, used together with an earthquake scenario representing the Gorkha earthquake, with the actual data from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the importance of basin, topographic and directivity effects, and of the location of high-frequency sources, in influencing ground motion. Future study aims at incorporating the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.
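The core of the hazard-curve calculation behind such a PSHA is an integration over magnitude (and, in full models, over distance and multiple sources) of event rates times ground-motion exceedance probabilities. The sketch below shows that integration for a single source collapsed to one distance; the attenuation coefficients are schematic placeholders rather than a published GMPE, and OpenQuake is not used here.

```python
import numpy as np
from scipy.stats import norm

# Toy hazard curve for one source at a single distance: annual exceedance rate
# of PGA = sum over magnitude bins of (rate of events) * P(exceed | event).
# The a, b values, distance and attenuation coefficients are placeholders.

def hazard_curve(pga_grid_g, a, b, m_min, m_max, dist_km, sigma_ln=0.6, dm=0.1):
    mags = np.arange(m_min + dm / 2, m_max, dm)
    cum = lambda m: 10.0 ** (a - b * m)
    rates = cum(mags - dm / 2) - cum(mags + dm / 2)        # annual rate per bin
    # schematic median ground motion: ln(PGA[g]) = -4.0 + 1.0*M - 1.3*ln(R + 10)
    ln_med = -4.0 + 1.0 * mags - 1.3 * np.log(dist_km + 10.0)
    annual = np.zeros_like(pga_grid_g)
    for rate, mu in zip(rates, ln_med):
        annual += rate * (1.0 - norm.cdf(np.log(pga_grid_g), loc=mu, scale=sigma_ln))
    return annual                                          # annual exceedance rate

pga = np.array([0.05, 0.1, 0.2, 0.4])
for g, lam in zip(pga, hazard_curve(pga, a=4.0, b=1.0, m_min=5.0, m_max=8.0, dist_km=20.0)):
    print(f"PGA {g:.2f} g: return period ~ {1.0 / lam:.0f} yr")
```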
New seismic sources parameterization in El Salvador. Implications to seismic hazard.
NASA Astrophysics Data System (ADS)
Alonso-Henar, Jorge; Staller, Alejandra; Jesús Martínez-Díaz, José; Benito, Belén; Álvarez-Gómez, José Antonio; Canora, Carolina
2014-05-01
El Salvador is located at the Pacific active margin of Central America, where the subduction of the Cocos Plate under the Caribbean Plate at a rate of ~80 mm/yr is the main seismic source, although the seismic sources located in the Central American Volcanic Arc have been responsible for some of the most damaging earthquakes in El Salvador. The El Salvador Fault Zone (ESFZ) is the main geological structure in El Salvador and accommodates 14 mm/yr of horizontal displacement between the Caribbean Plate and the forearc sliver. The ESFZ is a right-lateral strike-slip fault zone c. 150 km long and 20 km wide. This shear band distributes the deformation among strike-slip faults trending N90º-100ºE and secondary normal faults trending N120º-N170º. The ESFZ is relieved westward by the Jalpatagua Fault and becomes less clear eastward, disappearing at the Golfo de Fonseca. Five sections have been proposed for the whole fault zone. These fault sections are (from west to east): ESFZ Western Section, San Vicente Section, Lempa Section, Berlin Section and San Miguel Section. Paleoseismic studies carried out in the Berlin and San Vicente Segments reveal an important amount of Quaternary deformation and paleoearthquakes up to Mw 7.6. In this study we present 45 capable seismic sources in El Salvador and their preliminary slip rates from geological and GPS data. The detailed GPS results are presented by Staller et al., 2014 in a complementary communication. The calculated preliminary slip rates range from 0.5 to 8 mm/yr for individualized faults within the ESFZ. We calculated maximum magnitudes from the mapped lengths and paleoseismic observations. We propose different earthquake scenarios, including the potential combined rupture of different fault sections of the ESFZ, resulting in maximum earthquake magnitudes of Mw 7.6. We used deterministic models to calculate the acceleration distribution related to the maximum earthquakes of the different proposed scenarios. The spatial distributions of seismic accelerations are compared and calibrated using the February 13, 2001 earthquake as control earthquake. To explore the sources of historical earthquakes we compare synthetic acceleration maps with the historical earthquakes of March 6, 1719 and June 8, 1917.
Do weak global stresses synchronize earthquakes?
NASA Astrophysics Data System (ADS)
Bendick, R.; Bilham, R.
2017-08-01
Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
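The integrate-and-fire analogy invoked above can be made concrete with a toy simulation of weakly pulse-coupled oscillators; this is only an illustrative sketch of the generic synchronization behavior (all parameter values are arbitrary) and not a reproduction of the authors' topological or time-series analysis.

```python
import numpy as np

def pulse_coupled_oscillators(n=5, coupling=0.02, t_max=100.0, dt=1e-3, seed=1):
    """Toy pulse-coupled integrate-and-fire oscillators: each state loads
    linearly at its own (similar but not identical) rate, fires on reaching
    the threshold of 1, resets to zero, and nudges every other state upward
    by a weak coupling increment. Returns each oscillator's firing times."""
    rng = np.random.default_rng(seed)
    rates = 1.0 + 0.05 * rng.standard_normal(n)   # similar, not identical, periods
    state = rng.uniform(0.0, 1.0, n)              # random initial phases
    firing_times = [[] for _ in range(n)]
    for k in range(int(t_max / dt)):
        state += rates * dt
        fired = np.flatnonzero(state >= 1.0)
        if fired.size:
            for i in fired:
                firing_times[i].append(k * dt)
            state[fired] = 0.0
            others = np.setdiff1d(np.arange(n), fired)
            state[others] = np.minimum(state[others] + coupling, 1.0)
    return firing_times

times = pulse_coupled_oscillators()
print([round(t[-1], 2) for t in times])   # final firing times tend to cluster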
Source Mechanisms of Destructive Tsunamigenic Earthquakes occurred along the Major Subduction Zones
NASA Astrophysics Data System (ADS)
Yolsal-Çevikbilen, Seda; Taymaz, Tuncay; Ulutaş, Ergin
2016-04-01
Subduction zones, where an oceanic plate is subducted down into the mantle by tectonic forces, are potential tsunami locations. Many big, destructive and tsunamigenic earthquakes (Mw > 7.5) and high-amplitude tsunami waves are observed along the major subduction zones, particularly near Indonesia, Japan, the Kuril and Aleutian Islands, the Gulf of Alaska, and South America. Not all earthquakes are tsunamigenic; in order to generate a tsunami, the earthquake must occur under or near the ocean, be large, and create significant vertical movements of the seafloor. It is also known that tsunamigenic earthquakes release their energy over a couple of minutes, have long source time functions and slow, smooth ruptures. In this study, we performed point-source inversions by using teleseismic long-period P- and SH- and broad-band P-waveforms recorded by the Federation of Digital Seismograph Networks (FDSN) and the Global Digital Seismograph Network (GDSN) stations. We obtained source mechanism parameters and finite-fault slip distributions of ten recent destructive earthquakes (Mw ≥ 7.5) by comparing the shapes and amplitudes of long-period P- and SH-waveforms, recorded in the distance range of 30° - 90°, with synthetic waveforms. We further obtained finite-fault rupture histories of those earthquakes to determine the faulting area (fault length and width), maximum displacement, rupture duration and stress drop. We applied a new back-projection method that uses teleseismic P-waveforms to integrate the direct P-phase with reflected phases from structural discontinuities near the source, and customized it to estimate the spatio-temporal distribution of the seismic energy release of earthquakes. Inversion results show that recent tsunamigenic earthquakes have dominantly thrust faulting mechanisms with small strike-slip components. Their focal depths are also relatively shallow (h < 40 km). As an example, the September 16, 2015 Illapel (Chile) earthquake (Mw: 8.3; h: 26 km) reflects the major characteristics of the Peru-Chile subduction zone between the Nazca and South America Plates. The size, location, depth and focal mechanism of this earthquake are consistent with its occurrence on the megathrust interface in this region. This study is supported by the Scientific and Technological Research Council of Turkey (TUBITAK, Project No: CAYDAG - 114Y066).
NASA Astrophysics Data System (ADS)
Posada, G.; Trujillo, J. C., Sr.; Hoyos, C.; Monsalve, G.
2017-12-01
The tectonic setting of Colombia is determined by the interaction of the Nazca, Caribbean and South American plates, together with the Panama-Choco block collision, which makes it a seismically active region. Regional seismic monitoring is carried out by the National Seismological Network of Colombia and the Accelerometer National Network of Colombia. Both networks calculate locations, magnitudes, depths, accelerations and other seismic parameters. The Medellín - Aburra Valley is located in the northern segment of the Central Cordillera of Colombia and, according to the Colombian technical seismic norm (NSR-10), is a region of intermediate hazard because of the proximity of seismic sources to the Valley. Seismic monitoring in the Aburra Valley began in 1996 with an accelerometer network which consisted of 38 instruments. Currently, the network consists of 26 stations and is run by the Early Warning System of Medellin and Aburra Valley (SIATA). Technical advances have allowed real-time communication for about a year, currently with 10 stations; post-earthquake data are processed operationally in near-real-time, obtaining quick results in terms of location, acceleration, response spectrum and Fourier analysis; this information is displayed at the SIATA web site. The strong motion database is composed of 280 earthquakes; this information is the basis for the estimation of seismic hazard and risk for the region. A basic statistical analysis of the main information was carried out, including the total recorded events per station, natural frequency, maximum accelerations, depths and magnitudes, which allowed us to identify the main seismic sources and some seismic site parameters. With the idea of more complete seismic monitoring, and in order to identify seismic sources beneath the Valley, we are in the process of installing 10 low-cost seismometers for micro-earthquake monitoring. There is no historical record of earthquakes with a magnitude greater than 3.5 beneath the Aburra Valley, and the neotectonic evidence is limited, so it is expected that this network will help to characterize the seismic hazard.
Strong Ground Motion Generation during the 2011 Tohoku-Oki Earthquake
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2011-12-01
Strong ground motions during the 2011 Tohoku-Oki earthquake (Mw 9.0) were densely observed by the strong motion observation networks all over Japan. Looking at the acceleration and velocity waveforms observed at strong motion stations in northeast Japan along the source region, those ground motions are characterized by several wave packets with durations of about twenty seconds. In particular, two wave packets separated by about fifty seconds can be found on the records in the northern part of the damaged area, whereas only one significant wave packet can be recognized on the records in the southern part of the damaged area. The record section shows four isolated wave packets propagating from different locations to the north and south, and it gives us a hint of the strong motion generation process on the source fault, which is related to the heterogeneous rupture process on the scale of tens of kilometers. To model this, we assume that each isolated wave packet is contributed by a corresponding strong motion generation area (SMGA), a source patch whose slip velocity is larger than that of the surrounding area (Miyake et al., 2003). That is, the source model of the 2011 Tohoku-Oki earthquake consists of four SMGAs. The SMGA source model has succeeded in reproducing broadband strong ground motions for past subduction-zone events (e.g., Suzuki and Iwata, 2007). The target frequency range is set to 0.1-10 Hz in this study, as this range is significantly related to seismic damage to general man-made structures. First, we identified the rupture starting points of each SMGA by picking the onset of individual packets. The source fault plane is set following the GCMT solution. The first two SMGAs were located approximately 70 km and 30 km west of the hypocenter. The third and fourth SMGAs were located approximately 160 km and 230 km southwest of the hypocenter. Then, the model parameters (size, rise time, stress drop, rupture velocity, rupture propagation pattern) of these four SMGAs were determined by waveform modeling using the empirical Green's function method (Irikura, 1986). The first and second SMGAs are located close to each other and partially overlap, though the difference in rupture time between them is more than 40 s. Those two SMGAs appear to be included in the source region of the past repeating Miyagi-Oki subduction-zone event of 1936. The third and fourth SMGAs appear to be located in the source region of the Fukushima-Oki events of 1938. Each of those regions has been expected to host the next major earthquakes in the long-term evaluation. The obtained source model explains the acceleration, velocity, and displacement time histories in the target frequency range at most stations well. All four SMGAs apparently lie outside the large slip area along the trench east of the hypocenter, which was estimated by seismic, geodetic, and tsunami inversion analyses, and this large slip zone near the trench does not contribute much to strong motion. At this point, we can conclude that the 2011 Tohoku-Oki earthquake may have been a complex event rupturing multiple preexisting asperities in terms of strong ground motion generation. This should be helpful to validate and improve the applicability of the strong motion prediction recipe for great subduction-zone earthquakes.
Yellowstone volcano-tectonic microseismic cycles constrain models of migrating volcanic fluids
NASA Astrophysics Data System (ADS)
Massin, F.; Farrell, J.; Smith, R. B.
2011-12-01
The objective of our research is to evaluate the source properties of extensive earthquake swarms in and around the 0.64 Myr Yellowstone caldera, Yellowstone National Park, which is also the locus of widespread hydrothermal activity and ground deformation. We use earthquake waveform data to investigate seismic wave multiplets that occur within discrete earthquake sequences. Waveform cross-correlation coefficients are computed from data acquired at six high-quality stations, and nearly identical earthquakes are merged into multiplets. Multiplets provide important indicators of the rupture process of the distinct seismogenic structures. Our multiplet database allowed evaluation of the seismic-source chronology from 1992 to 2010. We assess the evolution of micro-earthquake triggering by evaluating the evolution of earthquake rates and magnitudes. Some striking differences appear between two kinds of seismic swarms: 1) swarms with a high rate of repeating earthquakes of more than 200 events per day, and 2) swarms with a low rate of repeating earthquakes (less than 20 events per day). The 2010 Madison Plateau (western caldera) and the 2008-2009 Yellowstone Lake (eastern caldera) earthquake swarms are two examples representing, respectively, cascading relaxation of a uniform stress and a highly concentrated stress perturbation induced by a migrating material. The repeating-earthquake pattern methodology was then used to characterize the composition of the migrating material by modelling the migration time-space pattern with experimental thermo-physical simulations of solidification of a fluid-filled propagating dike. Comparison of our results with independent GPS deformation data suggests a most-likely model of rhyolitic-granitic magma intrusion along a vertical dike outlined by the pattern of earthquakes. The magma-hydrothermal mix was modeled with a temperature of 800°C-900°C and an average volumetric injection flux between 1.5 and 5 m3/s. Our interpretation is that the Yellowstone Lake swarm was caused by magma and hydrothermal fluids migrating laterally, at 1000 m per day, from ~12 km to 2 km depth, with a pattern of earthquake nucleation from south to north. The causative magmatic fluid came within a few km of, but did not reach, the Earth's surface because of its low density contrast with the host rock. We also used multiplets for precise earthquake relocation using the P- and S-wave three-dimensional velocity models established previously for Yellowstone. Most of the repeating earthquakes are located in the northwestern part of the caldera and in the Hebgen Lake fault system, west of the caldera, which appears as the most active multiplet generator in Yellowstone. We are also evaluating multiplets for earthquake focal mechanism determinations and magmatic source property studies. The anomalous multiplet-triggering zone around the Hebgen Lake fault system, for example, is also a research focus for multiplet stress simulation, and we will present results on how multiplets can be used to investigate the volcano-tectonic stress interactions between the pre-existing ~15 Myr Basin and Range normal faults and the superimposed effects of the 2 Myr Yellowstone volcanism on the pre-existing structures.
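Grouping nearly identical waveforms into multiplets, as described above, is typically done by thresholding normalized cross-correlation coefficients; the sketch below assumes a simple single-link grouping rule and an arbitrary threshold, purely for illustration and not as the study's actual workflow.

```python
import numpy as np

def xcorr_max(a, b):
    """Maximum of the (approximately) normalized cross-correlation between
    two equal-length traces; equals the Pearson coefficient at zero lag."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

def build_multiplets(traces, threshold=0.9):
    """Greedy single-link grouping: an event joins the first group containing
    a member whose waveform correlates with it above the threshold."""
    groups = []
    for i, tr in enumerate(traces):
        placed = False
        for g in groups:
            if any(xcorr_max(tr, traces[j]) >= threshold for j in g):
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])
    return groups
```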
Comparison of aftershock sequences between 1975 Haicheng earthquake and 1976 Tangshan earthquake
NASA Astrophysics Data System (ADS)
Liu, B.
2017-12-01
The 1975 ML 7.3 Haicheng earthquake and the 1976 ML 7.8 Tangshan earthquake occurred in the same tectonic unit. There are significant differences in the spatial-temporal distribution, number of aftershocks and time duration of the aftershock sequences that followed these two main shocks. Aftershocks can be triggered by the change in regional seismicity derived from the main shock, which is caused by the Coulomb stress perturbation. Based on the rate- and state-dependent friction law, we quantitatively estimated the possible aftershock time duration with a combination of seismicity data, and compared the results from different approaches. The results indicate that the aftershock time duration of the Tangshan main shock is several times that of the Haicheng main shock. This can be explained by the strong dependence of aftershock time duration on the earthquake nucleation history and on the normal stress and shear stress loading rate on the fault. In fact, the obvious difference in nucleation history between these two main shocks is the foreshocks: the 1975 Haicheng earthquake had clear and long foreshock activity, while the 1976 Tangshan earthquake did not have clear foreshocks. In that case, abundant foreshocks may indicate a long and active nucleation process that may have changed (weakened) the rocks in the source region, so the sequence should have a shorter aftershock duration because stress in weak rocks decays faster.
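In Dieterich's (1994) rate-and-state seismicity framework, which the authors invoke, the aftershock duration scales as t_a = A*sigma/tau_dot, i.e. the rate-state parameter A times the effective normal stress divided by the background shear stressing rate. The numbers below are illustrative placeholders, not the values estimated for Haicheng or Tangshan.

```python
# Dieterich (1994) rate-and-state aftershock duration: t_a = A * sigma / tau_dot.
# All numbers below are illustrative placeholders, not estimates from the study.
SECONDS_PER_YEAR = 3.15e7

A = 0.01                              # rate-state constitutive parameter (dimensionless)
sigma = 100e6                         # effective normal stress [Pa]
tau_dot = 0.01e6 / SECONDS_PER_YEAR   # background shear stressing rate [Pa/s] (0.01 MPa/yr)

t_a = A * sigma / tau_dot
print(f"aftershock duration ~ {t_a / SECONDS_PER_YEAR:.0f} years")
```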
NASA Astrophysics Data System (ADS)
Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.
2017-12-01
Kinematic source models are widely used for the simulation of earthquakes because of their simplicity and ease of application. On the other hand, dynamic source models are more complex but important tools that can help us to understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which is extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation, which suggests the maximum magnitude of a potential future earthquake in this area is probably about magnitude 8.7. We develop dynamic rupture models for the hazard estimation of the potential megathrust event based on kinematic rupture scenarios inverted from the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are also considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested using a trial-and-error method, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate ground shaking based on a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie results from the simulations to evaluate the differences in shaking across the island between source models. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physically based information on megathrust earthquake scenarios for emergency response agencies to take appropriate action before the really big one happens.
Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes
NASA Astrophysics Data System (ADS)
Yamada, M.; Mori, J. J.
2009-12-01
Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The information on the fault rupture location is a group of points where the stations are located. However, for practical purposes, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, the 2008 Iwate-Miyagi earthquake, and the 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
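The logistic discriminant quoted above translates directly into code. The units of Za and Hv are assumed here to follow the original study (peak vertical acceleration and peak horizontal velocity, e.g. in cm/s^2 and cm/s), and the example values are purely illustrative.

```python
import numpy as np

def near_source_probability(Za, Hv):
    """Probability that a station lies within 10 km of the fault rupture,
    following the logistic discriminant quoted in the abstract.
    Za: peak vertical acceleration, Hv: peak horizontal velocity
    (units assumed to be those of the original study, e.g. cm/s^2 and cm/s)."""
    f = 6.046 * np.log10(Za) + 7.885 * np.log10(Hv) - 27.091
    return 1.0 / (1.0 + np.exp(-f))

# example with a strong record (values purely illustrative)
print(near_source_probability(Za=500.0, Hv=30.0))   # ~0.7, likely near-source
```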
The 2008 earthquakes in the Bavarian Molasse Basin - possible relation to deep geothermics?
NASA Astrophysics Data System (ADS)
Kraft, T.; Wassermann, J.; Deichmann, N.; Stange, S.
2009-04-01
We discuss several microearthquakes of magnitude up to Ml=2.3 that occurred in the Bavarian Molasse Basin (ByM), south of Munich, Germany, in February and July 2008. The strongest event was felt by local residents. The Bavarian earthquake catalog, which dates back to the year 1000, lists a small number of isolated earthquakes in the western part of the ByM as well as a cluster of mining-induced earthquakes (Peißenberg 1962-1970, I0(MSK)=5.5). The eastern part of the ByM, including the wider surroundings of Munich, was so far considered aseismic. Due to the spatio-temporal clustering of the microearthquakes in February and July 2008, the University of Munich (LMU) and the Swiss Seismological Service installed a temporary network of seismological stations south of Munich to investigate the newly arising seismicity. First analysis of the recorded data indicates shallow source depths (~5 km) for the July events. This result is supported by the fact that one of these very small earthquakes was felt by local residents. The earthquake hypocenters are located close to a number of deep geothermal wells of 3-4.5 km depth that were either in production or running productivity tests in late 2007 and early 2008. Therefore, the 2008 seismicity might represent a case of induced seismicity related to the injection or withdrawal of water from the hydrothermal aquifer. Due to the lack of high-quality recordings from a denser seismic monitoring network in the source area, it is not possible to resolve details of the processes behind the 2008 seismicity. Therefore, a definite answer to the question of whether or not the earthquakes are related to the deep geothermal projects cannot be given at present. However, a number of recent well-studied cases have shown that earthquakes can also happen at depths much shallower than 5 km, and that small changes of the hydrological conditions at depth are sufficient to trigger seismicity. Therefore, a detailed understanding of the causative processes behind the 2008 seismicity in the ByM is of paramount importance to hazard assessment and mitigation associated with similar geothermal projects underway elsewhere. A close cooperation of operators and developers of geothermal projects with earthquake scientists has proved to be very beneficial in the development of the Hot-Dry-Rock technique and is also highly desirable in developing strategies for the safe geothermal use of deep hydrothermal aquifers.
Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.
2017-09-01
Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω^-2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of dynamically consistent Yoffe-type source-time functions, we show that the seismic wavefield of the approximated kinematic ruptures reproduces well the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By also including the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.
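For reference, the ω^-2 decay mentioned above corresponds to the classic omega-squared displacement spectrum, flat below the corner frequency and falling off as f^-2 above it; a minimal illustrative sketch with arbitrary parameter values follows.

```python
import numpy as np

def omega_squared_spectrum(f, omega0=1.0, fc=0.5):
    """Omega-squared displacement amplitude spectrum: flat at low frequency,
    decaying as f^-2 above the corner frequency fc (arbitrary units)."""
    return omega0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-2, 2, 200)                 # frequency band [Hz]
u = omega_squared_spectrum(f)
# the high-frequency log-log slope should be close to -2
print(np.log10(u[-1] / u[-10]) / np.log10(f[-1] / f[-10]))
```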
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle the step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
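The Brownian passage-time density described above, parameterized by the mean recurrence time and the coefficient of variation (aperiodicity), can be written down and its hazard rate evaluated numerically; the following is a minimal sketch with arbitrary parameter values that also checks the properties listed in the abstract (zero hazard at t = 0, approach to a quasi-stationary level).

```python
import numpy as np

def bpt_pdf(t, mu=1.0, alpha=0.5):
    """Brownian passage-time (inverse Gaussian) density with mean recurrence
    time mu and coefficient of variation (aperiodicity) alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu) ** 2 / (2.0 * mu * alpha**2 * t))

t = np.linspace(1e-6, 5.0, 100001)
pdf = bpt_pdf(t)
# numerical CDF via cumulative trapezoid, then hazard rate h(t) = f(t) / (1 - F(t))
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(t))))
hazard = pdf / (1.0 - cdf)

print(cdf[-1])                                   # ~1: density integrates to one
print(hazard[0])                                 # ~0: no immediate re-rupture
print(hazard[t.searchsorted(3.0)], hazard[-1])   # both near the quasi-stationary level
```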
Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology
NASA Astrophysics Data System (ADS)
Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.
2017-12-01
Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle boundary using reflected precursors. Results are however often deteriorated due to the sparse station coverage and the interference of faint back-scattered phases with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We propose to construct exploration-type trace gathers, systematically investigate the potential to improve the quality and regularity of realistic synthetic earthquake data and present attempts at separating transmitted and back-scattered wave fields for the improved imaging of Earth's large-scale discontinuities.
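A bare-bones version of the delay-and-sum (beam-forming) step referred to above grid-searches the plane-wave slowness (local slope) that maximizes stack energy across a small array; this is only an illustrative sketch of the classical operator, not the extended slope-and-curvature operators proposed by the authors.

```python
import numpy as np

def delay_and_sum(traces, coords, dt, slowness_grid):
    """Grid-search the horizontal slowness vector that maximizes the energy of
    the delay-and-sum stack over a small receiver array.
    traces: (n_sta, n_samp) array, coords: (n_sta, 2) station offsets [km],
    dt: sample interval [s], slowness_grid: iterable of (sx, sy) in s/km."""
    n_sta, n_samp = traces.shape
    best = (None, -np.inf)
    for sx, sy in slowness_grid:
        delays = coords @ np.array([sx, sy])          # plane-wave delays [s]
        shifts = np.round(delays / dt).astype(int)
        stack = np.zeros(n_samp)
        for tr, s in zip(traces, shifts):
            stack += np.roll(tr, -s)                  # advance each trace by its delay, then sum
        energy = np.sum(stack**2) / n_sta**2
        if energy > best[1]:
            best = ((sx, sy), energy)
    return best                                       # (best slowness vector, stack energy)
```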
NASA Astrophysics Data System (ADS)
Ruppert, N. A.; Zabelina, I.; Freymueller, J. T.
2013-12-01
The Saint Elias Mountains in southern Alaska are a manifestation of ongoing tectonic processes that include the collision of the Yakutat block with, and the subduction of the Yakutat block and Pacific plate beneath, the North American plate. The interaction of these tectonic blocks and plates is complex and not well understood. In 2005 and 2006, a network of 22 broadband seismic sites was installed in the region as part of the SainT Elias TEctonics and Erosion Project (STEEP), a five-year multi-disciplinary study that addressed the evolution of the highest coastal mountain range on Earth. The high-quality seismic data provide unique insights into earthquake occurrence and the velocity structure of the region. Local earthquake data recorded between 2005 and 2010 became the foundation for a detailed study of seismotectonic features and crustal velocities. The highest concentration of seismicity follows the Chugach-St. Elias fault, a major on-land tectonic structure in the region. This fault is also delineated in tomographic images as a distinct contrast between lower velocities to the south and higher velocities to the north. The low-velocity region corresponds to the rapidly uplifted and exhumed sediments on the south side of the range. Earthquake source parameters indicate a high degree of compression and underthrusting along the coastal area, consistent with multiple thrust structures mapped by geological studies in the region. Tomographic inversion reveals velocity anomalies that correlate with sedimentary basins, volcanic features and the subducting Yakutat block. We will present precise earthquake locations and source parameters recorded with the STEEP and regional seismic networks along with the results of P- and S-wave tomographic inversion.
New ideas about the physics of earthquakes
NASA Astrophysics Data System (ADS)
Rundle, John B.; Klein, William
1995-07-01
It may be no exaggeration to claim that this most recent quadrennium has seen more controversy and thus more progress in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994), and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging that is focused around the development and analysis of large-scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.
Detection of postseismic fault-zone collapse following the Landers earthquake
NASA Astrophysics Data System (ADS)
Massonnet, Didier; Thatcher, Wayne; Vadon, Hélèna
1996-08-01
Stress changes caused by fault movement in an earthquake induce transient aseismic crustal movements in the earthquake source region that continue for months to decades following large events1-4. These motions reflect aseismic adjustments of the fault zone and/or bulk deformation of the surroundings in response to applied stresses2,5-7, and supply information regarding the inelastic behaviour of the Earth's crust. These processes are imperfectly understood because it is difficult to infer what occurs at depth using only surface measurements2, which are in general poorly sampled. Here we push satellite radar interferometry to near its typical artefact level, to obtain a map of the postseismic deformation field in the three years following the 28 June 1992 Landers, California earthquake. From the map, we deduce two distinct types of deformation: afterslip at depth on the fault that ruptured in the earthquake, and shortening normal to the fault zone. The latter movement may reflect the closure of dilatant cracks and fluid expulsion from a transiently over-pressured fault zone6-8.
NASA Astrophysics Data System (ADS)
Wen, Strong; Chang, Yi-Zen; Yeh, Yu-Lien; Wen, Yi-Ying
2017-04-01
Due to its complicated geomorphology and geological conditions, southwest (SW) Taiwan suffers from various natural disasters, such as landslides, mud flows and especially the threat of strong earthquakes resulting from the convergence between the Eurasian and the Philippine Sea plates. Several disastrous earthquakes have occurred in this area and have often caused serious damage. Therefore, it is fundamentally important to understand the correlation between seismic activity and seismogenic structures in SW Taiwan. Previous studies have indicated that, before the failure of rock strength, the behavior of micro-earthquakes can provide essential clues to help investigate the process of rock deformation. Thus, monitoring the activity of micro-earthquakes plays an important role in studying fault rupture or crustal deformation before the occurrence of a large earthquake. Because micro-earthquake activity can last for years, this phenomenon can be used to indicate changes of physical properties in the crust, such as crustal stress changes or fluid migration. The main purpose of this research is to perform a nonlinear waveform inversion to investigate source parameters of micro-earthquakes, including the non-double-couple components beyond pure shear rupture that are usually associated with complex morphology as well as tectonic fault systems. We applied a nonlinear waveform procedure to investigate the local stress state and source parameters of micro-earthquakes that occurred in SW Taiwan. Previous studies have shown that microseismic fracture behavior is controlled by the non-double-couple components, which can lead to crack generation and fluid migration, resulting in changes of rock volume and partial compensation. Our results not only give a better understanding of the seismogenic structures in SW Taiwan, but also allow us to detect variations of physical parameters caused by cracks propagating in the strata. Thus, the derived source parameters can serve as detailed physical indicators (such as fluid migration, fault geometry and the pressure at the leading edge of the rupture) to investigate the characteristics of seismogenic structures more precisely. In addition, the regional stress field obtained in this study is also used to examine the tectonic models previously proposed for SW Taiwan, which will help to properly assess seismic hazard for major engineering construction projects in the urban area.
Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA
NASA Astrophysics Data System (ADS)
Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.
2017-12-01
Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most naturally active seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M≥6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply the neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has recently been targeted by the Nuclear Regulatory Commission for a seismic hazard reevaluation. First, we perform a parametric test on some seismic source characteristics (i.e. distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant operator's documentation) with topography, thus looking for site amplification and different possible rupture processes at the source. WBNPP features a safe shutdown earthquake (SSE) design with a PGA of 0.18 g and maximum spectral acceleration (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although for most of the considered scenarios the PGA is relatively low, SSE values can be reached and exceeded for the most conservative scenario earthquakes.
NASA Astrophysics Data System (ADS)
Murotani, S.; Satake, K.
2017-12-01
Off the Fukushima region, Mjma 7.4 (event A) and 6.9 (event B) earthquakes occurred on November 6, 1938, following the thrust-fault-type earthquakes of Mjma 7.5 and 7.3 on the previous day. These earthquakes were estimated to be normal-fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to have occurred off Fukushima since 1938. Except for the two 1938 events, normal-fault earthquakes did not occur until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal-fault earthquakes occurring off the Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014 and 2016 events because there were only a few observations at the same station. The teleseismic body wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 x 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, and it is compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A estimated by Abe (1977) and 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B are different and that more detailed examination is needed. Normal-fault earthquakes appear to occur following the occurrence of M 7-9 class thrust-type earthquakes at the plate boundary off the Fukushima region.
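The quoted Mw 6.8 is consistent with the quoted seismic moment under the standard Hanks & Kanamori (1979) relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m; a one-line check:

```python
import math

def moment_magnitude(m0_nm):
    """Hanks & Kanamori (1979) moment magnitude, with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(round(moment_magnitude(2.2e19), 1))   # -> 6.8, matching the quoted Mw
```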
NASA Astrophysics Data System (ADS)
Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni
2015-04-01
If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km[1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer non-physical arrivals. Finally we reconstruct earthquake seismograms at sensors that were previously active but were subsequently removed before the earthquakes occurred; thus we create virtual earthquake seismograms at those sensors, truly retrospectively. Such SRI seismograms can be used to create a catalogue of new, virtual earthquake seismograms that are available to complement real earthquake data in future earthquake seismology studies. [1]E. Entwistle, Curtis, A., Galetti, E., Baptie, B., Meles, G., Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales, JGR, in press.
Slip reactivation during the 2011 Tohoku earthquake: Dynamic rupture and ground motion simulations
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.
2013-12-01
The 2011 Mw 9 Tohoku earthquake generated such a vast amount of geophysical data that it allows studying, with unprecedented resolution, the spatial-temporal evolution of the rupture process of a megathrust event. Joint source inversions of teleseismic, near-source strong motion and coseismic geodetic data, e.g. [Lee et al., 2011], reveal evidence of a slip reactivation process in areas of very large slip. Slip snapshots of this source model show that after about 40 seconds the big patch above the hypocenter experienced an additional push of slip (reactivation) towards the trench. These two episodes of slip exhibited by source inversions can create two well-distinguished waveform envelopes in the ground motion pattern. In fact, seismograms of the Japanese KiK-net network contain this pattern. For instance, a seismic station around Miyagi (MYGH10) shows two main wavefronts separated by 40 seconds. A possible physical mechanism to explain the slip reactivation could be a thermal pressurization process occurring in the fault zone. Indeed, Kanamori & Heaton (2000) proposed that frictional melting and fluid pressurization can play a key role in the rupture dynamics of giant earthquakes. If fluid exists in a fault zone, an increase of temperature can raise the pore pressure enough to significantly reduce the frictional strength. Therefore, during a large earthquake the areas of big slip undergoing strong thermal pressurization may experience a second drop of the frictional strength after reaching a certain value of slip. Following this principle, we adopt a slip-weakening friction law and prescribe a certain maximum slip after which the friction coefficient linearly drops down again. This friction law has been implemented in the latest unstructured spectral element code SPECFEM3D, Peter et al. (2012). The non-planar subduction interface has been taken into account, and a big asperity patch has been placed on it inside areas of big slip (>50 m) close to the trench. Within the first 2 km below the trench, a negative stress drop has been imposed in order to represent the energy absorption zone that attenuates high-frequency radiation at the shallow part of the subduction zone. At the down-dip end, where high-frequency radiation bursts have been detected with back-projection techniques, e.g. [Meng et al., 2011; Ishii, 2011], small asperities have been considered in our dynamic rupture model. Finally, a comparison of static geodetic free-surface displacements and synthetics has been made to obtain our best model. We additionally compare seismograms with the aim of representing the main features of the strong ground motion recorded from this earthquake. Moreover, the spatial-temporal rupture evolution detected by back-projection at the down-dip end is in good agreement with the rupture evolution of our dynamic model.
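A minimal sketch of the modified friction law described above: linear slip-weakening to a dynamic level over Dc, followed by a second linear strength drop once slip exceeds a prescribed reactivation slip. All parameter values are illustrative placeholders, not those used in the SPECFEM3D simulations.

```python
import numpy as np

def friction_with_reactivation(slip, mu_s=0.6, mu_d=0.45, d_c=1.0,
                               slip_reactivate=20.0, mu_d2=0.3, d_c2=5.0):
    """Linear slip-weakening friction with a second strength drop: the friction
    coefficient falls from mu_s to mu_d over d_c, stays at mu_d, and once slip
    exceeds slip_reactivate it falls linearly again to mu_d2 over d_c2.
    All parameter values are illustrative placeholders."""
    slip = np.asarray(slip, dtype=float)
    mu = np.where(slip < d_c, mu_s - (mu_s - mu_d) * slip / d_c, mu_d)
    extra = np.clip((slip - slip_reactivate) / d_c2, 0.0, 1.0)
    return np.where(slip > slip_reactivate, mu_d - (mu_d - mu_d2) * extra, mu)

print(friction_with_reactivation([0.0, 0.5, 5.0, 22.5, 30.0]))
# -> [0.6  0.525  0.45  0.375  0.3]
```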
Real time validation of GPS TEC precursor mask for Greece
NASA Astrophysics Data System (ADS)
Pulinets, Sergey; Davidenko, Dmitry
2013-04-01
Earlier studies of pre-earthquake ionospheric variations established that, for every specific site, these variations show a definite stability in their temporal behavior within the time interval of a few days before the seismic shock. This self-similarity (characteristic of phenomena registered for processes observed close to the critical point of a system) permits us to consider these variations as a good candidate for a short-term precursor. A physical mechanism for GPS TEC variations before earthquakes has been developed within the framework of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. Taking into account the different tectonic structure and different source mechanisms of earthquakes in different regions of the globe, every site has its own individual pre-earthquake behavior, which creates an individual "imprint" on the ionospheric behavior at every given point. It is this so-called "mask" of ionospheric variability before an earthquake at a given point that creates the opportunity to detect anomalous behavior of the electron concentration in the ionosphere based not only on a statistical processing procedure but also on pattern recognition techniques, which facilitates the automatic recognition of short-term ionospheric precursors of earthquakes. Such a precursor mask was created using the GPS TEC variations around the times of 9 earthquakes with magnitudes from M6.0 to M6.9 which took place in Greece within the time interval 2006-2011. The major anomaly revealed in the relative deviation of the vertical TEC was a positive anomaly appearing at ~04 PM UT one day before the seismic shock and lasting nearly 12 hours until ~04 AM UT. To validate this approach it was decided to check the mask in real-time monitoring of earthquakes in Greece, starting from 1 December 2012, for earthquakes with magnitude greater than 4.5. During this period (until 9 January 2013) 4 seismic shocks were registered, including the largest one, M5.7, on 8 January. For all of them the mask confirmed its validity, and the 6 December event was predicted in advance.
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2008-12-01
The 2008 Iwate-Miyagi Nairiku earthquake (MJMA 7.2) of June 14, 2008, is a thrust-type inland crustal earthquake that occurred in northeastern Honshu, Japan. In order to examine the strong motion generation process of this event, the source rupture process is estimated by kinematic waveform inversion using strong motion data. Strong motion data from the K-NET and KiK-net stations and Aratozawa Dam are used. These stations are located 3-94 km from the epicenter. The original acceleration time histories are integrated into velocity and band-pass filtered between 0.05 and 1 Hz. To obtain a detailed source rupture process, an appropriate velocity structure model for the Green's functions should be used. We estimated a one-dimensional velocity structure model for each strong motion station by waveform modeling of aftershock records. The elastic wave velocities, densities, and Q-values for four sedimentary layers are assumed following previous studies. The thickness of each sedimentary layer depends on the station and is estimated to fit the observed aftershock waveforms by optimization using a genetic algorithm. A uniform layered structure model is assumed for the crust and upper mantle below the seismic bedrock. We succeeded in obtaining a reasonable velocity structure model for each station that gives a good fit to the main S-wave part of the aftershock observations. The source rupture process of the mainshock is estimated by linear kinematic waveform inversion using multiple time windows (Hartzell and Heaton, 1983). A fault plane model is assumed following the moment tensor solution by F-net, NIED. The strike and dip angles are 209° and 51°, respectively. The rupture starting point is fixed at the hypocenter located by the JMA. The obtained source model shows a large slip area in the shallow portion of the fault plane approximately 6 km southwest of the hypocenter. The rupture of the asperity finishes within about 9 s. This large slip area corresponds to the area with surface breaks reported by the field survey group (e.g., AIST/GSJ, 2008), which supports the existence of large slip close to the ground surface. However, most of the surface offsets found by the field survey are less than 0.5 m, whereas the slip amount of the shallow asperity in the source inversion result is 3-4 m. North of the hypocenter, the estimated slip amount is small. The slip direction is almost pure dip-slip for the entire fault (the northwest side goes up against the southeast side). The total seismic moment is 2.6 × 10^19 Nm (Mw 6.9). Acknowledgments: Strong motion data of K-NET and KiK-net operated by the National Research Institute for Earth Science and Disaster Prevention are used. Strong motion data of Aratozawa Dam obtained by the Miyagi prefecture government are also used in this study.
Lisbon 1755, a multiple-rupture earthquake
NASA Astrophysics Data System (ADS)
Fonseca, J. F. B. D.
2017-12-01
The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami traveltimes has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a sub-set of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explaining power, for macroseismic intensities and tsunami traveltimes taken together. I argue that the Lisbon 1755 earthquake is an example of a distinct class of intraplate earthquake previously unrecognized, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal or above 6 in 10 provinces; 830,000 fatalities). The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Düren earthquake on 18/02/1756) had in all likelihood a causal link to the Lisbon earthquake. This may reflect the very long period of surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.
NASA Astrophysics Data System (ADS)
Yolsal-Çevikbilen, Seda; Taymaz, Tuncay
2012-04-01
We studied the source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone by using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of the tsunami waves, their propagation and the effects of coastal topography. The analogy of current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid at global or local scales, were used to assume conceivable source parameters which constitute the initial and boundary conditions in the simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone can be classified into three major categories: [1] focal mechanisms of earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] focal mechanisms of earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike-slip components were observed in the eastern part of the Hellenic subduction zone, and we suggest that they are probably associated with the overriding Aegean plate. However, earthquakes involved in the convergence between the Aegean and the Eastern Mediterranean lithospheres indicated thrust faulting mechanisms with strike-slip components, and they had shallow focal depths (h < 45 km). Deeper earthquakes mainly occurred in the subducting African plate, and they exhibited dominantly strike-slip faulting mechanisms. Slip distributions on the fault planes showed both complex and simple rupture propagation depending on the variation of source mechanism and faulting geometry. We calculated low stress drop values (Δσ < 30 bars) for all earthquakes, implying typically interplate seismic activity in the region. Further, the results of the numerical simulations verified that damaging historical tsunamis along the Hellenic subduction zone are able to threaten especially the coastal plains of the Crete and Rhodes islands, SW Turkey, Cyprus, the Levantine coast, and the Nile Delta-Egypt regions. Thus, we tentatively recommend that special care be taken in the evaluation of the tsunami risk of the Eastern Mediterranean region in future studies.
Observational constraints on earthquake source scaling: Understanding the limits in resolution
Hough, S.E.
1996-01-01
I examine the resolution of the type of stress drop estimates that have been used to place observational constraints on the scaling of earthquake source processes. I first show that apparent stress and Brune stress drop are equivalent to within a constant given any source spectral decay between ω^-1.5 and ω^-3 (i.e., any plausible value), and so consistent scaling is expected for the two estimates. I then discuss the resolution and scaling of Brune stress drop estimates, in the context of empirical Green's function results from recent earthquake sequences, including the 1992 Joshua Tree, California, mainshock and its aftershocks. I show that no definitive scaling of stress drop with moment is revealed over the moment range 10^19-10^25; within this sequence, however, there is a tendency for moderate-sized (M 4-5) events to be characterized by high stress drops. However, well-resolved results for recent M > 6 events are inconsistent with any extrapolated stress increase with moment for the aftershocks. Focusing on corner frequency estimates for smaller (M < 3.5) events, I show that resolution is extremely limited even after empirical Green's function deconvolutions. A fundamental limitation to resolution is the paucity of good signal-to-noise at frequencies above 60 Hz, a limitation that will affect nearly all surficial recordings of ground motion in California and many other regions. Thus, while the best available observational results support a constant stress drop for moderate- to large-sized events, very little robust observational evidence exists to constrain the quantities that bear most critically on our understanding of source processes: stress drop values and stress drop scaling for small events.
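The Brune stress drop estimates discussed above follow from standard relations between seismic moment, corner frequency and source radius. The minimal Python sketch below illustrates that arithmetic; it assumes the usual Brune-model constants, and the moment, corner frequency and shear-wave speed values are hypothetical, not taken from the study.

from math import pi

def brune_source_radius(fc_hz, beta_m_s=3500.0):
    """Brune source radius r = 2.34 * beta / (2 * pi * fc)."""
    return 2.34 * beta_m_s / (2.0 * pi * fc_hz)

def brune_stress_drop(moment_nm, fc_hz, beta_m_s=3500.0):
    """Stress drop (Pa) = 7 * M0 / (16 * r**3)."""
    r = brune_source_radius(fc_hz, beta_m_s)
    return 7.0 * moment_nm / (16.0 * r ** 3)

if __name__ == "__main__":
    m0 = 1.0e15   # N m, roughly a magnitude ~4 event (hypothetical value)
    fc = 2.0      # Hz, hypothetical corner frequency
    print(f"radius ~ {brune_source_radius(fc):.0f} m, "
          f"stress drop ~ {brune_stress_drop(m0, fc) / 1e6:.2f} MPa")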
NASA Astrophysics Data System (ADS)
Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.
2016-12-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous fault stress and strength initial conditions, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, concurrently with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
A Bayesian analysis of the 2016 Pedernales (Ecuador) earthquake rupture process
NASA Astrophysics Data System (ADS)
Gombert, B.; Duputel, Z.; Jolivet, R.; Rivera, L. A.; Simons, M.; Jiang, J.; Liang, C.; Fielding, E. J.
2017-12-01
The 2016 Mw = 7.8 Pedernales earthquake is the largest event to strike Ecuador since 1979. Long-period W-phase and Global CMT solutions suggest that slip is not perpendicular to the trench axis, in agreement with the convergence obliquity of the Ecuadorian subduction. In this study, we propose a new co-seismic kinematic slip model obtained from the joint inversion of multiple observations in an unregularized and fully Bayesian framework. We use a comprehensive static dataset composed of several InSAR scenes, GPS static offsets, and tsunami waveforms from two nearby DART stations. The kinematic component of the rupture process is constrained by an extensive network of High-Rate GPS and accelerometers. Our solution includes the ensemble of all plausible models that are consistent with our prior information and fit the available observations within data and prediction uncertainties. We analyse the source process in light of the historical seismicity, in particular the Mw = 7.8 1942 earthquake for which the rupture extent overlaps with the 2016 event. In addition, we conduct a probabilistic comparison of co-seismic slip with a stochastic interseismic coupling model obtained from GPS data, shedding light on the processes at play within the Ecuadorian subduction margin.
A phase coherence approach to estimating the spatial extent of earthquakes
NASA Astrophysics Data System (ADS)
Hawthorne, Jessica C.; Ampuero, Jean-Paul
2016-04-01
We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources---if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal to noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at similar wavelengths as the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components on a single station, which see the same apparent source time functions.
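The Python sketch below illustrates one simplified way such an inter-station phase coherence could be computed: at each station the cross-spectrum of the two events' waveforms is reduced to a unit phasor, and the phasors are averaged across stations. This is an illustration of the general idea only, not the authors' exact algorithm, and the synthetic waveforms are hypothetical.

import numpy as np

def phase_coherence(waveforms_a, waveforms_b, dt):
    """waveforms_a, waveforms_b: (n_stations, n_samples) records of the two events."""
    spec_a = np.fft.rfft(waveforms_a, axis=1)
    spec_b = np.fft.rfft(waveforms_b, axis=1)
    cross = spec_a * np.conj(spec_b)
    unit_phasors = cross / (np.abs(cross) + 1e-20)   # keep only the phase
    coherence = np.abs(unit_phasors.mean(axis=0))    # average across stations
    freqs = np.fft.rfftfreq(waveforms_a.shape[1], dt)
    return freqs, coherence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sta, n_samp, dt = 10, 512, 0.01
    common = rng.standard_normal(n_samp)
    # Two "co-located" events: same path response per station, different noise.
    wa = np.array([np.convolve(common, rng.standard_normal(32), "same") for _ in range(n_sta)])
    wb = wa + 0.1 * rng.standard_normal((n_sta, n_samp))
    f, c = phase_coherence(wa, wb, dt)
    print("mean coherence below 20 Hz:", c[f < 20].mean().round(2))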
Analysis of Earthquake Source Spectra in Salton Trough
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.
2009-12-01
Previous studies of the source spectra of small earthquakes in southern California show that average Brune-type stress drops vary among different regions, with particularly low stress drops observed in the Salton Trough (Shearer et al., 2006). The Salton Trough marks the southern end of the San Andreas Fault and is prone to earthquake swarms, some of which are driven by aseismic creep events (Lohman and McGuire, 2007). In order to learn the stress state and understand the physical mechanisms of swarms and slow slip events, we analyze the source spectra of earthquakes in this region. We obtain Southern California Seismic Network (SCSN) waveforms for earthquakes from 1977 to 2009 archived at the Southern California Earthquake Center (SCEC) data center, which includes over 17,000 events. After resampling the data to a uniform 100 Hz sample rate, we compute spectra for both signal and noise windows for each seismogram, and select traces with a P-wave signal-to-noise ratio greater than 5 between 5 Hz and 15 Hz. Using selected displacement spectra, we isolate the source spectra from station terms and path effects using an empirical Green’s function approach. From the corrected source spectra, we compute corner frequencies and estimate moments and stress drops. Finally we analyze spatial and temporal variations in stress drop in the Salton Trough and compare them with studies of swarms and creep events to assess the evolution of faulting and stress in the region. References: Lohman, R. B., and J. J. McGuire (2007), Earthquake swarms driven by aseismic creep in the Salton Trough, California, J. Geophys. Res., 112, B04405, doi:10.1029/2006JB004596 Shearer, P. M., G. A. Prieto, and E. Hauksson (2006), Comprehensive analysis of earthquake source spectra in southern California, J. Geophys. Res., 111, B06303, doi:10.1029/2005JB003979.
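As a rough illustration of the signal-to-noise selection step described above, the sketch below computes spectra for a P-wave window and a pre-event noise window and keeps a trace only if the average spectral ratio between 5 and 15 Hz exceeds 5. The window length and the synthetic trace are hypothetical choices made only for illustration.

import numpy as np

def passes_snr(trace, dt, p_index, win_samples=256, band=(5.0, 15.0), min_snr=5.0):
    noise = trace[p_index - win_samples:p_index]
    signal = trace[p_index:p_index + win_samples]
    freqs = np.fft.rfftfreq(win_samples, dt)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    snr = np.abs(np.fft.rfft(signal))[sel].mean() / np.abs(np.fft.rfft(noise))[sel].mean()
    return snr >= min_snr

if __name__ == "__main__":
    dt = 0.01                      # 100 Hz sampling, as in the resampled data
    rng = np.random.default_rng(1)
    trace = 0.05 * rng.standard_normal(2000)
    trace[1000:1256] += 2.0 * np.sin(2 * np.pi * 8.0 * dt * np.arange(256))  # 8 Hz burst
    print("keep trace:", passes_snr(trace, dt, p_index=1000))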
Earthquake source properties from pseudotachylite
Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan
2016-01-01
The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate-sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. More generally, these events have inferred earthquake source parameters that are rare (for instance, only a few percent of the global earthquake population has stress drops as large), unless fracture energy is routinely greater than existing models allow, pseudotachylite is not representative of the shear strength during the earthquake that generated it, or the strength excess is larger than we have allowed.
NASA Astrophysics Data System (ADS)
Gok, R.; Hutchings, L.
2004-05-01
We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small- to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
A rapid estimation of tsunami run-up based on finite fault models
NASA Astrophysics Data System (ADS)
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, i.e., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, R.B.; Nguyen, B.
Earthquake activity in the New Madrid Seismic Zone has been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35°-39°N and longitudes 87°-92°W. Most of these earthquakes occur within a 1.5° x 2° zone centered on the Missouri Bootheel. Source parameters of larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth and seismic moment. Waveform modeling of broadband data is shown to be a powerful tool in defining these source parameters when used complementarily with regional seismic network data and, in addition, in verifying the correctness of previously published focal mechanism solutions.
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find the waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
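By way of contrast with the fast indexed search described above, the naive Python sketch below scans a tiny, hypothetical database of template waveforms and returns the best-correlating entry; the paper's point is that an indexed search-engine approach is several thousand times faster than this kind of exhaustive comparison.

import numpy as np

def best_match(record, database):
    """database: dict mapping source parameters -> template waveform."""
    def corr(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.dot(a, b) / len(a))
    scores = {params: corr(record, tmpl) for params, tmpl in database.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    t = np.linspace(0.0, 100.0, 1001)
    db = {("strike=30", "dip=60"): np.sin(2 * np.pi * t / 25.0),
          ("strike=120", "dip=45"): np.sin(2 * np.pi * t / 40.0)}
    obs = np.sin(2 * np.pi * t / 25.0) + 0.1 * np.random.default_rng(2).standard_normal(t.size)
    best, _ = best_match(obs, db)
    print("best-fitting template:", best)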
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential source with the greatest contribution to the site hazard is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum based on the scenario-earthquake method is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy to accept and provides a basis for the seismic design of hydraulic engineering projects.
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
NASA Astrophysics Data System (ADS)
Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit
2017-10-01
Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks and seismic events of the Hindu Kush region have been analysed for estimation of source parameters. The Mw 7.8 Gorkha, Nepal earthquake of 25 April 2015 and its six aftershocks of magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the main shock of the Gorkha earthquake. The acceleration data of eight earthquakes that occurred in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site as well as for site amplification. The strong-motion data of six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. The frequency-dependent Qβ(f) = 124 f^0.98 is computed at the Ghuttu station by using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid search algorithm. The computed seismic moment, stress drop and source radius of the earthquakes used in this work range over 8.20 × 10^16-5.72 × 10^20 Nm, 7.1-50.6 bars and 3.55-36.70 km, respectively. The results match the available values obtained by other agencies.
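The spectral-fitting step described above can be sketched as follows: the observed spectrum is corrected for attenuation with the quoted Qβ(f) = 124 f^0.98 and compared against a Brune omega-squared spectrum over a grid of corner frequencies and long-period levels. The travel time, grid ranges and synthetic spectrum below are hypothetical, and this is only an illustration of the idea, not the authors' code.

import numpy as np

def q_correction(freqs, travel_time_s):
    q = 124.0 * freqs ** 0.98
    return np.exp(np.pi * freqs * travel_time_s / q)   # removes anelastic attenuation

def brune_spectrum(freqs, omega0, fc):
    return omega0 / (1.0 + (freqs / fc) ** 2)

def fit_brune(freqs, observed, travel_time_s):
    corrected = observed * q_correction(freqs, travel_time_s)
    best = (np.inf, None, None)
    for fc in np.logspace(-1, 1, 60):            # 0.1-10 Hz corner-frequency grid
        for omega0 in np.logspace(-8, -2, 60):   # long-period level grid
            misfit = np.sum((np.log(corrected) - np.log(brune_spectrum(freqs, omega0, fc))) ** 2)
            if misfit < best[0]:
                best = (misfit, omega0, fc)
    return best[1], best[2]

if __name__ == "__main__":
    f = np.linspace(0.2, 20.0, 200)
    synthetic = brune_spectrum(f, 1e-4, 1.5) / q_correction(f, travel_time_s=60.0)
    omega0, fc = fit_brune(f, synthetic, travel_time_s=60.0)
    print(f"recovered Omega0 ~ {omega0:.1e}, fc ~ {fc:.2f} Hz")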
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Harada, Tomoya; Satake, Kenji; Ishibe, Takeo; Gusman, Aditya Riadi
2016-05-01
The July 2015 Mw 7.0 Solomon Islands tsunamigenic earthquake occurred ~40 km north of the February 2013 Mw 8.0 Santa Cruz earthquake. The proximity of the two epicenters provided unique opportunities for a comparative study of their source mechanisms and tsunami generation. The 2013 earthquake was an interplate event having a thrust focal mechanism at a depth of 30 km while the 2015 event was a normal-fault earthquake occurring at a shallow depth of 10 km in the overriding Pacific Plate. A combined use of tsunami and teleseismic data from the 2015 event revealed the north dipping fault plane and a rupture velocity of 3.6 km/s. Stress transfer analysis revealed that the 2015 earthquake occurred in a region with increased Coulomb stress following the 2013 earthquake. Spectral deconvolution, assuming the 2015 tsunami as empirical Green's function, indicated the source periods of the 2013 Santa Cruz tsunami as 10 and 22 min.
Petersen, M.D.; Dewey, J.; Hartzell, S.; Mueller, C.; Harmsen, S.; Frankel, A.D.; Rukstales, K.
2004-01-01
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and pre-historical time, as well as to identify segmentation models of faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations that were developed from seismic data obtained from the crustal interplate environment, crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for large distances that are needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment that are propagated into an intraplate tectonic environment. For the interplate and intraplate crustal earthquakes, we have applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with global earthquake data that we collected and modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we have used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of the ground motions derived for a hazard level at 2% probability of exceedance in 50 years. The largest contributors to hazard are from the Sumatran faults.
Towards Deep Learning from Twitter for Improved Tsunami Alerts and Advisories
NASA Astrophysics Data System (ADS)
Lumb, L. I.; Freemantle, J. R.
2017-12-01
Data from social-networking services increasingly complements that from traditional sources in scenarios that seek to 'cultivate' situational awareness. As false-positive alerts and retracted advisories appear to suggest, establishing a causal connection between earthquakes and tsunamis remains an extant challenge that could prove life-critical. Because posts regarding such natural disasters typically 'trend' in real time via social media, we extract tweets in an effort to elucidate this cause-effect relationship from a very different perspective. To extract content of potential geophysical value from a multiplicity of 140-character tweets streamed in real time, we apply Natural Language Processing (NLP) to the unstructured data and metadata available via Twitter. In Deep Learning from Twitter, words such as "earthquake" are represented as vectors embedded in a corpus of tweets, whose proximity to words such as "tsunami" can be subsequently quantified. Furthermore, when use is made of pre-trained word vectors available for various reference corpora, geophysically credible tweets are rendered distinguishable by quantifying similarities through use of a word-vector dot product. Finally, word-vector analogies are shown to be promising in terms of deconstructing the earthquake-tsunami relationship in terms of the cumulative effect of multiple, contributing factors (see figure). Because diction is anticipated to differ in tweets that follow a tsunami-producing earthquake, our emphasis here is on the re-analysis of actual event data extracted from Twitter that quantifies word sense relative to earthquake-only events. If proven viable, our approach could complement those measures already in place to deliver real-time alerts and advisories following tsunami-causing earthquakes. With climate change accelerating the frequency of glacial calving, and in so doing providing an alternate, potential source for tsunamis, our approach is anticipated to be of value in broader contexts.
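The word-vector proximity idea mentioned above reduces to a normalized dot product between embeddings. A tiny Python sketch, with hand-made placeholder vectors standing in for real pre-trained embeddings, is:

import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

if __name__ == "__main__":
    embeddings = {                       # hypothetical 4-d embeddings, not real ones
        "earthquake": np.array([0.9, 0.1, 0.3, 0.0]),
        "tsunami":    np.array([0.8, 0.2, 0.4, 0.1]),
        "coffee":     np.array([0.0, 0.9, 0.1, 0.7]),
    }
    print("earthquake~tsunami:", round(cosine_similarity(embeddings["earthquake"], embeddings["tsunami"]), 2))
    print("earthquake~coffee: ", round(cosine_similarity(embeddings["earthquake"], embeddings["coffee"]), 2))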
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
NASA Astrophysics Data System (ADS)
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly speeded up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
Observation of the seismic nucleation phase in the Ridgecrest, California, earthquake sequence
Ellsworth, W.L.; Beroza, G.C.
1998-01-01
Near-source observations of five M 3.8-5.2 earthquakes near Ridgecrest, California are consistent with the presence of a seismic nucleation phase. These earthquakes start abruptly, but then slow or stop before rapidly growing again toward their maximum rate of moment release. Deconvolution of instrument and path effects by empirical Green's functions demonstrates that the initial complexity at the start of the earthquake is a source effect. The rapid growth of the P-wave arrival at the start of the seismic nucleation phase supports the conclusion of Mori and Kanamori [1996] that these earthquakes begin without a magnitude-scaled slow initial phase of the type observed by Iio [1992, 1995].
The 2006 Java Earthquake revealed by the broadband seismograph network in Indonesia
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Miyakawa, K.; Yamashina, T.; Inoue, H.; Ishida, M.; Aoi, S.; Morikawa, N.; Harjadi, P.
2006-12-01
On May 27, 2006, local time, a moderate-size earthquake (Mw=6.4) occurred in central Java. This earthquake caused severe damage near Yogyakarta City and killed more than 5700 people. To estimate the source mechanism and location of this earthquake, we performed a waveform inversion of the broadband seismograms recorded by a nationwide seismic network in Indonesia (Realtime-JISNET). Realtime-JISNET is a part of the broadband seismograph network developed by an international cooperation among Indonesia, Germany, China, and Japan, aiming at improving the capabilities to monitor seismic activity and tsunami generation in Indonesia. Twelve stations in Realtime-JISNET were in operation when the earthquake occurred. We used the three-component seismograms from the two closest stations, which were located about 100 and 300 km from the source. In our analysis, we assumed a pure double couple as the source mechanism, thus reducing the number of free parameters in the waveform inversion. Therefore we could stably estimate the source mechanism using the signals observed by a small number of seismic stations. We carried out a grid search with respect to strike, dip, and rake angles to investigate fault orientation and slip direction. We determined source-time functions of the moment-tensor components in the frequency domain for each set of strike, dip, and rake angles. We also conducted a spatial grid search to find the best-fit source location. The best-fit source was approximately 12 km SSE of Yogyakarta at a depth of 10 km below sea level, immediately below the area of extensive damage. The focal mechanism indicates that this earthquake was caused by compressive stress in the NS direction and strike-slip motion was dominant. The moment magnitude (Mw) was 6.4. We estimated the seismic intensity in the areas of severe damage using the source parameters and an empirical attenuation relation for averaged peak ground velocity (PGV) of horizontal seismic motion. We then calculated the instrumental modified Mercalli intensity (Imm) from the estimated PGV values. Our result indicates that strong ground motion with Imm of 7 or more occurred within 10 km of the earthquake fault, although the actual seismic intensity can be affected by shallow structural heterogeneity. We therefore conclude that the severe damage caused by the Java earthquake is attributed to the strong ground motion, which was primarily caused by the source located immediately below the populated areas.
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
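For a fixed source location and source time function, the moment-tensor part of such inversions is linear: observed seismograms are a weighted sum of elementary-source Green's functions, so the six moment-tensor elements follow from least squares. The Python sketch below illustrates this with random placeholder Green's functions and a hypothetical tensor, not real synthetics.

import numpy as np

def invert_moment_tensor(greens, data):
    """greens: (n_samples, 6) matrix of elementary-source seismograms,
    data: (n_samples,) observed waveform. Returns the 6 moment-tensor elements."""
    m, residuals, rank, _ = np.linalg.lstsq(greens, data, rcond=None)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    G = rng.standard_normal((500, 6))                       # placeholder Green's functions
    m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])     # hypothetical tensor
    d = G @ m_true + 0.01 * rng.standard_normal(500)
    print("recovered elements:", np.round(invert_moment_tensor(G, d), 2))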
Miller, John J.; von Huene, Roland E.; Ryan, Holly F.
2014-01-01
In 1946 at Unimak Pass, Alaska, a tsunami destroyed the lighthouse at Scotch Cap, Unimak Island, took 159 lives on the Hawaiian Islands, damaged island coastal facilities across the south Pacific, and destroyed a hut in Antarctica. The tsunami magnitude of 9.3 is comparable to the magnitude 9.1 tsunami that devastated the Tohoku coast of Japan in 2011. Both causative earthquake epicenters occurred in shallow reaches of the subduction zone. Contractile tectonism along the Alaska margin presumably generated the far-field tsunami by producing a seafloor elevation change. However, the Scotch Cap lighthouse was destroyed by a near-field tsunami that was probably generated by a coeval large undersea landslide, yet bathymetric surveys showed no fresh large landslide scar. We investigated this problem by reprocessing five seismic lines, presented here as high-resolution graphic images, both uninterpreted and interpreted, and available for the reader to download. In addition, the processed seismic data for each line are available for download as seismic industry-standard SEG-Y files. One line, processed through prestack depth migration, crosses a 10 × 15 kilometer and 800-meter-high hill presumed previously to be basement, but that instead is composed of stratified rock superimposed on the slope sediment. This image and multibeam bathymetry illustrate a slide block that could have sourced the 1946 near-field tsunami because it is positioned within a distance determined by the time between earthquake shaking and the tsunami arrival at Scotch Cap and is consistent with the local extent of high runup of 42 meters along the adjacent Alaskan coast. The Unimak/Scotch Cap margin is structurally similar to the 2011 Tohoku tsunamigenic margin where a large landslide at the trench, coeval with the Tohoku earthquake, has been documented. Further study can improve our understanding of tsunami sources along Alaska’s erosional margins.
Savage, J.C.; Yu, S.-B.
2007-01-01
We treat both the number of earthquakes and the deformation following a mainshock as the superposition of a steady background accumulation and the post-earthquake process. The preseismic displacement and seismicity rates r_u and r_E are used as estimates of the background rates. Let t be the time after the mainshock, u(t) + u_0 the postseismic displacement less the background accumulation r_u t, and ΔN(t) the observed cumulative number of postseismic earthquakes less the background accumulation r_E t. For the first 160 days (duration limited by the occurrence of another nearby earthquake) following the Chengkung (M 6.5, 10 December 2003, eastern Taiwan) and the first 560 days following the Parkfield (M 6.0, 28 September 2004, central California) earthquakes, u(t) + u_0 is a linear function of ΔN(t). The aftershock accumulation ΔN(t) for both earthquakes is described by the modified Omori law dΔN/dt ∝ (1 + t/τ)^(-p) with p = 0.96 and τ = 0.03 days. Although the Chengkung earthquake involved sinistral, reverse slip on a moderately dipping fault and the Parkfield earthquake right-lateral slip on a near-vertical fault, the earthquakes share an unusual feature: both occurred on faults exhibiting interseismic fault creep at the surface. The source of the observed postseismic deformation appears to be afterslip on the coseismic rupture. The linear relation between u(t) + u_0 and ΔN(t) suggests that this afterslip also generates the aftershocks. The linear relation between u(t) + u_0 and ΔN(t) does not obtain after either the 1999 M 7.1 Hector Mine (southern California) or the 1999 M 7.6 Chi-Chi (central Taiwan) earthquakes, neither of which occurred on fault segments exhibiting fault creep.
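For reference, the modified Omori law quoted above integrates in closed form to a cumulative count, which the short Python sketch below evaluates; the productivity constant c0 is an arbitrary scaling chosen only for illustration.

def cumulative_omori(t_days, p=0.96, tau=0.03, c0=100.0):
    """Integral of c0*(1 + t/tau)**(-p) from 0 to t, valid for p != 1."""
    return c0 * tau / (1.0 - p) * ((1.0 + t_days / tau) ** (1.0 - p) - 1.0)

if __name__ == "__main__":
    for t in (1.0, 10.0, 160.0):     # days after the mainshock
        print(f"t = {t:6.1f} d : cumulative count ~ {cumulative_omori(t):8.1f}")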
The 2017 Maple Creek Seismic Swarm in Yellowstone National Park
NASA Astrophysics Data System (ADS)
Pang, G.; Hale, J. M.; Farrell, J.; Burlacu, R.; Koper, K. D.; Smith, R. B.
2017-12-01
The University of Utah Seismograph Stations (UUSS) performs near-real-time monitoring of seismicity in the region around Yellowstone National Park in partnership with the United States Geological Survey and the National Park Service. UUSS operates and maintains 29 seismic stations with network code WY (short-period, strong-motion, and broadband) and records data from five other seismic networks (IW, MB, PB, TA, and US) to enhance the location capabilities in the Yellowstone region. A seismic catalog is produced using a conventional STA/LTA detector and single-event location techniques (Hypoinverse). On June 12, 2017, a seismic swarm began in Yellowstone National Park about 5 km east of Hebgen Lake. The swarm is adjacent to the source region of the 1959 MW 7.3 Hebgen Lake earthquake, in an area corresponding to positive Coulomb stress change from that event. As of Aug. 1, 2017, the swarm consists of 1481 earthquakes with 1 earthquake above magnitude 4, 8 earthquakes in the magnitude 3 range, 115 earthquakes in the magnitude 2 range, 469 earthquakes in the magnitude 1 range, 856 earthquakes in the magnitude 0 range, 22 earthquakes with negative magnitudes, and 10 earthquakes with no magnitude. Earthquake depths are mostly between 3 and 10 km and earthquake depth increases toward the northwest. Moment tensors for the 2 largest events (3.6 MW and 4.4 MW) show strike-slip faulting with T axes oriented NE-SW, consistent with the regional stress field. We are currently using waveform cross-correlation methods to measure differential travel times that are being used with the GrowClust program to generate high-accuracy relative relocations. Those locations will be used to identify structures in the seismicity and make inferences about the tectonic and magmatic processes causing the swarm.
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are highly uncertain, particularly the casualty numbers, which must be regarded as estimates rather than firm numbers for many earthquakes. Consequently, we encourage contributions from the seismology and earthquake engineering communities to further improve this resource via the Wikipedia page and personal communications, for the benefit of the whole community.
DOT National Transportation Integrated Search
2016-12-01
A large magnitude long duration subduction earthquake is impending in the Pacific Northwest, which lies near the Cascadia Subduction Zone (CSZ). Great subduction zone earthquakes are the largest earthquakes in the world and are the sole source zo...
Microtremor Survey on Povoação County (S. Miguel Island, Azores): Data Analysis and Interpretation
NASA Astrophysics Data System (ADS)
Teves-Costa, P.; Riedel, C.; Vales, D.; Wallenstein, N.; Borges, A.; Senos, M. L.; Gaspar, J. L.; Queiroz, G.
The seismic activity of the Azores Islands has been known since the beginning of their settlement in the middle of the XV century. About 30 earthquakes produced socially and economically important damage. The analysis of the damage distribution for several earthquakes systematically shows the existence of site effects. In order to understand the initial cause of these effects, three zones with different geological and geomorphological characteristics were selected in Povoação County to perform a microtremor survey. Seismic data were recorded on a grid of 50 m in the three regions, using a 3-component Lennartz 1 Hz seismometer with a sampling rate of 8 ms. The stations were deployed for 5 minutes or more to record microtremor imposed on the topmost layers by natural and anthropogenic sources. The data were processed using two different subroutine packages in order to estimate the H/V ratio, defined according to the Nakamura methodology. However, the two processing routines gave different results, which forced us to revise all the procedures and to identify the main factors that caused the discrepancy. Three portable seismic stations were installed at three fixed points for about three months, aiming to record some earthquakes. Several small-magnitude earthquakes (m < 3.0) were recorded, and these data were processed in the same way as the noise data, yielding reference H/V ratios. The interpretation of the dominant frequencies, for noise and small-magnitude earthquakes, was performed taking into consideration not only the geological characteristics but also the structural geomorphology.
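A minimal Python sketch of a Nakamura-style H/V estimate is given below: smoothed amplitude spectra of the two horizontal components are combined as a geometric mean and divided by the vertical-component spectrum, and the peak of the ratio is read as the dominant site frequency. The smoothing, windowing and synthetic records are hypothetical simplifications, not a reconstruction of the two processing packages actually used.

import numpy as np

def hv_ratio(north, east, vertical, dt, smooth=11):
    def amp_spec(x):
        s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        kernel = np.ones(smooth) / smooth
        return np.convolve(s, kernel, mode="same")       # simple spectral smoothing
    freqs = np.fft.rfftfreq(len(vertical), dt)
    h = np.sqrt(amp_spec(north) * amp_spec(east))        # geometric mean of horizontals
    return freqs, h / (amp_spec(vertical) + 1e-12)

if __name__ == "__main__":
    dt, n = 0.008, 37500                # 8 ms sampling, ~5 minutes of record
    rng = np.random.default_rng(4)
    t = np.arange(n) * dt
    site = np.sin(2 * np.pi * 2.0 * t)  # pretend 2 Hz site resonance (hypothetical)
    nth, est = rng.standard_normal(n) + site, rng.standard_normal(n) + site
    ver = rng.standard_normal(n)
    f, hv = hv_ratio(nth, est, ver, dt)
    mask = f > 0.5
    print("H/V peak near", round(f[mask][np.argmax(hv[mask])], 2), "Hz")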
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture area and average slip from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and the locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Spatial and Temporal Stress Drop Variations of the 2011 Tohoku Earthquake Sequence
NASA Astrophysics Data System (ADS)
Miyake, H.
2013-12-01
The 2011 Tohoku earthquake sequence consists of foreshocks, the mainshock, aftershocks, and repeating earthquakes. Quantifying spatial and temporal stress drop variations is important for understanding M9-class megathrust earthquakes. The variability and the spatial and temporal pattern of stress drop provide basic information for rupture dynamics and are useful for source modeling. As pointed out in the ground motion prediction equations of Campbell and Bozorgnia [2008, Earthquake Spectra], mainshock-aftershock pairs often show a significant decrease of stress drop. We here focus on strong motion records before and after the Tohoku earthquake and analyze source spectral ratios considering azimuth and distance dependency [Miyake et al., 2001, GRL]. Due to the limitation of station locations on land, spatial and temporal stress drop variations are estimated by adjusting shifts from the omega-squared source spectral model. The adjustment is based on stochastic Green's function simulations of source spectra considering azimuth and distance dependency. Since we assumed the same Green's functions for each event pair at each station, both the propagation path and site amplification effects cancel out. Although precise studies of spatial and temporal stress drop variations have been performed [e.g., Allmann and Shearer, 2007, JGR], this study targets the relation between stress drop and the progression of slow slip prior to the Tohoku earthquake [Kato et al., 2012, Science] as well as plate structures. Acknowledgement: This study is partly supported by ERI Joint Research (2013-B-05). We used the JMA unified earthquake catalogue and K-NET, KiK-net, and F-net data provided by NIED.
Leith, William S.; Benz, Harley M.; Herrmann, Robert B.
2011-01-01
Evaluation of seismic monitoring capabilities in the central and eastern United States for critical facilities - including nuclear powerplants - focused on specific improvements to understand better the seismic hazards in the region. The report is not an assessment of seismic safety at nuclear plants. To accomplish the evaluation and to provide suggestions for improvements using funding from the American Recovery and Reinvestment Act of 2009, the U.S. Geological Survey examined addition of new strong-motion seismic stations in areas of seismic activity and addition of new seismic stations near nuclear power-plant locations, along with integration of data from the Transportable Array of some 400 mobile seismic stations. Some 38 and 68 stations, respectively, were suggested for addition in active seismic zones and near-power-plant locations. Expansion of databases for strong-motion and other earthquake source-characterization data also was evaluated. Recognizing pragmatic limitations of station deployment, augmentation of existing deployments provides improvements in source characterization by quantification of near-source attenuation in regions where larger earthquakes are expected. That augmentation also supports systematic data collection from existing networks. The report further utilizes the application of modeling procedures and processing algorithms, with the additional stations and the improved seismic databases, to leverage the capabilities of existing and expanded seismic arrays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Türker, Tuğba, E-mail: tturker@ktu.edu.tr; Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr
The North Anatolian Fault (NAF) is one of the most important strike-slip fault zones in the world and is located in a region of very high seismic activity. Very large earthquakes have been observed on the NAFZ from the past to the present. In this study, the key parameters of the Gutenberg-Richter relationship (a and b values) were estimated and, taking these parameters into account, earthquakes between 1900 and 2015 were examined for 10 different seismic source regions of the NAFZ. Occurrence probabilities and return periods of earthquakes in the fault zone over the coming years were then estimated, and the earthquake hazard of the NAFZ was assessed with the Poisson method. Region 2 experienced its largest earthquakes only in the historical period; no large earthquake has been observed there in the instrumental period. Two historical earthquakes (1766, MS=7.3 and 1897, MS=7.0) are included for Region 2 (Marmara Region), where a large earthquake is expected in the coming years. For the 10 seismic source regions, the cumulative number-magnitude relationships were determined and the a and b parameters were estimated with the Gutenberg-Richter equation log N = a - bM. A homogeneous earthquake catalog with MS magnitude equal to or larger than 4.0 is used for the time period between 1900 and 2015. The catalog database used in the study was created from the International Seismological Centre (ISC) and the Boğaziçi University Kandilli Observatory and Earthquake Research Institute (KOERI); earthquake data from 1900 to 1974 were obtained from KOERI and ISC, and from 1974 to 2015 from KOERI. The probabilities of earthquake occurrence are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in the 10 different seismic source regions. The highest occurrence probabilities among the 10 regions were estimated for the Tokat-Erzincan region (Region 9): a 99% probability for magnitude 6.5 (return period 24.7 years), 92% for magnitude 7 (return period 39.1 years), 80% for magnitude 7.5 (return period 62.1 years), and 64% for magnitude 8 (return period 98.5 years). For the Marmara Region (Region 2) in the next 100 years, the estimates are an 89% probability for magnitude 6 (return period 44.9 years), 45% for magnitude 6.5 (return period 87 years), and 45% for magnitude 7 (return period 168.6 years).
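The occurrence-probability arithmetic described above combines the Gutenberg-Richter rate with a Poisson model: log10 N = a - bM gives the annual rate of events of magnitude >= M, its reciprocal is the return period, and 1 - exp(-t/T) is the probability of at least one such event in t years. The Python sketch below reproduces that chain of reasoning with placeholder a and b values, not the ones estimated in the study.

from math import exp

def annual_rate(magnitude, a, b):
    """Gutenberg-Richter annual rate of events with magnitude >= M."""
    return 10.0 ** (a - b * magnitude)

def poisson_probability(magnitude, a, b, t_years):
    return_period = 1.0 / annual_rate(magnitude, a, b)
    return 1.0 - exp(-t_years / return_period), return_period

if __name__ == "__main__":
    a, b = 4.5, 0.9                       # hypothetical regional G-R parameters
    for m in (6.0, 6.5, 7.0, 7.5):
        prob, tr = poisson_probability(m, a, b, t_years=100.0)
        print(f"M >= {m}: return period ~ {tr:6.1f} yr, P(100 yr) ~ {prob:.2f}")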
NASA Astrophysics Data System (ADS)
Tan, F.; Wang, G.; Chen, C.; Ge, Z.
2016-12-01
Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases. Therefore, thorough research comparing and testing these methods is needed. We wrote a GUI program that puts the three methods together so that users can conveniently process the same data with different methods and compare the results. We then use all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw 7.9 earthquake and the 2011 Tohoku-Oki Mw 9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, calculation window length and so on. In general, back projection makes it possible to obtain a good result in a very short time using fewer than 20 lines of high-quality data with a proper station distribution, but the swimming artifact can be significant. Some measures, for instance combining global seismic data, could help mitigate this artifact. MUSIC back projection needs relatively more data to obtain a better and more stable result, which means it needs much more time, since its runtime grows noticeably faster than that of back projection as the number of stations increases. Compressive sensing deals more effectively with multiple sources in the same time window but costs the longest time because it repeatedly solves a matrix inversion problem. The resolution of all the methods is complicated and depends on many factors; an important one is the grid size, which in turn influences the runtime significantly. More detailed results from this research may help users choose proper data, methods and parameters.
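As a point of reference for the comparison above, a bare-bones version of conventional time-domain back-projection is sketched below: for each trial grid point, station traces are shifted by predicted travel times and stacked, and the stack power maps the radiating parts of the source. A constant apparent velocity is used as a crude stand-in for a travel-time table, and all geometry and waveforms are hypothetical.

import numpy as np

def back_project(traces, dt, station_xy, grid_xy, app_velocity_km_s=8.0):
    """traces: (n_sta, n_samp); station_xy, grid_xy: coordinates in km."""
    n_sta, n_samp = traces.shape
    power = np.zeros(len(grid_xy))
    for g, gxy in enumerate(grid_xy):
        dist = np.linalg.norm(station_xy - gxy, axis=1)
        shifts = np.round((dist / app_velocity_km_s) / dt).astype(int)
        stack = np.zeros(n_samp)
        for k in range(n_sta):
            stack += np.roll(traces[k], -shifts[k])     # align on predicted arrival
        power[g] = np.sum((stack / n_sta) ** 2)
    return power

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    stations = rng.uniform(0.0, 1000.0, size=(20, 2))   # station coordinates in km
    grid = np.array([[x, 500.0] for x in range(0, 1001, 100)], float)
    true_src, dt = np.array([500.0, 500.0]), 0.05
    traces = 0.1 * rng.standard_normal((20, 2000))
    for k, sxy in enumerate(stations):                  # plant a pulse at each station
        arrival = int(np.linalg.norm(sxy - true_src) / 8.0 / dt)
        traces[k, arrival:arrival + 10] += 1.0
    p = back_project(traces, dt, stations, grid)
    print("brightest grid point:", grid[np.argmax(p)])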
Southern Mariana OBS Experiment and Preliminary Results of Passive-Source Investigations
NASA Astrophysics Data System (ADS)
Le, B. M.; Lin, J.; Yang, T.; Shiyan 3, S. P. O. R.
2017-12-01
The Southern Mariana OBS Experiment (SMOE) was one of the first seismic experiments targeting the deepest part of the Earth's surface. During the Phase I experiment in December 2016, an array of OBS instruments was deployed across the Challenger Deep that recorded both active-source and passive-source data. During the Phase II experiment, from December 2016 to June 2017, passive-source data were recorded. We have retrieved earthquake signals and processed the waveforms from the recorded global, regional and local events during the Phase I experiment. Most of the waveforms recorded by the OBS array have fairly good quality with discernible main phases. Rayleigh waves from many earthquakes were analyzed using frequency-time analysis and their group velocities at different periods were obtained. The dispersion curves from different Rayleigh-wave propagation paths will be valuable for inverting the structure of the subducting Pacific and overriding Philippine Sea plates. Furthermore, we applied the ambient noise cross-correlation method and retrieved high-quality coherent surface-wave waveforms. With their relatively high frequencies, these surface waves can be used to study the crustal structure of the region. Together with the Phase II data, we expect that this seismic experiment will provide unprecedented constraints on the structure and geodynamic processes of the southern Mariana trench.
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Maeda, Takuto; Furumura, Takashi; Obara, Kazushige
2016-05-01
In this study, the source location of the 30 May 2015 (Mw 7.9) deep-focus Bonin earthquake was constrained using P wave seismograms recorded across Japan. We focus on the propagation characteristics of high-frequency P waves. Deep-focus intraslab earthquakes typically show spindle-shaped seismogram envelopes with peak delays of several seconds and subsequent long-duration coda waves; however, both the main shock and the aftershock of the 2015 Bonin event exhibited pulse-like P wave propagation with high apparent velocities (~12.2 km/s). These P wave propagation features were reproduced by finite-difference method simulations of seismic wave propagation for a slab-bottom source. The pulse-like P wave seismogram envelopes observed for the 2015 Bonin earthquake show that its source was located at the bottom of the Pacific slab at a depth of ~680 km, rather than within its middle or upper regions.
NASA Astrophysics Data System (ADS)
Zheng, Y.
2016-12-01
On November 22, 2014, the Ms 6.3 Kangding earthquake ended a 30-year absence of strong earthquakes on the Xianshuihe fault zone. The focal mechanism and centroid depth of the Kangding earthquake were inverted from teleseismic waveforms and regional seismograms with the CAP method. The result shows that the two nodal planes of the focal mechanism are 235°/82°/-173° and 144°/83°/-8°, respectively; the latter nodal plane should be the ruptured fault plane, with a focal depth of 9 km. The rupture process model of the Kangding earthquake was obtained by joint inversion of teleseismic data and regional seismograms. The Kangding earthquake is a bilateral event; the major rupture zone lies within a depth range of 5-15 km, spanning 10 km and 12 km along the dip and strike directions, and the maximum slip is about 0.5 m. Most of the seismic moment was released during the first 5 s, and the magnitude is Mw 6.01, smaller than that of the model determined from InSAR data. The discrepancy between the co-seismic rupture models of the Kangding earthquake and its Ms 5.8 aftershock and the InSAR model implies that significant afterslip occurred in the two weeks after the mainshock. The afterslip released energy equivalent to an Mw 5.9 earthquake and is concentrated mainly on the northwest side of, and shallower than, the rupture zone. The Coulomb failure stress (CFS) near the epicenter of the 2014 Kangding earthquake was increased by the 2008 Wenchuan earthquake, implying that the Kangding earthquake could have been triggered by the Wenchuan earthquake. The CFS on the northwest section of the seismic gap along the Kangding-Daofu segment was increased by the Kangding earthquake, and the rupture slip of the Kangding earthquake sequence is too small to release the strain accumulated in the seismic gap. Consequently, the northwest section of the Kangding-Daofu seismic gap remains under high seismic hazard.
Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory
NASA Astrophysics Data System (ADS)
Xu, Changyi; Sun, Wenke
2014-12-01
In this paper, we propose an approach to compute the coseismic change in the Earth's volume based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: shear, tensile, and explosion sources. We conduct case studies for the 2004 Sumatra earthquake (Mw 9.3), the 2010 Chile earthquake (Mw 8.8), the 2011 Tohoku-Oki earthquake (Mw 9.0), and the 2013 Okhotsk Sea earthquake (Mw 8.3). The results show that mega-thrust earthquakes make the Earth expand, whereas earthquakes on normal faults make the Earth contract. We compare the volume changes computed for finite fault models and for a point source of the 2011 Tohoku-Oki earthquake (Mw 9.0). The large difference between these results indicates that the coseismic changes in the Earth's volume (or mean radius) depend strongly on the earthquake's focal mechanism, especially the depth and the dip angle. We then estimate the cumulative volume change caused by historical earthquakes (Mw ≥ 7.0) since 1960 and obtain an expansion rate of the Earth's mean radius of about 0.011 mm yr-1.
Pre-earthquake Magnetic Pulses
NASA Astrophysics Data System (ADS)
Scoville, J.; Heraud, J. A.; Freund, F. T.
2015-12-01
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically, and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, suggesting that the pulses could be the result of geophysical semiconductor processes.
Research on the spatial analysis method of seismic hazard for island
NASA Astrophysics Data System (ADS)
Jia, Jing; Jiang, Jitong; Zheng, Qiuhong; Gao, Huiying
2017-05-01
Seismic hazard analysis (SHA) is a key component of earthquake disaster prevention for island engineering: at the site scale its results provide parameters for seismic design, and at the planning scale it is prerequisite work for the earthquake and comprehensive disaster prevention components of island conservation planning, during the exploitation and construction of both inhabited and uninhabited islands. Existing seismic hazard analysis methods are compared in terms of their application, and their applicability and limitations for islands are analyzed. A specialized spatial analysis method of seismic hazard for islands (SAMSHI) is then proposed to support further work on earthquake disaster prevention planning, based on GIS spatial analysis tools and a fuzzy comprehensive evaluation model. The basic spatial database of SAMSHI includes fault data, historical earthquake records, geological data, and Bouguer gravity anomaly data, which are the data sources for the 11 indices of the fuzzy comprehensive evaluation model; these indices are calculated by the spatial analysis model constructed in ArcGIS's ModelBuilder platform.
USGS remote sensing coordination for the 2010 Haiti earthquake
Duda, Kenneth A.; Jones, Brenda
2011-01-01
In response to the devastating 12 January 2010 earthquake in Haiti, the US Geological Survey (USGS) provided essential coordination services for remote sensing activities. Communication was rapidly established between the widely distributed response teams and data providers to define imaging requirements and sensor tasking opportunities. Data acquired from a variety of sources were received and archived by the USGS, and these products were subsequently distributed using the Hazards Data Distribution System (HDDS) and other mechanisms. Within six weeks after the earthquake, over 600,000 files representing 54 terabytes of data were provided to the response community. The USGS directly supported a wide variety of groups in their use of these data to characterize post-earthquake conditions and to make comparisons with pre-event imagery. The rapid and continuing response was enabled by existing imaging and ground systems and by skilled personnel adept in all aspects of satellite data acquisition, processing, distribution, and analysis. The information derived from image interpretation assisted senior planners and on-site teams to direct assistance where it was most needed.
Slow Earthquakes in the Microseism Frequency Band (0.1-1.0 Hz) off Kii Peninsula, Japan
NASA Astrophysics Data System (ADS)
Kaneko, Lisa; Ide, Satoshi; Nakano, Masaru
2018-03-01
It is difficult to detect the signal of slow deformation in the 0.1-1.0 Hz frequency band between tectonic tremors and very low frequency events, where microseism noise is dominant. Here we provide the first evidence of slow earthquakes in this microseism band, observed by the DONET1 ocean bottom seismometer network, after an Mw 5.8 earthquake off Kii Peninsula, Japan, on 1 April 2016. The signals in the microseism band were accompanied by signals from active tremors, very low frequency events, and slow slip events that radiated from the shallow plate interface. We report the detection and locations of events across five frequency bands, including the microseism band. The locations and timing of the events estimated in the different frequency bands are similar, suggesting that these signals radiated from a common source. The observed variations in detectability for each band highlight the complexity of the slow earthquake process.
The isolated ˜680 km deep 30 May 2015 MW 7.9 Ogasawara (Bonin) Islands earthquake
NASA Astrophysics Data System (ADS)
Ye, Lingling; Lay, Thorne; Zhan, Zhongwen; Kanamori, Hiroo; Hao, Jin-Lai
2016-01-01
Deep-focus earthquakes, located in very high-pressure conditions 300 to 700 km below the Earth's surface within sinking slabs of relatively cold oceanic lithosphere, are mysterious phenomena. The largest recorded deep-focus earthquake (MW 7.9) in the Izu-Bonin slab struck on 30 May 2015 beneath the Ogasawara (Bonin) Islands, isolated from prior seismicity by over 100 km in depth, and followed by only a few small aftershocks. Globally, this is the deepest (680 km centroid depth) event with MW ≥ 7.8 in the seismological record. Seismicity indicates along-strike contortion of the Izu-Bonin slab, with horizontal flattening near a depth of 550 km in the Izu region and rapid steepening to near-vertical toward the south above the location of the 2015 event. This event was exceptionally well-recorded by seismic stations around the world, allowing detailed constraints to be placed on the source process. Analyses of a large global data set of P, SH and pP seismic phases using short-period back-projection, subevent directivity, and broadband finite-fault inversion indicate that the mainshock ruptured a shallowly-dipping fault plane with patchy slip that spread over a distance of ∼40 km with a multi-stage expansion rate (∼5+ km/s down-dip initially, ∼3 km/s up-dip later). During the 17 s total rupture duration the radiated energy was ∼3.3 × 10^16 J and the stress drop was ∼38 MPa. The radiation efficiency is moderate (0.34), intermediate to that of the 1994 Bolivia and 2013 Sea of Okhotsk MW 8.3 deep earthquakes, indicating that source processes of very large deep earthquakes sample a wide range of behavior from dissipative, more viscous failure to very brittle failure. The isolated occurrence of the event, much deeper than the apparently thermally-bounded distribution of Bonin-slab seismicity above 600 km depth, suggests that localized stress concentration associated with the pronounced deformation of the Izu-Bonin slab and proximity to the 660-km phase transition likely played a dominant role in generating this major earthquake.
Analysis of ground-motion simulation big data
NASA Astrophysics Data System (ADS)
Maeda, T.; Fujiwara, H.
2016-12-01
We developed a parallel distributed processing system that applies big data analysis to large-scale ground-motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used peak ground velocity and velocity response spectra as the ground-motion indices; the index values are calculated from our simulation data. We used simulated long-period ground-motion waveforms at about 80,000 meshes, calculated by a three-dimensional finite difference method for 369 earthquake scenarios of a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax, and slip function; we used these parameters as the earthquake scenario parameters. The system first clusters the earthquake scenarios in each mesh by the k-means method, with the number of clusters determined in advance using hierarchical clustering with Ward's method. The scenario clustering result is converted to a 1-D feature vector whose dimension is the number of scenario combinations: if two scenarios belong to the same cluster, the corresponding component of the feature vector is 1; otherwise it is 0. The feature vector thus represents the `response' of a mesh to the assumed group of earthquake scenarios. Next, the system clusters the meshes by the k-means method using the feature vector of each mesh obtained previously; here the number of clusters is given arbitrarily. The clustering of scenarios and of meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are geographically concentrated, so the system can extract regions whose meshes have a similar `response' as clusters. For each cluster, it is possible to determine the particular scenario parameters that characterize the cluster. In other words, by using this system we can objectively obtain the critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
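As a minimal sketch of the two-stage clustering described above (per-mesh k-means over scenarios, binary co-membership feature vectors over scenario pairs, then k-means over meshes), the following Python uses scikit-learn. The cluster counts and the use of the raw ground-motion index as the per-mesh clustering feature are illustrative assumptions; the study fixes the scenario cluster count with Ward's method and runs the workflow on Hadoop/Spark.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def mesh_feature_vector(scenario_labels):
    """Binary co-membership vector: one component per scenario pair,
    1 if both scenarios fall in the same cluster at this mesh, else 0."""
    pairs = combinations(range(len(scenario_labels)), 2)
    return np.array([1 if scenario_labels[i] == scenario_labels[j] else 0
                     for i, j in pairs])

def cluster_meshes(ground_motion, n_scen_clusters=5, n_mesh_clusters=20):
    """ground_motion[i_mesh, i_scenario]: e.g. peak ground velocity per scenario."""
    features = []
    for gm in ground_motion:                                      # loop over meshes
        labels = KMeans(n_clusters=n_scen_clusters,
                        n_init=10).fit_predict(gm.reshape(-1, 1)) # cluster scenarios
        features.append(mesh_feature_vector(labels))
    features = np.array(features)
    return KMeans(n_clusters=n_mesh_clusters,
                  n_init=10).fit_predict(features)                # cluster the meshes
```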
NASA Astrophysics Data System (ADS)
Power, William; Clark, Kate; King, Darren N.; Borrero, Jose; Howarth, Jamie; Lane, Emily M.; Goring, Derek; Goff, James; Chagué-Goff, Catherine; Williams, James; Reid, Catherine; Whittaker, Colin; Mueller, Christof; Williams, Shaun; Hughes, Matthew W.; Hoyle, Jo; Bind, Jochen; Strong, Delia; Litchfield, Nicola; Benson, Adrian
2017-07-01
The 2016 Mw 7.8 Kaikōura earthquake was one of the largest earthquakes in New Zealand's historical record, and it generated the most significant local-source tsunami to affect New Zealand since 1947. There are many unusual features of this earthquake from a tsunami perspective: the epicentre was well inland of the coast, multiple faults were involved in the rupture, and the greatest tsunami damage to residential property was far from the source. In this paper, we summarise the tectonic setting and the historical and geological evidence for past tsunamis on this coast, then present tsunami tide gauge and runup field observations of the tsunami that followed the Kaikōura earthquake. For the size of the tsunami, as inferred from the measured heights, the impact of this event was relatively modest, and we discuss the reasons for this, which include: the state of the tide at the time of the earthquake, the degree of co-seismic uplift, and the nature of the coastal environment in the tsunami source region.
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located along the east-west trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources, and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We modelled the wave propagation using the NAMI-DANCE code and tested different fault sources using synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.
Earthquake Hoax in Ghana: Exploration of the Cry Wolf Hypothesis
Aikins, Moses; Binka, Fred
2012-01-01
This paper investigated, in the context of the Cry Wolf hypothesis, whether people believe news of an impending earthquake from any source, as well as whether they believe news of any other imminent disaster from any source. We were also interested in the correlation between preparedness, risk perception, and antecedents. This explorative study consisted of interviews and literature and Internet reviews. Simple random sampling was used, stratified by sex and residence type. The sample (N=400) consisted of 195 males and 205 females; further stratification was based on the residential classification used by the municipalities. The study revealed that a person would believe news of an impending earthquake from any source (64.4%, model significance P=0.000). It also showed that a person would believe news of any other impending disaster from any source (73.1%, significance P=0.003). There is an association between background, risk perception, and preparedness. Emergency preparedness is weak, and earthquake awareness needs to be reinforced. There is a critical need for public education on earthquake preparedness. The authors recommend developing an emergency response program for earthquakes and standard operating procedures for national risk communication through all media, including instant bulk messaging. PMID:28299086
GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling
NASA Astrophysics Data System (ADS)
Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.
2017-12-01
On 25 November 2016, an Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. This was the largest earthquake to occur on the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtained the coseismic displacements of this earthquake. The site with the maximum displacement is located in the Muji Basin, 15 km south of the causative fault; the maximum deformation is 0.12 m of subsidence, with a coseismic displacement of 0.10 m. Our results indicate that the earthquake has the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we inverted for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones at depths of less than 20 km; the maximum displacement of one zone is 0.6 m and that of the other is 0.4 m. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is basically consistent with that from seismic waveform inversion, and is consistent with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this event should be 30-60 years, and the seismic risk of the eastern segment of the Muji fault warrants attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).
Source Rupture Process of the 2016 Kumamoto, Japan, Earthquake Inverted from Strong-Motion Records
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Zheng, Ao
2017-04-01
On 15 April 2016, a large earthquake with moment magnitude Mw 7.1 occurred in Kumamoto prefecture, Japan. The focal mechanism solution released by F-net located the hypocenter at 130.7630°E, 32.7545°N, at a depth of 12.45 km, and gave a strike, dip, and rake of the fault of N226°E, 84°, and -142°, respectively. The epicenter distribution and focal mechanisms of the aftershocks implied that the mechanism of the mainshock might have changed during the source rupture process, so a single focal mechanism was not enough to explain the observed data adequately. In this study, based on the inversion result of GNSS and InSAR surface deformation and with active structures for reference, we construct a finite fault model with focal mechanism changes and derive the source rupture process by a multi-time-window linear waveform inversion method using the strong-motion data (0.05-1.0 Hz) obtained by K-NET and KiK-net of Japan. Our result shows that the Kumamoto earthquake was a right-lateral strike-slip rupture event along the Futagawa-Hinagu fault zone, and that the seismogenic fault is divided into a northern segment and a southern one. The strike and dip of the northern segment are N235°E and 60°, respectively; for the southern one they are N205°E and 72°. The depth range of the fault model is consistent with the depth distribution of aftershocks, and the slip on the fault plane is concentrated mainly on the northern segment, where the maximum slip is about 7.9 m. The rupture process of the whole fault lasts approximately 18 s, and the total seismic moment released is 5.47×10^19 N·m (Mw 7.1). In addition, the essential features of the distributions of PGV and PGA synthesized from the inversion result are similar to those of the observed PGA and seismic intensity.
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.
2014-12-01
The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. More than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground-motion patterns attributed to source effects, allowing detailed information about the rupture process to be captured. The seismic stations surrounding the Miyagi region (e.g., MYGH013) show two clear, distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong-motion data performed by Lee et al. (2011). In this model two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori and Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden, sequential stress drops. Our model starts like an M7-8 earthquake that only weakly breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea-floor displacements are in agreement with 1 Hz GPS displacements (GEONET), and the seismograms agree roughly with seismic records along the coast of Japan. The simulated sea-floor displacement reaches 8-10 meters of uplift close to the trench, which may be the cause of the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a large uplift, we ran tsunami simulations with the slip reactivation model using sam(oa)2 (Meister et al., 2012), a state-of-the-art finite-volume framework, to simulate the resulting tsunami waves.
Barkan, R.; ten Brink, Uri S.; Lin, J.
2009-01-01
The great Lisbon earthquake of November 1st, 1755 with an estimated moment magnitude of 8.5-9.0 was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada). No reports were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This is in contrast with previously suggested sources such as the Marquês de Pombal Fault and the Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely to be a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested earthquake source of the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers for tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S. East Coast from possible future earthquakes along the Azores-Iberia plate boundary indicate that sources west of the MTR and in the Gulf of Cádiz may affect the southeastern coast of the U.S. The Azores-Iberia plate boundary west of the MTR is characterized by strike-slip faults, not thrusts, but the Gulf of Cádiz may have thrust faults. Southern Florida seems to be at risk from sources located east of the MTR and south of the Gorringe Bank, but it is mostly shielded by the Bahamas. Higher-resolution near-shore bathymetry along the U.S. East Coast and the Caribbean, as well as a detailed study of potential tsunami sources in the central west part of the Horseshoe Plain, are necessary to verify our simulation results. © 2008 Elsevier B.V.
Swarms of repeating stick-slip icequakes triggered by snow loading at Mount Rainier volcano
NASA Astrophysics Data System (ADS)
Allstadt, Kate; Malone, Stephen D.
2014-05-01
We have detected over 150,000 small (M < 1) low-frequency (1-5 Hz) repeating earthquakes over the past decade at Mount Rainier volcano, most of which were previously undetected. They are located high (>3000 m) on the glacier-covered edifice and occur primarily in weeklong to monthlong swarms composed of simultaneous distinct families of events. Each family contains up to thousands of earthquakes repeating at regular intervals as often as every few minutes. Mixed polarity first motions, a linear relationship between recurrence interval and event size, and strong correlation between swarm activity and snowfall suggest the source is stick-slip basal sliding of glaciers. The sudden added weight of snow during winter storms triggers a temporary change from smooth aseismic sliding to seismic stick-slip sliding in locations where basal conditions are favorable to frictional instability. Coda wave interferometry shows that source locations migrate over time at glacial speeds, starting out fast and slowing down over time, indicating a sudden increase in sliding velocity triggers the transition to stick-slip sliding. We propose a hypothesis that this increase is caused by the redistribution of basal fluids rather than direct loading because of a 1-2 day lag between snow loading and earthquake activity. This behavior is specific to winter months because it requires the inefficient drainage of a distributed subglacial drainage system. Identification of the source of these frequent signals offers a view of basal glacier processes, discriminates against alarming volcanic noises, documents short-term effects of weather on the cryosphere, and has implications for repeating earthquakes, in general.
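Repeating-event families like those described above are commonly identified by waveform cross-correlation against a template event. The sketch below is a minimal single-channel illustration of that generic approach; the detection threshold, channel count, and any pruning of overlapping detections are assumptions, not details from this study.

```python
import numpy as np

def match_template(data, template, threshold=0.8):
    """Detect repeating events by normalized cross-correlation of a template
    event against a continuous single-channel record (both sampled alike).
    Returns (sample index, correlation coefficient) pairs; overlapping
    detections above the threshold are not pruned here."""
    nt = len(template)
    tpl = (template - template.mean()) / template.std()
    detections = []
    for i in range(len(data) - nt):
        win = data[i:i + nt]
        std = win.std()
        if std == 0:
            continue
        cc = np.dot(tpl, (win - win.mean()) / std) / nt   # normalized correlation
        if cc >= threshold:
            detections.append((i, cc))
    return detections
```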
The 2011 Hawthorne, Nevada, Earthquake Sequence; Shallow Normal Faulting
NASA Astrophysics Data System (ADS)
Smith, K. D.; Johnson, C.; Davies, J. A.; Agbaje, T.; Knezevic Antonijevic, S.; Kent, G.
2011-12-01
An energetic sequence of shallow earthquakes that began in early March 2011 in western Nevada, near the community of Hawthorne, has slowly decreased in intensity through mid-2011. To date about 1300 reviewed earthquake locations have been compiled; we have computed moment tensors for the larger earthquakes and have developed a set of high-precision locations for all reviewed events. The sequence to date has included over 50 earthquakes ML 3 and larger with the largest at Mw 4.6. Three 6-channel portable stations configured with broadband sensors and accelerometers were installed by April 20. Data from the portable instruments are telemetered through NSL's microwave backbone to Reno where they are integrated with regional network data for real-time notifications, ShakeMaps, and routine event analysis. The data are provided in real-time to NEIC, CISN and the IRIS DMC. The sequence is located in a remote area about 15-20 km southwest of Hawthorne in the footwall block of the Wassuk Range fault system. An initial concern was that the sequence might be associated with volcanic processes due to the proximity of late Quaternary volcanic flows; there have been no volcanic signatures observed in near-source seismograms. An additional concern, as the sequence has proceeded, was a clear progression eastward toward the Wassuk Range front fault. The east-dipping range-bounding fault is capable of M 7+ events, and poses a significant hazard to the community of Hawthorne and local military facilities. The Hawthorne Army Depot is an ordnance storage facility and the nation's storage site for surplus mercury. The sequence is within what has been termed the 'Mina Deflection' of the Central Walker Lane Belt. Faulting along the Whiskey Flat section of the Wassuk front fault would be primarily down-to-the-east, with an E-W extension direction; moment tensors for the 2011 earthquakes show a range of extension directions from E-W to NW-SE, suggesting a possible dextral component to the Wassuk Range front fault at this latitude. At least two faults have been imaged within the sequence; these structures are at shallow depth (3-6 km), strike NE, and dip ~NW. Prior to temporary station installation, event depths were poorly constrained, with the nearest network station 25 km from the source area. Early sequence moment tensor solutions show depths on the order of 2-6 km, and locations using the near-source stations also confirm the shallow depths of the Hawthorne sequence. S-P times of 0.5 sec and less have been observed on a near-source station, illustrating extremely shallow source depths for some events. Along with the 2011 Hawthorne activity, very shallow depths in Nevada have been observed from near-source stations in the 2008 west Reno earthquake sequence (primarily strike-slip faulting; main shock Mw 5.0) and the 1993 Rock Valley sequence in southern NNSS (strike-slip faulting; main shock Mw 4.0). These shallow sequences tend to include high rates of low-magnitude earthquakes continuing over several months' duration.
Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data
NASA Astrophysics Data System (ADS)
Funning, G. J.; Cockett, R.
2012-12-01
InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median models. We envisage that the ensemble of contributed models will be useful both as a research resource and in the classroom. Locations of earthquakes derived from InSAR data have already been demonstrated to differ significantly from those obtained from global seismic networks (Weston et al., 2011), and the locations obtained by our users will enable us to identify systematic mislocations that are likely due to errors in Earth velocity models used to locate earthquakes. If the tool is incorporated into geophysics, tectonics and/or structural geology classes, in addition to familiarizing students with InSAR and elastic deformation modeling, the spread of different results for each individual earthquake will allow the teaching of concepts such as model uncertainty and non-uniqueness when modeling real scientific data. Additionally, the process students go through to optimize their estimates of fault parameters can easily be tied into teaching about the concepts of forward and inverse problems, which are common in geophysics.
NASA Astrophysics Data System (ADS)
Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.
2017-12-01
We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and the fully dynamic solver SPECFEM3D (Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake was chosen as the target earthquake to validate our methodology. The SCEC fault geometry for the three-segment Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and smaller variations of Dc represent mature faults. Moreover, we impose a taper of (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress; Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip, and combined area of asperities versus moment magnitude. Finally, the simulated ground motions will be validated by comparing simulated response spectra with recorded response spectra and with response spectra from ground-motion prediction models. This research is sponsored by the Japan Nuclear Regulation Authority.
A comparison study of 2006 Java earthquake and other Tsunami earthquakes
NASA Astrophysics Data System (ADS)
Ji, C.; Shao, G.
2006-12-01
We revise the slip process of the July 17, 2006 Java earthquake by jointly inverting teleseismic body waves, long-period surface waves, and the broadband records at Christmas Island (XMIS), which is 220 km from the hypocenter and is so far the closest observation of a tsunami earthquake. Compared with previous studies, our approach considers the amplitude variations of surface waves with source depth as well as the contribution of the ScS phase, which usually has amplitudes comparable to those of the direct S phase for such low-angle thrust earthquakes. The fault dip angles are also refined using the Love waves observed along the fault strike direction. Our results indicate that the 2006 event initiated at a depth of around 12 km and ruptured unilaterally southeastward for 150 s at a speed of 1.0 km/s. The revised fault dip is only about 6 degrees, smaller than the Harvard CMT value (10.5 degrees) but consistent with that of the 1994 Java earthquake. The smaller fault dip results in a larger moment magnitude (Mw = 7.9) for a PREM earth, although this value depends on the velocity structure used. After verification with 3D SEM forward simulations, we compare the inverted result with the revised slip models of the 1994 Java and 1992 Nicaragua earthquakes derived using the same wavelet-based finite-fault inversion methodology.
NASA Astrophysics Data System (ADS)
Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian
2018-01-01
We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.
NASA Astrophysics Data System (ADS)
Dziewonski, A. M.; Chou, T.-A.; Woodhouse, J. H.
1981-04-01
It is possible to use the waveform data not only to derive the source mechanism of an earthquake but also to establish the hypocentral coordinates of the `best point source' (the centroid of the stress glut density) at a given frequency. Thus two classical problems of seismology are combined into a single procedure. Given an estimate of the origin time, epicentral coordinates and depth, an initial moment tensor is derived using one of the variations of the method described in detail by Gilbert and Dziewonski (1975). This set of parameters represents the starting values for an iterative procedure in which perturbations to the elements of the moment tensor are found simultaneously with changes in the hypocentral parameters. In general, the method is stable and convergence is rapid. Although the approach is a general one, we present it here in the context of the analysis of long-period body wave data recorded by the instruments of the SRO and ASRO digital network. It appears that the upper magnitude limit of earthquakes that can be processed using this particular approach is between 7.5 and 8.0; the lower limit is, at this time, approximately 5.5, but it could be extended by broadening the passband of the analysis to include energy with periods shorter than 45 s. As there are hundreds of earthquakes each year with magnitudes exceeding 5.5, the seismic source mechanism can now be studied in detail not only for major events but also, for example, for aftershock series. We have investigated the foreshock and several aftershocks of the Sumba earthquake of August 19, 1977; the results show temporal variation of the stress regime in the fault area of the main shock. An area some 150 km to the northwest of the epicenter of the main event became seismically active 49 days later. The sense of the strike-slip mechanism of these events is consistent with the relaxation of the compressive stress in the plate north of the Java trench. Another geophysically interesting result of our analysis is that for 5 out of 11 earthquakes of intermediate and great depth the intermediate principal value of the moment tensor is significant, while for the remaining 6 it is essentially zero, which means that their mechanisms are consistent with a simple double-couple representation. There is a clear distinction between these two groups of earthquakes.
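The linear step of such an inversion, in which observed waveforms are fit by a combination of excitation kernels for the six independent moment-tensor elements, can be sketched as a least-squares problem. In the full iterative procedure described above, additional columns holding partial derivatives with respect to the centroid location and time would be appended at each iteration; the function below is a generic illustration of the linear step only, not the authors' code.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Linear least-squares step of a moment-tensor inversion: d ≈ G m,
    where the 6 columns of G are synthetic seismograms excited by unit
    moment-tensor elements, sampled at the same points as the data d."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)          # best-fitting tensor elements
    misfit = np.linalg.norm(d - G @ m) / np.linalg.norm(d)
    return m, misfit
```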
Compiling an earthquake catalogue for the Arabian Plate, Western Asia
NASA Astrophysics Data System (ADS)
Deif, Ahmed; Al-Shijbi, Yousuf; El-Hussain, Issa; Ezzelarab, Mohamed; Mohamed, Adel M. E.
2017-10-01
The Arabian Plate is surrounded by regions of relatively high seismicity. Accounting for this seismicity is of great importance for seismic hazard and risk assessments, seismic zoning, and land use. In this study, a homogeneous moment-magnitude (Mw) earthquake catalogue for the Arabian Plate is provided. The comprehensive and homogeneous catalogue spatially covers the entire Arabian Peninsula and neighboring areas, including all earthquake sources that can generate substantial hazard for the Arabian Plate mainland. The catalogue extends in time from AD 19 to 2015, with a total of 13,156 events, of which 497 are historical events. Four polygons covering the entire Arabian Plate were delineated, and different data sources including special studies and local, regional and international catalogues were used to prepare the earthquake catalogue. Moment magnitudes (Mw) provided by the original sources were given the highest priority among magnitude types and were introduced into the catalogue with their references. Earthquakes reported with magnitude types other than Mw were converted to this scale using empirical relationships derived in the current study or in previous studies. The four polygon catalogues were combined into two comprehensive earthquake catalogues covering the historical and instrumental periods. Duplicate events were identified and discarded from the catalogue. The resulting earthquake catalogue was declustered so that it contains only independent events, and its completeness with time was investigated for different magnitude ranges.
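Declustering of a homogenized catalogue is often done with magnitude-dependent space-time windows; the sketch below illustrates a generic window-based scheme using Gardner-Knopoff (1974)-style windows. The window coefficients, the flat-Earth distance approximation, and the largest-first processing order are illustrative assumptions; the abstract does not specify which declustering algorithm or which empirical magnitude-conversion relations the authors used.

```python
import numpy as np

def gk_windows(mw):
    """Gardner-Knopoff (1974)-style space (km) and time (days) windows."""
    dist = 10 ** (0.1238 * mw + 0.983)
    time = np.where(mw >= 6.5,
                    10 ** (0.032 * mw + 2.7389),
                    10 ** (0.5409 * mw - 0.547))
    return dist, time

def decluster(times_days, lons, lats, mags):
    """Flag dependent events (fore/aftershocks) inside the window of a larger
    event; returns a boolean mask of independent mainshocks."""
    km_per_deg = 111.2
    independent = np.ones(len(mags), dtype=bool)
    for i in np.argsort(mags)[::-1]:                 # process largest events first
        if not independent[i]:
            continue
        d_win, t_win = gk_windows(mags[i])
        dx = (lons - lons[i]) * km_per_deg * np.cos(np.radians(lats[i]))
        dy = (lats - lats[i]) * km_per_deg
        dependents = (np.hypot(dx, dy) <= d_win) \
            & (np.abs(times_days - times_days[i]) <= t_win) & (mags < mags[i])
        independent[dependents] = False
    return independent
```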
Impact of the 2008 Wenchuan earthquake on river organic carbon provenance: Insight from biomarkers
NASA Astrophysics Data System (ADS)
Wang, Jin; Feng, Xiaojuan; Hilton, Robert; Jin, Zhangdong; Ma, Tian; Zhang, Fei; Li, Gen; Densmore, Alexander; West, A. Joshua
2017-04-01
Large earthquakes can trigger widespread landslides in active mountain belts, which can mobilize biospheric organic carbon (OC) from soil and vegetation. Rivers can erode and export biospheric particulate organic carbon (POC), which represents an export of ecosystem productivity and may result in a CO2 sink if buried in sedimentary deposits. Our previous work showed that the 2008 Mw 7.9 Wenchuan earthquake increased the discharge of biospheric OC by rivers, due to the increased supply from earthquake-triggered landslides (Wang et al., 2016). However, while the OC derived from sedimentary rocks could be accounted for, the source of biospheric OC in rivers before and after the earthquake remained poorly constrained. Here we use suspended sediment samples collected from the Zagunao River before and after the Wenchuan earthquake and measure specific OC compounds, including fatty acids, lignin phenols and glycerol dialkyl glycerol tetraether (GDGT) lipids. In combination with analyses of bulk elemental concentrations (C and N) and carbon isotopic ratios, the new data show differential export patterns for OC components derived from different terrestrial sources. High-frequency sampling enabled us to explore how the biospheric OC source changed following the earthquake, helping to better understand the link between active tectonics and the carbon cycle. Our results are also important in revealing how sedimentary biomarker records may record past earthquakes.
What Can Sounds Tell Us About Earthquake Interactions?
NASA Astrophysics Data System (ADS)
Aiken, C.; Peng, Z.
2012-12-01
It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influence earthquakes can have on each other. Recent studies using auditory display [e.g., Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are important for understanding the underlying physics of earthquakes and of other seismic phenomena such as tremor, in addition to their source characteristics (e.g., frequency content, amplitudes). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e., aftershocks), or a large earthquake triggering earthquakes or tremor several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremor triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's-eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw 5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g., piano keys) as well as audifying seismic data using time compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to an understanding of how earthquakes occur, how earthquakes influence one another and tremor, and what the musical properties of these interactions can tell us about the source characteristics of earthquakes and tremor.
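Audification by time compression, as mentioned above, amounts to playing a seismogram back far faster than real time so that the signal falls in the audible band; a minimal Python sketch is below. The speed-up factor, normalization, and output format are illustrative choices rather than details taken from the cited studies.

```python
import numpy as np
from scipy.io import wavfile

def audify(trace, fs_seismic, speedup=200, out="quake.wav"):
    """Audify a seismogram by time compression: write the record at a sampling
    rate 'speedup' times the original so that ~0.01-10 Hz ground motion maps
    into the audible band when played back."""
    audio = trace - np.mean(trace)
    audio = audio / np.max(np.abs(audio))            # normalize to [-1, 1]
    fs_audio = int(fs_seismic * speedup)             # compressed playback rate
    wavfile.write(out, fs_audio, (audio * 32767).astype(np.int16))

# e.g. a 100 Hz seismometer record written at 20 kHz plays 200x faster than real time
```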
NASA Astrophysics Data System (ADS)
Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.
2016-12-01
For the forthcoming Nankai earthquake of M8 to M9 class, the Earthquake Research Committee (ERC) of the Headquarters for Earthquake Research Promotion, Japanese government (2013) showed 15 examples of earthquake source areas (ESAs) as possible combinations of 18 sub-regions (6 segments along the trough and 3 segments normal to the trough) and assessed the occurrence probability within the next 30 years (from Jan. 1, 2013) at 60% to 70%. Hirata et al. (2015, AGU) presented a Probabilistic Tsunami Hazard Assessment (PTHA) along the Nankai Trough for the case in which the diversity of the next event's ESA is modeled by only these 15 ESAs. In this study, we newly set 70 ESAs in addition to the previous 15 ESAs, so that a total of 85 ESAs is considered. By producing tens of fault models, with various slip distribution patterns, for each of the 85 ESAs, we obtain 2500 fault models in addition to the previous 1400 fault models, so that a total of 3900 fault models is used to represent the diversity of the next Nankai earthquake rupture (Toyama et al., 2015, JpGU). For the PTHA, the occurrence probability of the next Nankai earthquake is distributed over the 3900 possible fault models according to their similarity to the extents of the 15 ESAs (Abe et al., 2015, JpGU). The main concept of this probability distribution is: (i) earthquakes rupturing any of the 15 ESAs shown by ERC (2013) are most likely to occur; (ii) earthquakes rupturing an ESA whose along-trough extent is the same as one of the 15 ESAs but whose trough-normal extent differs are the second most likely; and (iii) earthquakes rupturing an ESA whose along-trough and trough-normal extents both differ from any of the 15 ESAs rarely occur. The procedures for tsunami simulation and probabilistic tsunami hazard synthesis are the same as in Hirata et al. (2015). A tsunami hazard map, synthesized under the assumption that Nankai earthquakes can be modeled as a renewal process based on a BPT distribution with a mean recurrence interval of 88.2 years (ERC, 2013) and an aperiodicity of 0.22 (the median of the values of 0.20 to 0.24 recommended by ERC, 2013), suggests that several coastal segments along the southwest coast of Shikoku Island, the southeast coast of the Kii Peninsula, and the west coast of the Izu Peninsula have an exceedance probability of over 26% that the maximum water rise exceeds 10 meters at some coastal point within the next 30 years.
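The renewal-model probability used above can be sketched from the BPT (inverse-Gaussian) distribution: a 30-year conditional rupture probability computed from the mean recurrence interval and aperiodicity quoted in the abstract. The elapsed time since the previous event is a hypothetical input here, and the weighting of the probability across the 3900 fault models and the tsunami-height exceedance calculation are outside this sketch.

```python
from scipy.stats import invgauss

def bpt_conditional_prob(mean_ri, aperiodicity, elapsed, horizon=30.0):
    """Conditional rupture probability within 'horizon' years under a BPT
    renewal model. A BPT distribution with mean mu and aperiodicity alpha is
    an inverse Gaussian with mean mu and shape lambda = mu / alpha**2."""
    lam = mean_ri / aperiodicity ** 2
    dist = invgauss(mu=mean_ri / lam, scale=lam)   # scipy parameterization
    F = dist.cdf
    return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

# Example with the abstract's recurrence parameters and a hypothetical elapsed time:
# bpt_conditional_prob(mean_ri=88.2, aperiodicity=0.22, elapsed=70.0)
```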
Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations
NASA Astrophysics Data System (ADS)
O'Toole, Thomas Bartholomew
The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations, and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for that data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings - made by conventional seismometers deployed at the surface, or by geophone arrays placed in boreholes - to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occur.
NASA Astrophysics Data System (ADS)
Badawy, Ahmed; Horváth, Frank; Tóth, László
2001-01-01
From January 1995 to December 1997, about 74 earthquakes were located in the Pannonian basin and digitally recorded by a recently established network of seismological stations in Hungary. Among the notable events, about 12 earthquakes were reported as felt, with maximum intensities varying between 4 and 6 MSK. The dynamic source parameters of these earthquakes have been derived from P-wave displacement spectra. The displacement source spectra obtained are characterised by relatively small corner frequencies (f0) ranging between 2.5 and 10 Hz. The seismic moments range from 1.48×10^20 to 1.3×10^23 dyne·cm, stress drops from 0.25 to 76.75 bar, fault lengths from 0.42 to 1.7 km, and relative displacements from 0.05 to 15.35 cm. The estimated source parameters are in good agreement with the scaling law for small earthquakes. The small stress drops of the studied earthquakes can be attributed to the low strength of crustal materials in the Pannonian basin; however, the stress drop values do not differ between earthquakes with thrust and normal faulting focal mechanism solutions. It can be speculated that an increase of seismic activity in the Pannonian basin is to be expected in the long run, because extensional development has ceased and structural inversion is in progress. Seismic hazard assessment remains a delicate task due to inadequate knowledge of the seismo-active faults, particularly in the interior part of the Pannonian basin.
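Source dimensions and stress drops like those quoted above are commonly estimated from the seismic moment and spectral corner frequency with the Brune (1970) model; the sketch below shows that calculation in the CGS units used in the abstract. The shear-wave velocity and the Brune constant 2.34 are standard model assumptions, not values stated by the authors.

```python
import numpy as np

def brune_source_params(m0_dyne_cm, f0_hz, beta_km_s=3.5):
    """Brune-model source radius and stress drop from seismic moment and
    corner frequency. Inputs in dyne·cm and Hz; returns radius (km) and
    stress drop (bar)."""
    beta_cm_s = beta_km_s * 1.0e5
    r_cm = 2.34 * beta_cm_s / (2.0 * np.pi * f0_hz)          # Brune source radius
    stress_drop_bar = 7.0 * m0_dyne_cm / (16.0 * r_cm ** 3) / 1.0e6
    return r_cm / 1.0e5, stress_drop_bar

# e.g. brune_source_params(1.3e23, 2.5) for the larger events in the study
```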
NASA Astrophysics Data System (ADS)
Zha, X.; Dai, Z.; Lu, Z.
2015-12-01
The 2011 Hawthorne earthquake swarm occurred in the central Walker Lane zone, near the border between California and Nevada. The swarm included an Mw 4.4 event on April 13, an Mw 4.6 event on April 17, and an Mw 3.9 event on April 27. Because of the lack of near-field seismic instruments, it is difficult to obtain accurate source information from the seismic data for these moderate-magnitude events. ENVISAT InSAR observations captured the deformation caused mainly by three events during the 2011 Hawthorne earthquake swarm. The surface traces of the three seismogenic sources could be identified from the local topography and interferogram phase discontinuities, and the epicenters could be determined using the interferograms and the relocated earthquake distribution. An apparent earthquake migration is revealed by the InSAR observations and the earthquake distribution. Analysis and modeling of the InSAR data show that the three moderate-magnitude earthquakes were produced by slip on three previously unrecognized faults in the central Walker Lane. Two of the seismogenic sources are northwest-striking, right-lateral strike-slip faults with some thrust-slip component, and the other is a northeast-striking, thrust-slip fault with some strike-slip component. The former two faults are roughly parallel to each other and almost perpendicular to the latter. This spatial relationship among the three seismogenic faults, together with their nature, suggests that the central Walker Lane has been undergoing southeast-northwest horizontal compressive deformation, consistent with the regional crustal movement revealed by GPS measurements. The Coulomb failure stresses on the fault planes were calculated using the preferred slip model and the Coulomb 3.4 software package. For the Mw 4.6 earthquake, the Coulomb stress change caused by the Mw 4.4 event was an increase of ~0.1 bar; for the Mw 3.9 event, the Coulomb stress change caused by the Mw 4.6 earthquake was an increase of ~1.0 bar. This indicates that each preceding earthquake may have triggered the subsequent one. Because no anomalous volcanic activity was observed during the 2011 Hawthorne earthquake swarm, we can rule out volcanic activity as the cause of these events; however, groundwater changes and mining in the epicentral zone may have contributed to the 2011 Hawthorne earthquake swarm.
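The Coulomb stress changes quoted above follow the usual failure criterion ΔCFS = Δτ + μ′Δσn; the toy function below illustrates that final step only. Resolving the full stress tensor from the slip model onto each receiver fault plane, which the study does with Coulomb 3.4, is outside this sketch, and the numbers in the usage comment are hypothetical.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with d_tau the shear stress change in the
    slip direction and d_sigma_n the normal stress change (positive = unclamping)."""
    return d_shear + mu_eff * d_normal

# e.g. a 0.08 bar shear stress increase plus 0.05 bar of unclamping with an
# effective friction of 0.4 gives dCFS = 0.10 bar, comparable in size to the
# change quoted for the Mw 4.6 fault plane (hypothetical decomposition).
```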
Global variations of large megathrust earthquake rupture characteristics
Kanamori, Hiroo
2018-01-01
Despite the surge of great earthquakes along subduction zones over the last decade and advances in observations and analysis techniques, it remains unclear whether earthquake complexity is primarily controlled by persistent fault properties or by dynamics of the failure process. We introduce the radiated energy enhancement factor (REEF), given by the ratio of an event's directly measured radiated energy to the calculated minimum radiated energy for a source with the same seismic moment and duration, to quantify the rupture complexity. The REEF measurements for 119 large [moment magnitude (Mw) 7.0 to 9.2] megathrust earthquakes distributed globally show marked systematic regional patterns, suggesting that the rupture complexity is strongly influenced by persistent geological factors. We characterize this as the existence of smooth and rough rupture patches with varying interpatch separation, along with failure dynamics producing triggering interactions that augment the regional influences on large events. We present an improved asperity scenario incorporating both effects and categorize global subduction zones and great earthquakes based on their REEF values and slip patterns. Giant earthquakes rupturing over several hundred kilometers can occur in regions with low-REEF patches and small interpatch spacing, such as for the 1960 Chile, 1964 Alaska, and 2011 Tohoku earthquakes, or in regions with high-REEF patches and large interpatch spacing as in the case for the 2004 Sumatra and 1906 Ecuador-Colombia earthquakes. Thus, combining seismic magnitude Mw and REEF, we provide a quantitative framework to better represent the span of rupture characteristics of great earthquakes and to understand global seismicity.
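As a rough illustration of the REEF concept, the sketch below computes radiated energy from a moment-rate function with a textbook far-field energy integral and compares it with the energy of a smooth source of the same moment and duration. The prefactor, the velocity-model constants, and the use of a parabolic moment-rate function as the minimum-energy reference are assumptions of this sketch, not details taken from the abstract.

```python
# Hedged sketch of the REEF idea: ratio of radiated energy to the minimum energy
# attainable by a source of the same seismic moment M0 and duration T.  The
# far-field energy integral and the parabolic "minimum-energy" moment-rate
# function used here are textbook-style assumptions for illustration only.
import numpy as np

RHO, ALPHA, BETA = 3300.0, 6800.0, 3900.0   # assumed source-region density (kg/m^3), Vp, Vs (m/s)
C = 1.0 / (15 * np.pi * RHO * ALPHA**5) + 1.0 / (10 * np.pi * RHO * BETA**5)

def radiated_energy(t, moment_rate):
    """Far-field radiated energy from a moment-rate time series (N*m/s)."""
    mr_dot = np.gradient(moment_rate, t)        # derivative of the moment rate, i.e. M''(t)
    return C * np.trapz(mr_dot**2, t)

def reef(t, moment_rate):
    m0 = np.trapz(moment_rate, t)               # seismic moment
    T = t[-1] - t[0]                            # source duration
    e_r = radiated_energy(t, moment_rate)
    e_min = C * 12.0 * m0**2 / T**3             # parabolic moment-rate with same M0 and T
    return e_r / e_min

if __name__ == "__main__":
    # Hypothetical two-pulse (complex) source versus a smooth one, same M0 and T
    t = np.linspace(0.0, 60.0, 6001)
    smooth = np.sin(np.pi * t / 60.0)**2
    complex_src = smooth * (1.0 + 0.8 * np.sin(2 * np.pi * t / 10.0))
    for name, mr in [("smooth", smooth), ("complex", complex_src)]:
        print(name, "REEF =", round(reef(t, 1e20 * mr), 2))
```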
Site Response for Micro-Zonation from Small Earthquakes
NASA Astrophysics Data System (ADS)
Gospe, T. B.; Hutchings, L.; Liou, I. Y. W.; Jarpe, S.
2017-12-01
We have developed a method to obtain absolute geologic site response from small earthquakes using inexpensive instrumentation that enables us to perform micro-zonation inexpensively and in a short amount of time. We record small earthquakes (M<3) at several sites simultaneously and perform an inversion to obtain actual absolute site response. The key to the inversion is that recordings at several stations from an earthquake share the same moment, source corner frequency and whole-path Q effect on their spectra, but have individual kappa and spectral amplification as a function of frequency. When these source and path effects are removed and corrections for different propagation distances are performed, we are left with actual site response. We develop site response functions from 0.5 to 25.0 Hz. Cities situated near active and dangerous faults experience small earthquakes on a regular basis. We typically record at least ten small earthquakes over time to stabilize the uncertainty. Of course, dynamic soil modeling is necessary to scale our linear site response to the non-linear regime for large earthquakes. Our instrumentation is very inexpensive and virtually disposable, and can be placed throughout a city at high density. Operation only requires turning on a switch, and data processing is automated to minimize human labor. We have installed a test network and implemented our full methodology in upper Napa Valley, California, where there is variable geology, nearby rock outcrop sites, and a supply of small earthquakes from the nearby Geysers development area. We test several methods of obtaining site response. We found that rock sites have a site response of their own and distort site response estimates based upon spectral ratios with soil sites. Also, rock sites may not even be available near all sites throughout a city. Further, H/V site response estimates from earthquakes are marginally better, but vertical motion also has a site response of its own. H/V spectral ratios of noise do not provide accurate site response estimates either. Vs30 only provides one amplification number and does not account for the variable three-dimensional structure beneath sites. We conclude that absolute site response obtained directly from earthquakes is the best, and possibly the only, way to get accurate site response estimates.
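A minimal sketch of the spectral model that underlies such an inversion is given below: each recorded small-earthquake spectrum is treated as the product of a shared Brune source term, a whole-path attenuation term, and a station-specific kappa and amplification term. The functional forms and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the spectral model underlying the
# inversion described above: each recorded spectrum is a product of a shared
# Brune source term, a whole-path attenuation term, and a station-specific
# kappa and amplification term.  All parameter values are hypothetical.
import numpy as np

def model_spectrum(f, omega0, fc, travel_time, q, kappa, site_amp):
    source = omega0 / (1.0 + (f / fc)**2)              # Brune omega-square source
    path = np.exp(-np.pi * f * travel_time / q)        # whole-path anelastic attenuation
    site = site_amp * np.exp(-np.pi * kappa * f)       # near-site amplification and kappa
    return source * path * site

# In the inversion, taking logs turns the product into a sum, so the shared
# source/path terms and the per-station site terms can be separated by linear
# least squares once the corner frequency is fixed or searched over:
#   log A_ij(f) = log S_i(f) - pi*f*t_ij/Q + log G_j(f) - pi*kappa_j*f
f = np.logspace(np.log10(0.5), np.log10(25.0), 100)    # 0.5-25 Hz band used in the study
example = model_spectrum(f, omega0=1e-3, fc=8.0, travel_time=5.0,
                         q=200.0, kappa=0.04, site_amp=2.5)
```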
The music of earthquakes and Earthquake Quartet #1
Michael, Andrew J.
2013-01-01
Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.
Probabilistic seismic hazard zonation for the Cuban building code update
NASA Astrophysics Data System (ADS)
Garcia, J.; Llanes-Buron, C.
2013-05-01
A probabilistic seismic hazard assessment has been performed in response to a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been done according to the standard probabilistic approach (Cornell, 1968), adopting the procedures used by other nations dealing with the problem of revising and updating their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, and seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the seismic hazard estimation process. The seismic zonation proposed here consists of maps of spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions for a 1642-year return period, which is considered the maximum credible earthquake (ASCE 07-05). In addition, three other design levels are proposed: a severe earthquake with an 808-year return period, an ordinary earthquake with a 475-year return period, and a minimum earthquake with a 225-year return period. The seismic zonation proposed here complies with international standards (IBC-ICC) as well as with current worldwide practice in this field.
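For readers comparing the design levels, the return periods quoted above follow from exceedance probabilities under the standard Poisson assumption; the sketch below shows the conversion for an assumed 50-year exposure time (the exposure time is an assumption of this illustration, not stated in the abstract).

```python
# Sketch of the standard Poisson conversion between exceedance probability over an
# exposure time and return period.  The 50-year exposure time is an assumption of
# this illustration; the abstract itself only quotes the return periods.
import math

def return_period(p_exceed, exposure_years=50.0):
    """Return period (years) for probability p_exceed of exceedance in exposure_years."""
    return -exposure_years / math.log(1.0 - p_exceed)

for label, p in [("maximum credible (~3% in 50 yr)", 0.03),
                 ("severe (~6% in 50 yr)", 0.06),
                 ("ordinary (10% in 50 yr)", 0.10),
                 ("minimum (20% in 50 yr)", 0.20)]:
    print(f"{label}: ~{return_period(p):.0f}-year return period")
# prints roughly 1642, 808, 475, and 224 years, matching the design levels above
```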
NASA Astrophysics Data System (ADS)
Derode, B.; Riquelme, S.; Ruiz, J. A.; Leyton, F.; Campos, J. A.; Delouis, B.
2014-12-01
The intermediate-depth earthquakes of high moment magnitude (Mw ≥ 8) in Chile have had a relatively greater impact, in terms of damage, injuries and deaths, than thrust-type events of similar magnitude (e.g. 1939, 1950, 1965, 1997, 2003, and 2005). Some of them have been studied in detail, showing a paucity of aftershocks, down-dip tensional focal mechanisms, high stress drop and subhorizontal rupture. At present, their physical mechanism remains unclear because ambient temperatures and pressures are expected to lead to ductile, rather than brittle, deformation. We examine the source characteristics of more than 100 intraslab intermediate-depth earthquakes using local and regional waveform data obtained from broadband and accelerometer stations of the IPOC network in northern Chile. With this high-quality database, we estimated the total radiated energy from the energy flux carried by P and S waves, integrating this flux in time and space, and evaluated the seismic moment directly from both spectral-amplitude and near-field waveform inversion methods. We estimated the three parameters Ea, τa and M0 because their estimates entail no model dependence. Interestingly, the seismic nest studied using near-field relocation and only data from stations close to the source (D < 250 km) appears not to be homogeneous in depth, displaying unusual seismic gaps along the Wadati-Benioff zone. Moreover, as confirmed by other studies of intermediate-depth earthquakes in subduction zones, very high stress drops (>> 10 MPa) and low radiation efficiency were found in this seismic nest. These unusual values of the seismic parameters can be interpreted as the expression of the loss of a large part of the emitted energy to heating processes during rupture. Although it remains difficult to draw firm conclusions about the processes of seismic nucleation, we present results that seem to support thermal weakening of the fault zones and the existence of thermal stress processes such as thermal shear runaway as a preferred mechanism for triggering intermediate-depth earthquakes. Despite the non-exhaustive scope of this study, the data presented here point to the need for new systematic near-field studies to obtain robust conclusions and to constrain more accurately the physics of the rupture mechanisms of these intermediate-depth seismic events.
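Two of the derived quantities discussed above, apparent stress and radiation efficiency, can be sketched from the moment, radiated energy, and stress drop. The relations and the rigidity value below are standard textbook forms used here for illustration; they are not necessarily the exact formulation of the study.

```python
# Hedged sketch of standard source-parameter combinations: apparent stress and
# radiation efficiency from radiated energy Ea, seismic moment M0, and a stress-drop
# estimate.  Numbers below are hypothetical and the rigidity is an assumed value,
# not taken from the study.
MU = 6.5e10                      # assumed rigidity at intermediate depth (Pa)

def apparent_stress(e_r, m0, mu=MU):
    return mu * e_r / m0         # Pa

def radiation_efficiency(e_r, m0, stress_drop, mu=MU):
    # eta_R = 2*mu*Ea / (delta_sigma * M0), i.e. 2*sigma_apparent / delta_sigma
    return 2.0 * apparent_stress(e_r, m0, mu) / stress_drop

# Hypothetical event: M0 = 1e18 N*m (Mw ~ 6), Ea = 2e13 J, stress drop = 30 MPa
e_r, m0, dsig = 2.0e13, 1.0e18, 30.0e6
print("apparent stress [MPa]:", apparent_stress(e_r, m0) / 1e6)
print("radiation efficiency:", round(radiation_efficiency(e_r, m0, dsig), 2))
# a small efficiency indicates most of the available energy was not radiated,
# consistent with the dissipative (thermal) interpretation above
```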
Seismic gaps and source zones of recent large earthquakes in coastal Peru
Dewey, J.W.; Spence, W.
1979-01-01
The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station adjustments computed by the method of joint hypocenter determination. © 1979 Birkhäuser Verlag.
Ischia Island: Historical Seismicity and Dynamics
NASA Astrophysics Data System (ADS)
Carlino, S.; Cubellis, E.; Iannuzzi, R.; Luongo, G.; Obrizzo, F.
2003-04-01
The seismic energy release in volcanic areas is a complex process, and the island of Ischia provides a significant example of historical seismicity, characterized by the occurrence of earthquakes of low energy and high intensity. Information on the seismicity of the island spans about eight centuries, starting from 1228. With regard to effects, the most recent earthquake, in 1883, is extensively documented both in the literature and in unpublished sources. The earthquake caused 2333 deaths and the destruction of the historical and environmental heritage of some areas of the island; the most severe damage occurred in Casamicciola. This event, which was the first great catastrophe after the unification of Italy in the 1860s (Imax = XI MCS), represents an important date in the prevention of natural disasters, in that it was after this earthquake that the first Seismic Safety Act in Italy was passed, by which lower-risk zones were identified for new settlements. Thanks to such detailed analysis, reliable modelling of the seismic source was also obtained. The historical data make it possible to identify the epicentral area of all known earthquakes as the northern slope of Monte Epomeo, while analysis of the effects of earthquakes and the geological structures allows us to evaluate the stress fields that generate the earthquakes. In a volcanic area, interpretation of the mechanisms of release and propagation of seismic energy is made even more complex because the regional stress field is compounded by that generated by the migration of magmatic masses towards the surface, as well as by the rheological properties of the rocks, which depend on the high geothermal gradient. Such structural and dynamic conditions make the island of Ischia a seismic area of considerable interest. It appears necessary to evaluate the damage expected from a new event linked to renewed dynamics of the island, where high population density and high economic value (the island is a tourist destination and holiday resort) increase the seismic risk. A seismic hazard map of the island is proposed, based on a comparative analysis of various types of data: geology, tectonics, historical seismicity and the damage caused by the 28 July 1883 Casamicciola earthquake. The analysis was essentially based on a GIS-aided cross-correlation of these data. The GIS is thus able both to support in-depth analysis of the dynamic processes on the island and to extend the assessment to other natural risks (volcanic, landslides, flooding, etc.).
The effect of segmented fault zones on earthquake rupture propagation and termination
NASA Astrophysics Data System (ADS)
Huang, Y.
2017-12-01
A fundamental question in earthquake source physics is what controls the nucleation and termination of an earthquake rupture. Besides stress heterogeneities and variations in frictional properties, damaged fault zones (DFZs) that surround major strike-slip faults can contribute significantly to earthquake rupture propagation. Previous earthquake rupture simulations usually characterize DFZs as several-hundred-meter-wide layers with lower seismic velocities than the host rocks, and find that earthquake ruptures in DFZs can exhibit slip pulses and oscillating rupture speeds that ultimately enhance high-frequency ground motions. However, real DFZs are more complex than uniform low-velocity structures and show along-strike variations of damage that may be correlated with historical earthquake ruptures. These segmented structures can either prohibit or assist rupture propagation and significantly affect the final sizes of earthquakes. For example, recent dense-array data recorded at the San Jacinto fault zone suggest the existence of three prominent DFZs across the Anza seismic gap and the south section of the Clark branch, while no prominent DFZs were identified near the ends of the Anza seismic gap. To better understand earthquake rupture in segmented fault zones, we will present dynamic rupture simulations that calculate the time-varying rupture process physically by considering the interactions between fault stresses, fault frictional properties, and material heterogeneities. We will show that whether an earthquake rupture can break through the intact rock outside the DFZ depends on the nucleation size of the earthquake and the rupture propagation distance within the DFZ. Moreover, the material properties of the DFZ, the stress conditions along the fault, and the frictional properties of the fault also have a critical impact on rupture propagation and termination. We will also present scenarios of San Jacinto earthquake ruptures and show the parameter space that is favorable for rupture propagation through the Anza seismic gap. Our results suggest that a priori knowledge of the properties of segmented fault zones is of great importance for predicting the sizes of future large earthquakes on major faults.
NASA Astrophysics Data System (ADS)
Monsalve-Jaramillo, Hugo; Valencia-Mina, William; Cano-Saldaña, Leonardo; Vargas, Carlos A.
2018-05-01
Source parameters of four earthquakes located within the Wadati-Benioff zone of the Nazca plate subducting beneath the South American plate in Colombia were determined. The seismic moments for these events were recalculated, and their approximate equivalent rupture areas, slip distributions and stress drops were estimated. The source parameters were obtained by deconvolving multiple events through teleseismic analysis of body waves recorded at long-period stations and by simultaneous inversion of P and SH waves. The calculated source time functions for these events showed different stages, suggesting that these earthquakes can reasonably be thought of as being composed of two subevents. Even though two of the overall focal mechanisms obtained yielded results similar to those reported in the CMT catalogue, the other two mechanisms showed a clear difference from those officially reported. Despite this, it is appropriate to mention that the mechanisms inverted in this work agree well with the expected orientation of faulting at that depth as well as with the waveforms they are expected to produce. In some of the solutions, one of the two subevents exhibited a focal mechanism considerably different from the total earthquake mechanism; this could be interpreted as the result of a slight deviation from the overall motion due to the complex stress field, as well as the possibility of a combination of different sources of energy release analogous to those that may occur in deeper earthquakes. In those cases, the subevents with focal mechanisms very different from the total earthquake mechanism contributed little to the final solution and thus little to the total amount of energy released.
Earthquake-induced ground failures in Italy from a reviewed database
NASA Astrophysics Data System (ADS)
Martino, S.; Prestininzi, A.; Romeo, R. W.
2013-05-01
A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground-level changes triggered by earthquakes of Mercalli intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (URL: http://www.ceri.uniroma1.it/cn/index.do?id=230&page=55) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the "Sapienza" University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground-level changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.
Temporal Variation of Tectonic Tremor Activity Associated with Nearby Earthquakes
NASA Astrophysics Data System (ADS)
Chao, K.; Van der Lee, S.; Hsu, Y. J.; Pu, H. C.
2017-12-01
Tectonic tremor and slow slip events, located downdip of the seismogenic zone, hold a key to the recurring patterns of typical earthquakes. Several findings of slow aseismic slip during the prenucleation processes of nearby earthquakes have provided new insight into the stress transfer of slow earthquakes in fault zones prior to megathrust earthquakes. However, how tectonic tremor is associated with the occurrence of nearby earthquakes remains unclear. To enhance our understanding of the stress interaction between tremor and earthquakes, we developed an algorithm for the automatic detection and location of tectonic tremor in the collisional tectonic environment of Taiwan. Our analysis of a three-year data set indicates a short-term increase in the tremor rate starting 19 days before the 2010 ML 6.4 Jiashian main shock (Chao et al., JGR, 2017). Around the time when the tremor rate began to rise, one GPS station recorded a flip in its direction of motion. We hypothesize that the tremor is driven by a slow-slip event that preceded the occurrence of the shallower nearby main shock, even though the inferred slip is too small to be observed at all GPS stations. To better quantify the conditions necessary for tremor to respond to nearby earthquakes, we obtained a 13-year ambient tremor catalog from 2004 to 2016 in the same region. We examine the spatiotemporal relationship between tremor and 37 nearby ML ≥ 5.0 earthquakes (seven events with ML ≥ 6.0) located within 0.5° of the active tremor sources. The findings from this study can enhance our understanding of the interaction among tremor, slow slip, and nearby earthquakes in regions of high seismic hazard.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge of source scaling at small magnitudes (i.e., mb < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects. In contrast, coda waves average radiation from all directions so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time-functions by frequency division. If an earthquake and EGF event pair are able to produce a clear, time-domain source pulse then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events.
The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes whereas the event-pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation. We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
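The core of both the direct-wave EGF and coda spectral-ratio approaches is fitting the ratio of two omega-square source spectra to an observed large-event/small-event ratio. A hedged sketch of that fit, with synthetic data and assumed corner frequencies rather than values from the study, is given below.

```python
# Sketch (not the authors' code) of fitting a ratio of two omega-square source
# spectra to an observed large-event/small-event spectral ratio, as used in both
# the direct-wave EGF and coda spectral-ratio approaches described above.
import numpy as np
from scipy.optimize import curve_fit

def omega_square_ratio(f, moment_ratio, fc_large, fc_small):
    return moment_ratio * (1.0 + (f / fc_small)**2) / (1.0 + (f / fc_large)**2)

# Hypothetical observed ratio between 0.2 and 15 Hz with a little noise
f = np.logspace(np.log10(0.2), np.log10(15.0), 60)
true = omega_square_ratio(f, 300.0, 1.0, 8.0)
rng = np.random.default_rng(0)
observed = true * (1.0 + 0.05 * rng.standard_normal(f.size))

# Fitting in log space is common; a direct fit is shown here for brevity
popt, _ = curve_fit(omega_square_ratio, f, observed, p0=[100.0, 0.5, 5.0])
moment_ratio, fc_large, fc_small = popt
print(f"moment ratio ~{moment_ratio:.0f}, corner frequencies ~{fc_large:.2f} Hz and ~{fc_small:.1f} Hz")
```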
A Bayesian Approach to Real-Time Earthquake Phase Association
NASA Astrophysics Data System (ADS)
Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.
2014-12-01
Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different: both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
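For contrast with the Bayesian graph formulation, the classical time-space stacking idea mentioned above can be sketched as follows: each trial hypocenter converts every arrival into an implied origin time, and a tight cluster of origin times flags an event. The homogeneous velocity model, tolerance, and data in this sketch are illustrative assumptions.

```python
# Sketch of the classical time-space stacking association the abstract contrasts
# with its Bayesian graph approach: each trial hypocenter converts every arrival
# into an implied origin time, and a tight cluster of origin times indicates an
# event.  Velocity model, tolerance, and all data below are hypothetical.
import numpy as np

V_P = 6.0  # km/s, assumed homogeneous P velocity

def associate(arrivals, stations, grid, tol=1.0, min_phases=4):
    """arrivals: (n,) arrival times (s); stations: (n,3) km; grid: (m,3) trial hypocenters (km)."""
    events = []
    for node in grid:
        tt = np.linalg.norm(stations - node, axis=1) / V_P    # predicted travel times
        origins = arrivals - tt                               # implied origin times
        t0 = np.median(origins)
        members = np.abs(origins - t0) < tol                  # phases consistent with t0
        if members.sum() >= min_phases:
            events.append((t0, node, np.flatnonzero(members)))
    return events

# Hypothetical example: 5 stations, one source at (10, 20, 8) km, origin time 100 s
stations = np.array([[0, 0, 0], [30, 5, 0], [15, 40, 0], [-20, 25, 0], [5, -30, 0]], float)
src = np.array([10.0, 20.0, 8.0])
arrivals = 100.0 + np.linalg.norm(stations - src, axis=1) / V_P
grid = np.array([[x, y, z] for x in range(0, 31, 10) for y in range(0, 41, 10) for z in (5, 10)], float)
print(associate(arrivals, stations, grid)[:2])
```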
NASA Astrophysics Data System (ADS)
Kästle, Emanuel D.; Soomro, Riaz; Weemstra, Cornelis; Boschi, Lapo; Meier, Thomas
2016-12-01
Phase velocities derived from ambient-noise cross-correlation are compared with phase velocities calculated from cross-correlations of waveform recordings of teleseismic earthquakes whose epicentres lie approximately on the station-station great circle. The comparison is conducted for both Rayleigh and Love waves using over 1000 station pairs in central Europe. We describe in detail our signal-processing method, which allows for automated processing of large amounts of data. Ambient-noise data are collected in the 5-80 s period range, whereas teleseismic data are available between about 8 and 250 s, resulting in a broad common period range between 8 and 80 s. At intermediate periods around 30 s and for shorter interstation distances, phase velocities measured from ambient noise are on average between 0.5 per cent and 1.5 per cent lower than those observed via the earthquake-based method. This discrepancy is small compared to typical phase-velocity heterogeneities (10 per cent peak-to-peak or more) observed in this period range. We nevertheless conduct a suite of synthetic tests to evaluate whether known biases in ambient-noise cross-correlation measurements could account for this discrepancy; we specifically evaluate the effects of heterogeneities in source distribution, of azimuthal anisotropy in surface-wave velocity and of the presence of near-field, rather than far-field only, sources of seismic noise. We find that these effects can be quite important when comparing individual station pairs. The systematic discrepancy is presumably due to a combination of factors related to differences in the sensitivity of earthquake versus noise data to lateral heterogeneity. The data sets from both methods are used to create preliminary tomographic maps that are characterized by velocity heterogeneities of similar amplitude and pattern, confirming the overall agreement between the two measurement methods.
Kirby, Stephen; Scholl, David; von Huene, Roland E.; Wells, Ray
2013-01-01
Tsunami modeling has shown that tsunami sources located along the Alaska Peninsula segment of the Aleutian-Alaska subduction zone have the greatest impacts on southern California shorelines by raising the highest tsunami waves for a given source seismic moment. The most probable sector for a Mw ~ 9 source within this subduction segment is between Kodiak Island and the Shumagin Islands in what we call the Semidi subduction sector; these bounds represent the southwestern limit of the 1964 Mw 9.2 Alaska earthquake rupture and the northeastern edge of the Shumagin sector that recent Global Positioning System (GPS) observations indicate is currently creeping. Geological and geophysical features in the Semidi sector that are thought to be relevant to the potential for large magnitude, long-rupture-runout interplate thrust earthquakes are remarkably similar to those in northeastern Japan, where the destructive Mw 9.1 tsunamigenic earthquake of 11 March 2011 occurred. In this report we propose and justify the selection of a tsunami source seaward of the Alaska Peninsula for use in the Tsunami Scenario that is part of the U.S. Geological Survey (USGS) Science Application for Risk Reduction (SAFRR) Project. This tsunami source should have the potential to raise damaging tsunami waves on the California coast, especially at the ports of Los Angeles and Long Beach. Accordingly, we have summarized and abstracted slip distribution from the source literature on the 2011 event, the best characterized for any subduction earthquake, and applied this synoptic slip distribution to the similar megathrust geometry of the Semidi sector. The resulting slip model has an average slip of 18.6 m and a moment magnitude of Mw = 9.1. The 2011 Tohoku earthquake was not anticipated, despite Japan having the best seismic and geodetic networks in the world and the best historical record in the world over the past 1,500 years. What was lacking was adequate paleogeologic data on prehistoric earthquakes and tsunamis, a data gap that also presently applies to the Alaska Peninsula and the Aleutian Islands. Quantitative appraisal of potential tsunami sources in Alaska requires such investigations.
Seismological investigation of September 09 2016, North Korea underground nuclear test
NASA Astrophysics Data System (ADS)
Gaber, H.; Elkholy, S.; Abdelazim, M.; Hamama, I. H.; Othman, A. S.
2017-12-01
On Sep. 9, 2016, a seismic event of mb 5.3 took place in North Korea and was reported as a nuclear test. In this study, we applied a number of discriminant techniques that help distinguish between explosions and earthquakes on the Korean Peninsula. The differences between explosions and earthquakes are due to variations in source dimension, focal depth and source mechanism, or a combination of these. There are many seismological differences between nuclear explosions and earthquakes, but not all of them are detectable at large distances or applicable to every earthquake and explosion. The discrimination methods used in the current study include the seismic source location, source depth, differences in frequency content, complexity versus spectral ratio, and Ms-mb differences between earthquakes and explosions. The Sep. 9, 2016, event is located in the region of the North Korea nuclear test site at essentially zero depth, which indicates that it is likely a nuclear explosion. Comparison between the P-wave spectra of the nuclear test and the Sep. 8, 2000, North Korea earthquake (mb 4.9) shows that the spectra of the two events are nearly the same. The results of applying the theoretical Brune model to the P-wave spectra of both the explosion and the earthquake show that the explosion exhibits a larger corner frequency than the earthquake, reflecting the different nature of the two sources. The complexity and spectral ratio were also calculated from the waveform data recorded at a number of stations in order to investigate the relation between them; the observed classification rate of this method is about 81%. Finally, the mb:Ms method is also investigated. We calculate mb and Ms for the Sep. 9, 2016, explosion and compare the result with the mb:Ms charts obtained in previous studies. This method works well for the explosion.
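Two of the waveform discriminants mentioned above, complexity and spectral ratio, can be sketched as simple window and band ratios. The window lengths, frequency bands, and implied decision rule below follow one common formulation and are assumptions of this illustration, not necessarily the definitions used in the study.

```python
# Hedged sketch of two of the waveform discriminants mentioned above.  The window
# lengths and frequency bands here are illustrative assumptions ("one common
# formulation"), not necessarily those used in the study.
import numpy as np

def complexity(trace, dt, t_p, early=5.0, late=25.0):
    """Ratio of integrated squared amplitude in (t_p+early, t_p+late) to (t_p, t_p+early)."""
    i0 = int(t_p / dt)
    i1 = int((t_p + early) / dt)
    i2 = int((t_p + late) / dt)
    return np.sum(trace[i1:i2]**2) / np.sum(trace[i0:i1]**2)

def spectral_ratio(trace, dt, low=(1.0, 2.0), high=(6.0, 8.0)):
    """Ratio of mean spectral amplitude in a low-frequency band to a high-frequency band."""
    spec = np.abs(np.fft.rfft(trace))
    freq = np.fft.rfftfreq(trace.size, dt)
    band = lambda b: spec[(freq >= b[0]) & (freq <= b[1])].mean()
    return band(low) / band(high)

# Explosions tend to plot at low complexity and low spectral ratio (simple,
# high-frequency-rich P onsets); earthquakes tend toward higher values of both.
```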
NASA Astrophysics Data System (ADS)
Mendoza, Carlos
1993-05-01
The distributions and depths of coseismic slip are derived for the October 25, 1981 Playa Azul and September 21, 1985 Zihuatanejo earthquakes in western Mexico by inverting the recorded teleseismic body waves. Rupture during the Playa Azul earthquake appears to have occurred in two separate zones both updip and downdip of the point of initial nucleation, with most of the slip concentrated in a circular region of 15-km radius downdip from the hypocenter. Coseismic slip occurred entirely within the area of reduced slip between the two primary shallow sources of the Michoacan earthquake that occurred on September 19, 1985, almost 4 years later. The slip of the Zihuatanejo earthquake was concentrated in an area adjacent to one of the main sources of the Michoacan earthquake and appears to be the southeastern continuation of rupture along the Cocos-North America plate boundary. The zones of maximum slip for the Playa Azul, Zihuatanejo, and Michoacan earthquakes may be considered asperity regions that control the occurrence of large earthquakes along the Michoacan segment of the plate boundary.
An integrated investigation of the induced seismicity near Crooked Lake, Alberta, Canada in 2016
NASA Astrophysics Data System (ADS)
Wang, R.; Gu, Y. J.; Shen, J.; Schultz, R.
2016-12-01
In the past three years, the Crooked Lake (or Fox Creek) region has become one of the most seismically active areas in the Western Canada Sedimentary Basin (WCSB), mostly attributable to hydraulic-fracturing operations for shale gas. Among the human-related earthquakes, the January 12, 2016 event (M = 4.1) not only triggered the "red light" provincial protocol, leading to the temporary suspension of a nearby injection well, but also set a new magnitude record for earthquakes in Alberta in the last decade. In this study, we determine the source parameters (e.g., magnitude, hypocenter location) of this earthquake and its aftershocks using full moment tensor inversions. Our findings are consistent with an anthropogenic origin for this earthquake, and the source solution of the main shock shows a strike-slip mechanism with limited non-double-couple components (~22%). The candidate fault orientations, which are predominantly N-S and E-W trending, are consistent with those of earlier events in this region but different from induced events in other parts of the WCSB. The inferred compressional axis is supported by crustal stress orientations extracted from borehole breakouts, and a right-lateral fault is preferred by both seismic and aeromagnetic data. A further analysis of the waveforms from the near-source stations (<10 km) detected nearly 100 foreshocks and aftershocks within a week of this earthquake. Systematic differences in the waveforms of earthquake multiplets before and after the master event suggest moderate changes of the seismic velocity structure at the injection depth around the source area, possibly a reflection of fluid migration and/or changes in the stress field. In short, our integrated study of the January 2016 earthquake cluster offers critical insights into the nature of induced earthquakes in the Crooked Lake region and other parts of the WCSB.
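The non-double-couple percentage quoted above depends on how the moment tensor is decomposed. The sketch below uses one common convention (isotropic part from the trace, CLVD fraction from the ratio of the smallest to largest absolute deviatoric eigenvalue); conventions differ between studies, so this is an illustration rather than the authors' procedure.

```python
# Hedged sketch of one common moment-tensor decomposition convention used to
# quote a double-couple percentage like the ~22% non-double-couple component
# above.  Conventions vary between studies; this is an illustration only.
import numpy as np

def iso_dc_clvd_percentages(m):
    """m: symmetric 3x3 moment tensor.  Returns (%ISO, %DC, %CLVD) of the total."""
    iso = np.trace(m) / 3.0
    dev = m - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)
    eig = eig[np.argsort(np.abs(eig))]          # sort deviatoric eigenvalues by |value|
    eps = abs(eig[0]) / abs(eig[2])             # 0 = pure DC, 0.5 = pure CLVD
    m_dev, m_iso = abs(eig[2]), abs(iso)
    p_iso = 100.0 * m_iso / (m_iso + m_dev)
    p_dc = (100.0 - p_iso) * (1.0 - 2.0 * eps)
    p_clvd = (100.0 - p_iso) * 2.0 * eps
    return p_iso, p_dc, p_clvd

# Hypothetical strike-slip-like tensor with a modest CLVD component
m = np.array([[ 0.2,  1.0, 0.0],
              [ 1.0, -0.3, 0.0],
              [ 0.0,  0.0, 0.1]])
print([round(x, 1) for x in iso_dc_clvd_percentages(m)])
```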
NASA Astrophysics Data System (ADS)
Opršal, Ivo; Fäh, Donat; Mai, P. Martin; Giardini, Domenico
2005-04-01
The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines the finite extent pseudodynamic and kinematic source models with complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD is a linear explicit displacement formulation using an irregular rectangular grid including topography. The finite extent rupture model is adjacent to the free surface because the fault has been recognized through trenching on the Reinach fault. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquake) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events that occur approximately to the south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we considered the same sources placed east and west of the city. The local structural model is determined from the area's recently established P and S wave velocity structure and includes topography. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, which is in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
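As a one-dimensional analogue of the explicit displacement-formulation finite-difference scheme referred to above, the sketch below advances the scalar wave equation with a second-order time and space stencil. Grid dimensions, velocity, and source are hypothetical, and the authors' 3-D irregular-grid implementation with topography is far more elaborate.

```python
# Minimal 1-D analogue (not the authors' 3-D code) of an explicit
# displacement-formulation finite-difference scheme, to illustrate the class of
# method referred to above.  Grid size, velocity, and source are hypothetical.
import numpy as np

nx, dx, dt, nt = 400, 50.0, 0.004, 1500          # grid points, spacing (m), step (s), steps
c = np.full(nx, 3200.0)                          # shear-wave velocity model (m/s)
assert (c.max() * dt / dx) < 1.0                 # CFL stability condition

u_prev = np.zeros(nx)
u = np.zeros(nx)
src_ix = nx // 2
for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # second spatial difference
    u_next = 2.0 * u - u_prev + (c * dt / dx)**2 * lap
    t = it * dt
    u_next[src_ix] += np.exp(-((t - 0.5) / 0.1)**2)   # simple Gaussian source pulse
    u_next[0] = u_next[-1] = 0.0                 # rigid boundaries (kept simple here)
    u_prev, u = u, u_next
# 'u' now holds the displacement field at the final time step
```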
NASA Astrophysics Data System (ADS)
Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.
2017-12-01
A seismic gap on the plate interface along the Sunda subduction zone has been proposed among the 2000, 2004, 2005 and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, about 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, twelve continuous GPS stations of the SUGAR network are used in this study to estimate the source model of this event. The daily GPS coordinates one month before and after the earthquake were calculated with the GAMIT software, and the coseismic displacements were evaluated from analysis of the coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacement is motion toward the epicenter and the trench, while the coseismic vertical displacements indicate uplift. The largest coseismic displacements, derived from the MSAI station, are 35.0 mm for the horizontal component, directed S32.1°W, and 21.7 mm for the vertical component. The second largest, derived from the LNNG station, are 26.6 mm for the horizontal component, directed N68.6°W, and 3.4 mm for the vertical component. Next, we will use a uniform stress drop inversion to invert the coseismic displacement field and estimate the source model; the relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.
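The coseismic offsets described above are essentially differences between mean positions before and after the event. The sketch below illustrates that step estimate on a synthetic daily coordinate series; the station behaviour, noise level, and offset value are hypothetical.

```python
# Sketch of estimating a coseismic offset from daily GPS coordinate time series,
# as described above (positions one month before and after the event).  The
# component, dates, and noise level below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
days_before = rng.normal(0.0, 2.0, 30)            # daily east component, mm, pre-event
days_after = rng.normal(-18.0, 2.0, 30)           # post-event series, shifted by the coseismic step

def coseismic_offset(pre, post):
    """Offset and its standard error from the difference of the two means."""
    step = post.mean() - pre.mean()
    se = np.sqrt(pre.var(ddof=1) / pre.size + post.var(ddof=1) / post.size)
    return step, se

step, se = coseismic_offset(days_before, days_after)
print(f"coseismic offset: {step:.1f} +/- {se:.1f} mm")
# Offsets estimated this way for the east, north, and up components combine into the
# horizontal displacement magnitude and azimuth quoted above (e.g., 35.0 mm toward S32.1°W at MSAI).
```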
Shallow seismicity in volcanic system: what role does the edifice play?
NASA Astrophysics Data System (ADS)
Bean, Chris; Lokmer, Ivan
2017-04-01
Seismicity in the upper two kilometres of volcanic systems is complex and very diverse in nature. The origins lie in the multi-physics nature of the source processes and in the often extreme heterogeneity of the near-surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. A further complicating factor is that we are often in the seismic near-field, so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional explanation for the diverse nature of shallow seismic signals calls on the direct action of fluids in the system; fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique and that models based on a diverse range of quasi-brittle failure of low-stiffness near-surface structures are equally valid from a data-fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.
NASA Astrophysics Data System (ADS)
Allstadt, Kate
The following work is focused on the use of both traditional and novel seismological tools, combined with concepts from other disciplines, to investigate shallow seismic sources and hazards. The study area is the dynamic landscape of the Pacific Northwest and its wide-ranging earthquake, landslide, glacier, and volcano-related hazards. The first chapter focuses on landsliding triggered by earthquakes, with a shallow crustal earthquake in Seattle as a case study. The study demonstrates that utilizing broadband synthetic seismograms and rigorously incorporating 3D basin amplification, 1D site effects, and fault directivity allows for a more complete assessment of regional seismically induced landslide hazard. The study shows that the hazard is severe for Seattle, and provides a framework for future probabilistic maps and near real-time hazard assessment. The second chapter focuses on landslides that generate seismic waves and how these signals can be harnessed to better understand landslide dynamics. This is demonstrated using two contrasting Pacific Northwest landslides. The 2010 Mount Meager, BC, landslide generated strong long-period waves. New full-waveform inversion methods reveal the time history of the forces the landslide exerted on the earth, which is used to quantify the event dynamics. Despite having a similar volume (~10⁷ m³), the 2009 Nile Valley, WA, landslide did not generate observable long-period motions because of its smaller accelerations, but pulses of higher-frequency waves were valuable in piecing together the complex sequence of events. The final chapter details the difficulties of monitoring glacier-clad volcanoes. The focus is on small, repeating, low-frequency earthquakes at Mount Rainier that resemble volcanic earthquakes. However, based on this investigation, they are actually glacial in origin: most likely stick-slip sliding of glaciers triggered by snow loading. Identification of the source offers a view of basal glacier processes, discriminates against alarming volcanic noises, and has implications for repeating earthquakes in tectonic environments. This body of work demonstrates that by combining methods and concepts from seismology and other disciplines in new ways, we can obtain a better understanding and a fresh perspective of the physics behind the shallow seismic sources and hazards that threaten the Pacific Northwest.
Real-time earthquake monitoring using a search engine method
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-01-01
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, developed by applying a fast computer search method to a large seismogram database to find the waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
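The search-engine analogy above can be sketched as a nearest-neighbour lookup over precomputed waveform features, each tagged with the source parameters of the synthetic seismogram it came from. The feature construction, tree structure, and data below are placeholders, not the authors' implementation.

```python
# Sketch of the "search engine" idea described above: precompute compact feature
# vectors for a large database of synthetic seismograms (each tagged with source
# parameters) and answer queries with a fast nearest-neighbour lookup instead of
# an exact scan.  The features, tree choice, and data are placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
n_db, n_feat = 50_000, 32
database = rng.standard_normal((n_db, n_feat))               # stand-in waveform features
database /= np.linalg.norm(database, axis=1, keepdims=True)  # normalize amplitudes away
source_params = rng.uniform(size=(n_db, 4))                  # e.g. strike, dip, rake, depth (placeholders)

tree = cKDTree(database)                                      # built once, offline
query = database[1234] + 0.01 * rng.standard_normal(n_feat)  # "recorded" waveform features
dist, idx = tree.query(query / np.linalg.norm(query), k=5)
print("best-matching database entries:", idx)
print("inferred source parameters (best match):", np.round(source_params[idx[0]], 2))
```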
Seismic hazard analysis for Jayapura city, Papua
NASA Astrophysics Data System (ADS)
Robiana, R.; Cipta, A.
2015-04-01
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model for the New Guinea Trench subduction zone (North Papuan Thrust); fault models for the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors estimated using geomorphological approaches are corrected with measurement data related to rock type and soft-soil depth. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification factors between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years, for spectral periods of 0.0, 0.2, and 1.0 seconds.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
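As an illustration of the class of regressions evaluated, the sketch below applies a magnitude-versus-surface-rupture-length relation of the form M = a + b·log10(SRL). The coefficients shown are the widely cited Wells and Coppersmith (1994) all-fault-type values, quoted from memory as an example; the WGUEP weighed many such relations, and differences between them at a given SRL produce the 0.3-0.4 unit spread noted above.

```python
# Illustration of the class of regressions the WGUEP evaluated: magnitude as a
# linear function of the log of surface-rupture length.  Coefficients are the
# commonly cited Wells and Coppersmith (1994) all-fault-type values, quoted here
# from memory as an example only.
import math

def magnitude_from_srl(srl_km, a=5.08, b=1.16):
    """M = a + b*log10(SRL), with SRL in km."""
    return a + b * math.log10(srl_km)

for srl in (20.0, 40.0, 60.0):   # illustrative segment lengths
    print(f"SRL = {srl:.0f} km  ->  M ~ {magnitude_from_srl(srl):.2f}")
# Differences of a few tenths of a magnitude unit between competing regressions at a
# given SRL are what produce the epistemic spread described above.
```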
NASA Astrophysics Data System (ADS)
Somei, K.; Asano, K.; Iwata, T.; Miyakoshi, K.
2012-12-01
After the 1995 Kobe earthquake, many M7-class inland earthquakes occurred in Japan. Some of those events (e.g., the 2004 Chuetsu earthquake) occurred in a tectonic zone that is characterized as a high-strain-rate zone by GPS observation (Sagiya et al., 2000) or by a dense distribution of active faults. That belt-like zone along the Japan Sea coast of the Tohoku and Chubu districts and the northern Kinki district is called the Niigata-Kobe tectonic zone (NKTZ; Sagiya et al., 2000). We investigate the seismic scaling relationship for recent inland crustal earthquake sequences in Japan and compare source characteristics between events occurring inside and outside the NKTZ. We used the S-wave coda for estimating source spectra. Source spectral ratios are obtained from S-wave coda spectral ratios between the records of large and small events occurring close to each other, using the nationwide strong-motion networks (K-NET and KiK-net) and the broadband seismic network (F-net), to remove propagation-path and site effects. We carefully examined the commonality of the decay of coda envelopes between event-pair records and modeled the observed spectral ratios with a source spectral ratio function, assuming an omega-square source model for the large and small events. We estimated the corner frequencies and seismic moment (ratio) from the modeled spectral ratio functions. We determined Brune's stress drops for 356 events (Mw 3.1-6.9) in ten earthquake sequences occurring in the NKTZ and six sequences occurring outside it. Most of the source spectra obey omega-square source spectra. There is no obvious systematic difference between the stress drops of events inside the NKTZ and those outside it. We conclude that there is no systematic difference in seismic source scaling between events occurring inside and outside the NKTZ, and that an average source scaling relationship can be applied to inland crustal earthquakes. Acknowledgements: Waveform data were provided by K-NET, KiK-net and F-net, operated by the National Research Institute for Earth Science and Disaster Prevention, Japan. This study is supported by the Multidisciplinary research project for the Niigata-Kobe tectonic zone promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
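The Brune stress drops mentioned above follow from each event's seismic moment and corner frequency. The sketch below shows that calculation with an assumed crustal shear-wave velocity and hypothetical event values; it is a standard textbook form, not the authors' exact parameterization.

```python
# Sketch of the Brune stress-drop calculation applied to corner frequencies and
# moments of the kind estimated from the modeled spectral ratios above.  The
# shear-wave velocity is an assumed crustal value; event numbers are hypothetical.
import math

BETA = 3500.0                                      # assumed source-region S-wave velocity (m/s)

def brune_stress_drop(m0, fc, beta=BETA):
    """Brune-model stress drop in Pa from moment (N*m) and corner frequency (Hz)."""
    r = 2.34 * beta / (2.0 * math.pi * fc)         # source radius (m)
    return 7.0 * m0 / (16.0 * r**3)

def moment_magnitude(m0):
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

m0, fc = 3.5e16, 1.2                               # hypothetical event near Mw 5
print(f"Mw ~ {moment_magnitude(m0):.1f}, stress drop ~ {brune_stress_drop(m0, fc)/1e6:.1f} MPa")
```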
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
... Single-Source Grant to Support Services for Haitian Medical Evacuees to the Florida Department of...: Notice to award a single-source grant to support medical evacuees from the Haiti earthquake of 2010. CFDA... supportive social services to Haitian medical evacuees affected by the earthquake in 2010. The Haitian...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, W R; Mayeda, K; Malagnini, L
2007-02-01
We develop a new methodology to determine apparent attenuation for the regional seismic phases Pn, Pg, Sn, and Lg using coda-derived source spectra. The local-to-regional coda methodology (Mayeda, 1993; Mayeda and Walter, 1996; Mayeda et al., 2003) is a very stable way to obtain source spectra from sparse networks using as few as one station, even if direct waves are clipped. We develop a two-step process to isolate the frequency-dependent Q. First, we correct the observed direct wave amplitudes for an assumed geometrical spreading. Next, an apparent Q, combining path and site attenuation, is determined from the difference between the spreading-corrected amplitude and the independently determined source spectra derived from the coda methodology. We apply the technique to 50 earthquakes with magnitudes greater than 4.0 in central Italy as recorded by MEDNET broadband stations around the Mediterranean at local-to-regional distances. This is an ideal test region due to its high attenuation, complex propagation, and availability of many moderate sized earthquakes. We find that a power law attenuation of the form Q(f) = Q0 f^Y fits all the phases quite well over the 0.5 to 8 Hz band. At most stations, the measured apparent Q values are quite repeatable from event to event. Finding the attenuation function in this manner guarantees a close match between inferred source spectra from direct waves and coda techniques. This is important if coda and direct wave amplitudes are to produce consistent seismic results.
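The two-step measurement described above can be sketched as follows: undo an assumed geometrical spreading, divide by the coda-derived source spectrum, interpret the remaining decay as attenuation, and fit a power law. The spreading exponent, velocity, distance, and synthetic values below are illustrative assumptions (the power-law exponent is simply called gamma in this sketch).

```python
# Sketch of the two-step apparent-Q measurement described above: correct direct-wave
# amplitudes for an assumed geometrical spreading, divide by the coda-derived source
# spectrum, and interpret the remaining decay as path/site attenuation.  The spreading
# exponent, velocity, distance, and data values are illustrative assumptions.
import numpy as np

V, R, SPREAD_EXP = 3.5, 200.0, 0.5        # km/s, epicentral distance (km), assumed spreading exponent

def apparent_q(freqs, direct_amp, coda_source_spec, r=R, v=V):
    corrected = direct_amp * r**SPREAD_EXP             # undo assumed geometrical spreading
    ratio = corrected / coda_source_spec               # what remains is attenuation (and site)
    return -np.pi * freqs * (r / v) / np.log(ratio)    # Q(f) from exp(-pi*f*r/(Q*v))

def fit_power_law(freqs, q):
    """Fit Q(f) = Q0 * f**gamma by linear regression in log-log space."""
    gamma, logq0 = np.polyfit(np.log10(freqs), np.log10(q), 1)
    return 10**logq0, gamma

freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
q_true = 150.0 * freqs**0.6                             # hypothetical attenuation
source = np.ones_like(freqs)                            # pretend coda-derived source spectrum
direct = source * np.exp(-np.pi * freqs * (R / V) / q_true) / R**SPREAD_EXP
q0, gamma = fit_power_law(freqs, apparent_q(freqs, direct, source))
print(f"Q(f) ~ {q0:.0f} * f^{gamma:.2f}")
```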
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi
2014-01-01
We have developed a real-time moment tensor monitoring system (RMT) that takes advantage of a grid-based moment tensor inversion technique and real-time broadband seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is practical and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters can be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
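The grid-search CMT idea described above reduces, at each trial source position, to a linear least-squares fit of the six moment-tensor components to the data, keeping the node with the smallest misfit. The sketch below shows that structure with random placeholder Green's functions and data; it is not the RMT implementation.

```python
# Sketch of the grid-search centroid moment tensor idea described above: for each
# trial source position, a linear least-squares fit of the six moment-tensor
# components to the data, keeping the grid node with the smallest misfit.  Green's
# functions and data here are random placeholders, purely to show the structure.
import numpy as np

def grid_search_cmt(data, greens_by_node):
    """data: (n,) observations; greens_by_node: dict node -> (n, 6) Green's functions."""
    best = None
    for node, G in greens_by_node.items():
        m, *_ = np.linalg.lstsq(G, data, rcond=None)          # six moment-tensor components
        misfit = np.linalg.norm(data - G @ m) / np.linalg.norm(data)
        if best is None or misfit < best[2]:
            best = (node, m, misfit)
    return best   # (best node, moment tensor, normalized misfit)

rng = np.random.default_rng(3)
nodes = {(lon, lat, dep): rng.normal(size=(120, 6))
         for lon in (121.0, 121.1) for lat in (23.0, 23.1) for dep in (6.0, 16.0)}
true_node = (121.1, 23.0, 16.0)
true_m = np.array([1.0, -0.8, -0.2, 0.5, 0.1, -0.3])
data = nodes[true_node] @ true_m + rng.normal(0, 0.05, 120)
node, m, misfit = grid_search_cmt(data, nodes)
print(node, np.round(m, 2), round(misfit, 3))
```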
Toward tsunami early warning system in Indonesia by using rapid rupture durations estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madlazim
2012-06-20
Indonesia has had the Indonesian Tsunami Early Warning System (Ina-TEWS) since 2008. The Ina-TEWS uses automatic processing of the hypocenter and of Mwp, Mw (mB) and Mj. If an earthquake occurs in the ocean at a depth < 70 km and with magnitude > 7, then Ina-TEWS announces an early warning that the earthquake may generate a tsunami. However, these announcements are still not sufficiently accurate. In the absence of real-time tide-gauge or station data after an earthquake, this paper attempts to constrain the warning from the source side. The purpose of this research is to estimate the rupture durations of large Indonesian earthquakes that occurred in the Indian Ocean, Java, Timor Sea, Banda Sea, Arafura Sea and Pacific Ocean. We analyzed at least 330 vertical seismograms recorded by the IRIS-DMC network using a direct procedure for rapid assessment of earthquake tsunami potential based on simple measures on P-wave vertical velocity seismograms, in particular the high-frequency apparent rupture duration, Tdur. Tdur can be related to the critical parameters rupture length (L), depth (z), and shear modulus (μ), and it may also be related to fault width (W), slip (D), z or μ. Our analysis shows that the rupture duration has a stronger influence on tsunami generation than Mw and depth. The rupture duration gives more information on tsunami impact, Mo/μ, depth and size than Mw and other currently used discriminants. The longer the rupture duration, the shallower the earthquake source. For rupture durations greater than 50 s, the depth is less than 50 km, Mw is greater than 7, and the rupture length is longer (because Tdur is proportional to L) with larger Mo/μ (because Mo/μ is also proportional to L). Thus, the rupture duration carries information on all four parameters. We also suggest that tsunami potential is not directly related to the faulting type of the source, and that the events with rupture durations greater than 50 s generated tsunamis. With real-time seismogram data available, the rapid rupture-duration discriminant can be computed within 4-5 min of an earthquake's occurrence and thus can aid effective, accurate and reliable tsunami early warning for the Indonesia region.
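A rupture-duration measurement of the kind described above can be sketched as: high-pass filter the vertical velocity record, form a smoothed envelope, and measure how long the envelope stays above a fraction of its peak. The filter band, smoothing window, and 25% threshold below are illustrative assumptions, not the published procedure.

```python
# Sketch of a rupture-duration measurement of the kind described above: high-pass
# filter the vertical P-wave velocity record, form a smoothed envelope, and measure
# how long it stays above a fraction of its peak.  Filter band, smoothing, and the
# 25% threshold are illustrative assumptions, not the published procedure.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def rupture_duration(velocity, dt, fmin=1.0, threshold=0.25, smooth_s=2.0):
    b, a = butter(4, fmin * 2.0 * dt, btype="highpass")       # keep high-frequency P energy
    hp = filtfilt(b, a, velocity)
    env = np.abs(hilbert(hp))
    win = max(1, int(smooth_s / dt))
    env = np.convolve(env, np.ones(win) / win, mode="same")   # smooth the envelope
    above = np.flatnonzero(env > threshold * env.max())
    return (above[-1] - above[0]) * dt if above.size else 0.0

# Example with a synthetic 80-s "rupture" embedded in noise (hypothetical data)
dt = 0.05
t = np.arange(0, 300, dt)
trace = np.random.default_rng(2).normal(0, 0.05, t.size)
trace[(t > 60) & (t < 140)] += np.sin(2 * np.pi * 2.0 * t[(t > 60) & (t < 140)])
print(f"apparent rupture duration ~ {rupture_duration(trace, dt):.0f} s (Tdur > 50 s flags tsunami potential)")
```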
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line-source information (FinDer), as well as geodetically constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information, in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening firehouse doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
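A bare-bones sketch of how a Bayesian mediator might weight competing source reports against observed shaking (this is an illustrative stand-in, not the CDM design; the report structure, the GMPE scatter value and the predict_logpga callback are assumptions):

    import numpy as np

    def combine_reports(reports, obs_logpga, predict_logpga, sigma=0.35):
        """Posterior weights for competing earthquake reports given observed shaking.
        reports        : list of candidate source descriptions (e.g. dicts with mag, lat, lon)
        obs_logpga     : observed log10 PGA values at a set of stations
        predict_logpga : function mapping a report to predicted log10 PGA at the same stations
        sigma          : assumed log10 scatter of the ground-motion prediction
        """
        logL = []
        for r in reports:
            misfit = obs_logpga - predict_logpga(r)
            logL.append(-0.5 * np.sum((misfit / sigma) ** 2))
        w = np.exp(np.array(logL) - max(logL))         # stabilized likelihoods
        return w / w.sum()                             # normalized plausibilities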
Technical guidelines for the implementation of the Advanced National Seismic System
Committee, ANSS Technical Integration
2002-01-01
The Advanced National Seismic System (ANSS) is a major national initiative led by the US Geological Survey that serves the needs of the earthquake monitoring, engineering, and research communities as well as national, state, and local governments, emergency response organizations, and the general public. Legislation authorizing the ANSS was passed in 2000, and low levels of funding for planning and initial purchases of new seismic instrumentation have been appropriated beginning in FY2000. When fully operational, the ANSS will be an advanced monitoring system (modern digital seismographs and accelerographs, communications networks, data collection and processing centers, and well-trained personnel) distributed across the United States that operates with high performance standards, gathers critical technical data, and effectively provides timely and reliable earthquake products, information, and services to meet the Nation’s needs. The ANSS will automatically broadcast timely and authoritative products describing the occurrence of earthquakes, earthquake source properties, the distribution of ground shaking, and, where feasible, broadcast early warnings and alerts for the onset of strong ground shaking. Most importantly, the ANSS will provide earthquake data, derived products, and information to the public, emergency responders, officials, engineers, educators, researchers, and other ANSS partners rapidly and in forms that are useful for their needs.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2008-07-08
The objectives of this study are to improve low-magnitude (concentrating on M2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge at small magnitudes (i.e., mb < ~4.0) is poorly resolved, and source scaling remains a subject of ongoing debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies conducted half-way around the world in inter-plate regions. We investigate earthquake sources and scaling in different tectonic settings, comparing direct and coda wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded direct waves is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But finding well-recorded earthquakes with 'perfect' EGF events for direct wave analysis is difficult, which limits the number of earthquakes that can be studied. We begin with closely located, well-correlated earthquakes. We use a multi-taper method to obtain time-domain source time functions by frequency division. We only accept an earthquake and EGF pair if they produce a clear, time-domain source pulse. We fit the spectral ratios and perform a grid search about the preferred parameters to ensure the fits are well constrained. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We analyze three clusters of aftershocks from the well-recorded sequence following the M5 Au Sable Forks, NY, earthquake to obtain some of the first accurate source parameters for small earthquakes in eastern North America. Each cluster contains an M~2 event, two contain M~3 events, and all include smaller aftershocks. We find that the corner frequencies and stress drops are high (averaging 100 MPa), confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We also demonstrate that a scaling breakdown suggested by earlier work is simply an artifact of their more band-limited data. We calculate radiated energy and find that the ratio of radiated energy to seismic moment is also high, around 10^-4. We estimate source parameters for the M5 mainshock using similar methods, but our results are less certain because we do not have an EGF event that meets our preferred criteria.
The stress drop and energy/moment ratio for the mainshock are slightly higher than for the aftershocks. Our improved and simplified coda wave analysis method uses spectral ratios (as for the direct waves) but relies on the averaging nature of the coda waves to use EGF events that do not meet the strict criteria of similarity required for the direct wave analysis. We have applied the coda wave spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks, and also to several sequences in Italy with M~6 mainshocks. The Italian earthquakes have higher stress drops than the Hector Mine sequence, but lower than Au Sable Forks. These results show a departure from self-similarity, consistent with previous studies using similar regional datasets. The larger earthquakes have higher stress drops and energy/moment ratios. We perform a preliminary comparison of the two methods using the M5 Au Sable Forks earthquake. Both methods give very consistent results, and we are applying the comparison to further events.
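A compact sketch of the spectral-ratio fitting step (a simple grid search over two omega-squared corner frequencies; the frequency sampling, grid and low-frequency-plateau estimate are illustrative assumptions, not the study's procedure):

    import numpy as np

    def brune_ratio(f, m0_ratio, fc1, fc2, n=2.0):
        """Spectral ratio of two omega-squared (Brune) sources sharing a common path:
        target event (corner fc1) divided by EGF event (corner fc2)."""
        return m0_ratio * (1.0 + (f / fc2) ** n) / (1.0 + (f / fc1) ** n)

    def fit_corner_frequencies(f, ratio, fc_grid):
        """Grid search for the target and EGF corner frequencies that best fit an
        observed spectral ratio (f sorted ascending); returns (misfit, fc1, fc2)."""
        m0r = ratio[:5].mean()                   # low-frequency plateau ~ moment ratio
        best = (np.inf, None, None)
        for fc1 in fc_grid:
            for fc2 in fc_grid:
                if fc2 <= fc1:                   # EGF corner should lie above the target's
                    continue
                mis = np.sum((np.log(ratio) - np.log(brune_ratio(f, m0r, fc1, fc2))) ** 2)
                if mis < best[0]:
                    best = (mis, fc1, fc2)
        return best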
NASA Astrophysics Data System (ADS)
Park, J. H.; Park, Y. K.; Kim, T. S.; Kim, G.; Cho, C.; Kim, I.
2017-12-01
North Korea (NK) conducted its 6th underground nuclear test (UNT), one order of magnitude larger than the previous tests, on 3 September 2017. Using correlated waveform comparison, the epicenter of the 6th NK UNT was estimated at 41.3020°N, 129.0795°E, about 200 m north of the 5th NK UNT site. The body-wave magnitude was calculated as mb 5.7 through our routine processing, which measures the maximum P-wave amplitude at frequencies above 1 Hz using stations around the Korean Peninsula; however, this could be an underestimate if the source energy of the UNT was radiated dominantly below 1 Hz. Considering the source spectra of the 6th NK UNT, we applied a 2nd-order Butterworth bandpass filter between 0.1 and 1 Hz to the P wave and measured the 6th/5th UNT amplitude ratio. Instead of the ratio of 6-7 obtained from the raw P waves, the filtered amplitude ratio was 10-12 at several stations. After cross-checking the bandpass-filtered amplitude ratios against the previous NK UNTs, we finalized the magnitude of the 6th NK UNT as mb 6.1. A collapse earthquake occurred about 8 minutes 32 seconds after the 6th NK UNT, with an epicenter estimated to lie within 1 km of the UNT site. The similarity of its waveforms to those of two mine-collapse events in South Korea, together with moment tensor inversion, indicates that the source mechanism was very similar to a mine collapse. Three further earthquakes were detected and their locations and magnitudes analyzed; we interpret these earthquakes as having been induced by tectonic stress accumulated around the NK UNT. The waveforms of the collapse event are very different from those of the induced earthquakes.
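A minimal sketch of the amplitude-ratio measurement (the filter corners follow the abstract; the trace arrays and sampling interval are hypothetical). The implied magnitude difference is the base-10 logarithm of the amplitude ratio, so a ratio of 10-12 corresponds to roughly 1.0-1.08 magnitude units:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def filtered_p_amplitude(trace, dt, fmin=0.1, fmax=1.0, order=2):
        """Peak absolute P-wave amplitude after a 2nd-order Butterworth bandpass."""
        b, a = butter(order, [fmin, fmax], btype="band", fs=1.0 / dt)
        return np.max(np.abs(filtfilt(b, a, trace)))

    # mb_6th - mb_5th ~ log10(A_6th / A_5th) on a common station and path:
    # log10(10) = 1.00 and log10(12) = 1.08 magnitude units.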
The evolving interaction of low-frequency earthquakes during transient slip.
Frank, William B; Shapiro, Nikolaï M; Husker, Allen L; Kostoglodov, Vladimir; Gusev, Alexander A; Campillo, Michel
2016-04-01
Observed along the roots of seismogenic faults where the locked interface transitions to a stably sliding one, low-frequency earthquakes (LFEs) primarily occur as event bursts during slow slip. Using an event catalog from Guerrero, Mexico, we employ a statistical analysis to consider the sequence of LFEs at a single asperity as a point process, and deduce the level of time clustering from the shape of its autocorrelation function. We show that while the plate interface remains locked, LFEs behave as a simple Poisson process, whereas they become strongly clustered in time during even the smallest slow slip, consistent with interaction between different LFE sources. Our results demonstrate that bursts of LFEs can result from the collective behavior of asperities whose interaction depends on the state of the fault interface.
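A bare-bones version of such a clustering test (bin size, lag range and normalization are illustrative assumptions): bin the LFE origin times, compute the autocorrelation of the binned rate, and compare its decay with the flat function expected for a Poisson process.

    import numpy as np

    def rate_autocorrelation(event_times, bin_s=60.0, max_lag_bins=200):
        """Autocorrelation of the binned LFE rate; a function that is flat beyond zero
        lag is expected for a Poisson process, a slowly decaying one for clustering."""
        t = np.asarray(event_times, dtype=float)
        edges = np.arange(t.min(), t.max() + bin_s, bin_s)
        counts, _ = np.histogram(t, edges)
        x = counts - counts.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        ac = ac / ac[0]                                  # normalize to unit zero-lag value
        return ac[:max_lag_bins]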
NASA Astrophysics Data System (ADS)
Mourhatch, Ramses
This thesis examines the collapse risk of tall steel braced-frame buildings using rupture-to-rafters simulations for a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an artificial-correction-free approach to broadband ground motion simulation. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from the kinematic source models with the spectral element method, producing broadband seismograms. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted for several variants of an 18-story steel braced-frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions and subjected to these ground motions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year probability of occurrence of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
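A generic sketch of the low-/high-frequency merge described above (the 0.5 Hz matching frequency corresponds to the 2 s crossover period; the filter order and implementation details are assumptions, not the thesis's exact scheme):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def hybrid_broadband(long_period, short_period, dt, f_match=0.5, order=4):
        """Merge long-period (deterministic) and short-period (EGF) synthetics by
        low-/high-pass filtering about a common matching frequency (0.5 Hz ~ 2 s)."""
        bl, al = butter(order, f_match, btype="low", fs=1.0 / dt)
        bh, ah = butter(order, f_match, btype="high", fs=1.0 / dt)
        return filtfilt(bl, al, long_period) + filtfilt(bh, ah, short_period)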
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of near-field seismological records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all residual effects of the source (e.g. stress drop) that are not accounted for by magnitude, the only source parameter of the model. Between-event residuals therefore provide a new and rather robust way to analyse the physical factors that control earthquake source properties and the associated variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, but also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
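A minimal sketch of the residual decomposition (a plain per-event mean rather than the shrinkage estimate a true mixed-effects regression would provide; the variable names are hypothetical):

    import numpy as np

    def decompose_residuals(event_ids, residuals):
        """Split GMPE residuals into a between-event term (one value per earthquake)
        and within-event residuals (record-to-record scatter about that term)."""
        event_ids = np.asarray(event_ids)
        residuals = np.asarray(residuals, dtype=float)
        between = {e: residuals[event_ids == e].mean() for e in np.unique(event_ids)}
        within = residuals - np.array([between[e] for e in event_ids])
        return between, within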
NASA Astrophysics Data System (ADS)
Asano, K.; Iwata, T.
2014-12-01
After the 2011 Tohoku earthquake in Japan (Mw 9.0), many papers on the source model of this mega subduction earthquake were published. From our study of the modeling of strong-motion waveforms in the period range 0.1-10 s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs were found to correspond to the asperities of M7-class events in the 1930s. However, many studies of kinematic rupture modeling using seismic, geodetic and tsunami data revealed the existence of a large slip area extending from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves differs spatially between the long- and short-period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake thus raised a new issue concerning the relationship between strong motion generation and the fault rupture process, and resolving it is important for advancing source modeling for future strong motion prediction. Our previous source model consists of four SMGAs, and observed ground motions in the period range 0.1-10 s are explained well by this model. We tried to extend the model to explain the observed ground motions over a wider period range, with a simple assumption following our previous study and the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model that has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw 9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986). Although the longest usable period is restricted by the signal-to-noise ratio of the EGF event (Mw ~6.0) records, this new source model succeeds in reproducing the observed waveforms and Fourier amplitude spectra in the period range 0.1-50 s. The location of the large slip area appears to overlap the source regions of the historical events of 1793 and 1897 off the Sanriku area. We think a source model for strong motion prediction of an Mw 9 event could be constructed by combining hierarchical multiple asperities or source patches related to historical events in this region.
Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment
NASA Astrophysics Data System (ADS)
Melgar Moctezuma, Diego
This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. The algorithm will be demonstrated and evaluated through outdoor shake-table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah and the 2011 Mw 9.0 Tohoku-oki events. This dissertation will also show that strong-motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and, most importantly, kinematic slip inversions. Throughout the dissertation special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally, we will show that the incorporation of offshore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.
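A heavily simplified, single-component sketch of how such a combination can work (a textbook Kalman filter, not the dissertation's algorithm; the noise parameters and array names are assumptions): the accelerometer drives the prediction of a displacement-velocity state, and each available GPS displacement sample updates it.

    import numpy as np

    def seismogeodetic_kf(acc, gps_disp, dt, q=1e-2, r=1e-4):
        """Combine an accelerometer trace (acc) with co-registered GPS displacements
        (gps_disp, NaN where no GPS sample exists) into a broadband displacement."""
        F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
        B = np.array([0.5 * dt**2, dt])                  # acceleration input
        H = np.array([[1.0, 0.0]])                       # GPS observes displacement
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x, P = np.zeros(2), np.eye(2)
        out = np.empty(len(acc))
        for k, a in enumerate(acc):
            x = F @ x + B * a                            # predict with strong-motion data
            P = F @ P @ F.T + Q
            if not np.isnan(gps_disp[k]):                # update when a GPS sample exists
                y = gps_disp[k] - H @ x
                S = H @ P @ H.T + r
                K = (P @ H.T) / S
                x = x + (K * y).ravel()
                P = (np.eye(2) - K @ H) @ P
            out[k] = x[0]
        return out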
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-ocean ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and on how efficiently energy propagates to the near-source surface. These later arrivals are commonly considered a nuisance, as they can interfere with the primary arrivals. In this study, however, we take advantage of the wavefield's high sensitivity to small changes in seafloor topography and the present-day availability of worldwide multi-beam bathymetry to relocate submarine earthquakes by modeling these water-column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body-wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
Macroscopic Source Properties from Dynamic Rupture Styles in Plastic Media
NASA Astrophysics Data System (ADS)
Gabriel, A.; Ampuero, J. P.; Dalguer, L. A.; Mai, P. M.
2011-12-01
High stress concentrations at earthquake rupture fronts may generate an inelastic off-fault response at the rupture tip, leading to increased energy absorption in the damage zone. Furthermore, the induced asymmetric plastic strain field in in-plane rupture modes may produce bimaterial interfaces that can increase radiation efficiency and reduce frictional dissipation. Off-fault inelasticity thus plays an important role in realistic predictions of near-fault ground motion. Guided by our previous studies of the 2D elastic case, we perform rupture dynamics simulations including rate-and-state friction and off-fault plasticity to investigate their effects on rupture properties. We quantitatively analyze macroscopic source properties for different rupture styles, ranging from cracks to pulses and from subshear to supershear ruptures, and their transitional mechanisms. The energy dissipation due to off-fault inelasticity modifies the conditions required to obtain each rupture style and alters macroscopic source properties. We examine apparent fracture energy, rupture and healing front speed, peak slip and peak slip velocity, dynamic stress drop, the size of the process and plastic zones, slip and plastic seismic moment, and their connection to ground motion. This presentation focuses on the effects of rupture style and off-fault plasticity on the resulting ground motion patterns, especially on characteristic slip-velocity function signatures and the resulting seismic moments. We aim at developing scaling rules for equivalent elastic models, as a function of background stress and frictional parameters, that may lead to improved "pseudo-dynamic" source parameterizations for ground-motion calculation. Moreover, our simulations provide quantitative relations between off-fault energy dissipation and macroscopic source properties. These relations might provide a self-consistent theoretical framework for the study of the earthquake energy balance based on observable earthquake source parameters.
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
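As an illustration of the flavor of model involved (a plain least-squares ARX fit rather than the optimal-adaptive ARMAX estimation the paper describes; the array names and model orders are hypothetical), the record at one site can be modeled as a filtered version of the record at another:

    import numpy as np

    def fit_arx(y, u, na=4, nb=4):
        """Least-squares ARX fit  y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j];
        y is the output record (e.g. soil site), u the input record (e.g. rock site)."""
        n0 = max(na, nb)
        rows, rhs = [], []
        for t in range(n0, len(y)):
            rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
            rhs.append(y[t])
        theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        return theta[:na], theta[na:]          # AR coefficients, input coefficients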
Dense Array Studies of Volcano-Tectonic and Long-Period Earthquakes Beneath Mount St. Helens
NASA Astrophysics Data System (ADS)
Glasgow, M. E.; Hansen, S. M.; Schmandt, B.; Thomas, A.
2017-12-01
An array of 904 single-component 10-Hz geophones deployed within 15 km of Mount St. Helens (MSH) in 2014 recorded continuously for two weeks. Automated reverse-time imaging (RTI) was used to generate a catalog of 212 earthquakes. Among these, two distinct types of upper crustal (<8 km) earthquakes were classified. Volcano-tectonic (VT) and long-period (LP) earthquakes were identified using analysis of array spectrograms, envelope functions, and velocity waveforms. To remove analyst subjectivity, quantitative classification criteria were developed based on the ratio of power in high and low frequency bands and on coda duration. Prior to the 2014 experiment, upper crustal LP earthquakes had only been reported at MSH during volcanic activity. Subarray beamforming was used to distinguish between LP earthquakes and surface-generated LP signals, such as rockfall. This method confirmed 16 LP signals with horizontal velocities exceeding upper crustal P-wave velocities, which requires a subsurface hypocenter. LP and VT locations overlap in a cluster slightly east of the summit crater from 0-5 km below sea level. LP displacement spectra are similar to simple theoretical predictions for shear failure, except that they have lower corner frequencies than VT earthquakes of similar magnitude. The results indicate a distinct non-resonant source for LP earthquakes, which are located in the same source volume as some VT earthquakes (within the hypocenter uncertainty of 1 km or less). To further investigate MSH microseismicity mechanisms, an array of 142 three-component (3-C) 5-Hz geophones will record continuously for one month at MSH in fall 2017, providing a unique dataset for a volcano earthquake source study. This array will help determine whether the LP occurrence in 2014 was transient or is still ongoing. Unlike the 2014 array, approximately 50 geophones will be deployed in the MSH summit crater directly over the majority of the seismicity. RTI will be used to detect and locate earthquakes by back-projecting 3-C data with a local 3-D P and S velocity model. Earthquakes will be classified using the previously stated techniques, and we will seek to use the dense array of 3-C waveforms to invert for focal mechanisms and, ideally, moment tensor sources down to magnitude 0.
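A toy version of such a frequency-ratio classifier (the split frequency, band edges and threshold here are illustrative, not the study's calibrated criteria, and the coda-duration criterion is omitted):

    import numpy as np
    from scipy.signal import welch

    def classify_event(tr, dt, f_split=5.0, ratio_thresh=1.0):
        """Classify a waveform as 'LP' or 'VT' from the ratio of spectral power
        above and below a split frequency."""
        f, pxx = welch(tr, fs=1.0 / dt, nperseg=min(1024, len(tr)))
        hi = pxx[f >= f_split].sum()
        lo = pxx[(f > 0.5) & (f < f_split)].sum()
        return "VT" if hi / lo > ratio_thresh else "LP"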
Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake
NASA Astrophysics Data System (ADS)
Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.
2011-12-01
It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 Tohoku-Oki, Japan, earthquake, to general audiences who may not be well versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using static "snapshot" images to represent earthquake data is becoming obsolete, and the favored way to explain complex wave propagation inside the solid Earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, the latter known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, firecrackers, etc. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong-motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.). Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not just for young scientists but for people of all ages.
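A minimal Python sketch of the audification idea (the study's package is MATLAB/QuickTime-based; the speed-up factor and output file name here are arbitrary): replaying a seismogram several hundred times faster shifts 0.01-1 Hz ground motion into the audible band.

    import numpy as np
    from scipy.io import wavfile

    def audify(trace, fs_in, speedup=400, out="quake.wav"):
        """Write a seismogram as a WAV file played back 'speedup' times faster."""
        x = np.asarray(trace, dtype=float)
        x = x / np.max(np.abs(x))                        # normalize to [-1, 1]
        wavfile.write(out, int(fs_in * speedup), (x * 32767).astype(np.int16))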
Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario
2018-02-01
Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
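The inverse power-law acceleration mentioned above underlies the classical failure forecast method; in its simplest deterministic special case (exponent p = 2, fitted by ordinary linear regression rather than the Bayesian MCMC gamma point-process approach used in the study), the inverse event rate extrapolates linearly to zero at the forecast failure time:

    import numpy as np

    def ffm_failure_time(t, rate):
        """Failure forecast method, p = 2 special case: fit a line to the inverse
        rate and return the time at which it extrapolates to zero."""
        inv = 1.0 / np.asarray(rate, dtype=float)
        slope, intercept = np.polyfit(t, inv, 1)
        return -intercept / slope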
NASA Astrophysics Data System (ADS)
Zheng, Y.
2015-12-01
On August 3, 2014, an Ms 6.5 earthquake struck Ludian County, Zhaotong City, in Yunnan Province, China. Although this earthquake was not very large, it caused abnormally severe damage; studying the causes of the serious damage from this moderately strong earthquake may therefore help us evaluate seismic hazards for similar earthquakes. Besides the factors that relate directly to the damage, such as site effects and the quality of buildings, the seismogenic structures and the characteristics of the mainshock and aftershocks may also be responsible for the seismic hazard. Since focal mechanism solutions (FMSs) and centroid depths provide key information on earthquake source properties and the tectonic stress field, and since focal depth is one of the most important parameters controlling earthquake damage, obtaining precise FMSs and focal depths for the Ludian earthquake sequence may help us determine the detailed geometric features of the rupture fault and the seismogenic environment. In this work we obtained the FMSs and centroid depths of the Ludian earthquake and its Ms > 3.0 aftershocks using the revised CAP method, and further verified some focal depths using the depth-phase method. Combining the FMSs of the mainshock and the strong aftershocks, their spatial distributions, and the seismogenic environment of the source region, we draw the following conclusions about the Ludian earthquake sequence and its seismogenic structure: (1) The Ludian earthquake was a left-lateral strike-slip earthquake with a magnitude of about Mw 6.1. The FMS of nodal plane I is 75°/56°/180° for the strike, dip and rake angles, and 165°/90°/34° for the other nodal plane. (2) The Ludian earthquake was very shallow, with an optimum centroid depth of ~3 km, which is consistent with the strong ground shaking and the surface rupture observed by field survey, and which aggravated the damage. (3) The Ludian earthquake most likely occurred on the NNW-trending BXF. Because two later aftershocks occurred close to the fault zone of the ZLF, and their FMSs are consistent with the characteristics of the ZLF, the shallower part of the ZLF may also have ruptured during the aftershock sequence of the Ludian earthquake. Since the ZLF is much longer than the BXF, the seismic risk on the ZLF may be high and deserves more attention.
W phase source inversion for moderate to large earthquakes (1990-2010)
Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.
2012-01-01
Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw ≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases, and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100-1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using the three components of W phase ground motion at teleseismic distances. To that end, the point-source parameters of all Mw ≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks and Global Seismographic Network broad-band stations and the STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment-rate-function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters for smaller earthquakes in the range 6.0 ≤ Mw < 6.5.
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis for the Himalayan-Tibetan region by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs. Historical earthquake records extending back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using a maximum-likelihood algorithm that allows for the incompleteness of the catalog. To compute the hazard, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2% and 10% probabilities of exceedance over 50 years, assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
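For context, the standard maximum-likelihood estimator for the Gutenberg-Richter b value (the Aki/Utsu form with a binning correction; one small piece of the workflow described above, shown with hypothetical inputs) is:

    import numpy as np

    def b_value_ml(mags, mc, dm=0.1):
        """Aki/Utsu maximum-likelihood b value for magnitudes at or above the
        completeness magnitude mc, with a correction for magnitude binning dm."""
        m = np.asarray(mags, dtype=float)
        m = m[m >= mc]
        return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))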
Earthquake-driven erosion of organic carbon at the eastern margin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Li, G.; West, A. J.; Hara, E. K.; Hammond, D. E.; Hilton, R. G.
2016-12-01
Large earthquakes can trigger massive landsliding that erodes particulate organic carbon (POC) from vegetation, soil and bedrock, potentially linking seismotectonics to the global carbon cycle. Recent work (Wang et al., 2016, Geology) has highlighted a dramatic increase in the riverine export of biospheric POC following the 2008 Mw 7.9 Wenchuan earthquake in the steep Longmen Shan mountain range at the eastern margin of the Tibetan Plateau. However, a complete, source-to-sink picture of POC erosion after the earthquake is still missing. Here we track POC transfer across the Longmen Shan range from the high mountains to the downstream Zipingpu reservoir, where riverine-exported POC has been trapped. Building on the work of Wang et al. (2016), who measured the compositions and fluxes of riverine POC, this study focuses on constraining the source and fate of the eroded POC after the earthquake. We have sampled landslide deposits and river sediment, and we have cored the Zipingpu reservoir, following a source-to-sink sampling strategy. We measured the POC compositions and grain size of the sediment samples, mapped landslide-mobilized POC using maps of landslide inventory and biomass, and tracked POC loading from landslides to the reservoir sediment to constrain the fate of the eroded OC. Constraints on carbon sources, fluxes and fate provide the foundation for constructing a post-earthquake POC budget. This work highlights the role of earthquakes in the mobilization and burial of POC, providing new insight into the mechanisms linking tectonics and the carbon cycle and building the understanding needed to interpret past seismicity from sedimentary archives.
NASA Astrophysics Data System (ADS)
Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.
2018-06-01
Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The shear-wave quality factor (Qβ(f)) values estimated for each station at different frequencies have been applied to eliminate bias in the determination of source parameters. The Qβ(f) values were estimated using the coda-wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation, Qβ(f) = (152.9 ± 7) f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in the corner-frequency, stress-drop and radiated-energy estimates due to the finite-bandwidth effect. The data for the region show shallow-focus earthquakes with low stress drops. Estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drops and apparent stresses can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluids at seismogenic depths evidently influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation and site corrections. The scaling could be improved further by integrating a larger dataset of microearthquakes and using a stable and robust approach.
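For reference, the Brune-model relations that connect corner frequency and seismic moment to source radius and static stress drop (the assumed shear-wave speed and the worked numbers are illustrative, not values from this study):

    import numpy as np

    def brune_source_params(m0, fc, beta=3500.0):
        """Brune-model source radius (m) and static stress drop (Pa) from seismic
        moment m0 (N m) and corner frequency fc (Hz); beta is the shear-wave speed."""
        r = 2.34 * beta / (2.0 * np.pi * fc)      # source radius
        stress_drop = 7.0 * m0 / (16.0 * r**3)    # static stress drop
        return r, stress_drop

    # Example: an Mw 4 event (m0 ~ 1.26e15 N m) with fc = 2 Hz gives
    # r ~ 650 m and a stress drop of roughly 2 MPa.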
Quantifying and Qualifying USGS ShakeMap Uncertainty
Wald, David J.; Lin, Kuo-Wan; Quitoriano, Vincent
2008-01-01
We describe algorithms for quantifying and qualifying uncertainties associated with USGS ShakeMap ground motions. The uncertainty values computed consist of latitude/longitude grid-based multiplicative factors that scale the standard deviation associated with the ground motion prediction equation (GMPE) used within the ShakeMap algorithm for estimating ground motions. The resulting grid-based 'uncertainty map' is essential for the evaluation of losses derived using ShakeMaps as the hazard input. For ShakeMap, ground motion uncertainty at any point is dominated by two main factors: (i) the influence of any proximal ground motion observations, and (ii) the uncertainty of estimating ground motions from the GMPE, most notably the elevated uncertainty due to an initial, unconstrained source rupture geometry. The uncertainty is highest for larger magnitude earthquakes when source finiteness is not yet constrained and, hence, the distance to rupture is also uncertain. In addition to a spatially dependent, quantitative assessment, many users may prefer a simple, qualitative grading for the entire ShakeMap. We developed a grading scale that allows one to quickly gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of the post-earthquake decision-making process or for qualitative assessments of archived or historical earthquake ShakeMaps. We describe an uncertainty letter grading ('A' through 'F', for high to poor quality, respectively) based on the uncertainty map. A middle-range ('C') grade corresponds to a ShakeMap for a moderate-magnitude earthquake suitably represented with a point-source location. Lower grades 'D' and 'F' are assigned for larger events (M>6) where finite-source dimensions are not yet constrained. The addition of ground motion observations (or observed macroseismic intensities) reduces uncertainties over data-constrained portions of the map. Higher grades ('A' and 'B') correspond to ShakeMaps with constrained fault dimensions and numerous stations, depending on the density of station/data coverage. Due to these dependencies, the letter grade can change with subsequent ShakeMap revisions if more data are added or when finite-faulting dimensions are added. We emphasize that the greatest uncertainties are associated with unconstrained source dimensions for large earthquakes, where the distance term in the GMPE is most uncertain; this uncertainty thus scales with magnitude (and consequently rupture dimension). Since this distance uncertainty produces potentially large uncertainties in ShakeMap ground-motion estimates, this factor dominates over compensating constraints for all but the densest station distributions.
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improving tsunami forecasting has been an urgent issue in Japan. The National Research Institute for Earth Science and Disaster Prevention is installing a cabled earthquake and tsunami observation network (S-net) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) spaced about 30 km apart. Along the Nankai trough, JAMSTEC has already installed and operates cabled networks of seismometers and pressure sensors (DONET and DONET2). These are the densest observation networks deployed on top of the source areas of great underthrust earthquakes anywhere in the world. Real-time tsunami forecasting has traditionally depended on estimates of earthquake parameters such as the epicenter, depth, and magnitude. Recently, tsunami forecast methods have been developed that estimate the tsunami source from tsunami waveforms observed at ocean-bottom pressure sensors. However, when many pressure sensors spaced 30 km apart are available on top of the source area, we do not need to estimate the tsunami source or the earthquake source to compute the tsunami; instead, we can initiate a tsunami simulation directly from the dense tsunami observations. Differences in tsunami height observed over a time interval at ocean-bottom pressure sensors separated by 30 km are used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from this estimated tsunami height distribution. In this paper, the method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki earthquake estimated by Gusman et al. (2012) from observed tsunami waveforms and from coseismic deformation observed by GPS and ocean-bottom sensors is used in this study. The ocean-surface deformation is computed from this source model and used as the initial condition of a tsunami simulation. Assuming that this computed tsunami is the real tsunami observed at the ocean-bottom sensors, a new tsunami simulation is carried out using the method described above. In the station distribution used (stations separated by 15 arc-minutes, about 30 km), the 'observed' tsunami waveforms were those computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The near-field tsunami inundation forecast method (Gusman et al., 2014) was then used to estimate the tsunami inundation along the Sanriku coast. The result shows that the observed tsunami inundation is well explained by the estimated inundation, and that the inundation estimate is available about 10 minutes after the origin time of the earthquake. The new method developed in this paper is therefore very effective for real-time tsunami forecasting.
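The key shortcut is that the simulation is initialized directly from offshore observations; a trivial sketch of that initialization step (1-D, with hypothetical sensor positions and seawater properties):

    import numpy as np

    def pressure_to_height(p_pa, rho=1030.0, g=9.81):
        """Convert an ocean-bottom pressure anomaly (Pa) to an equivalent
        sea-surface height anomaly (m) for long waves."""
        return p_pa / (rho * g)

    def initial_sea_surface(sensor_x, sensor_eta, grid_x):
        """Interpolate tsunami heights observed at dense offshore sensors onto the
        model grid, to be used directly as the simulation's initial condition."""
        return np.interp(grid_x, sensor_x, sensor_eta)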
NASA Astrophysics Data System (ADS)
Patlan, E.; Velasco, A.; Konter, J. G.
2010-12-01
The San Miguel volcano lies near the city of San Miguel, El Salvador (13.43°N, 88.26°W). San Miguel, an active stratovolcano, presents a significant natural hazard for the city of San Miguel. In general, the internal state and activity of a volcano remain important components of understanding volcanic hazard. The main approach to addressing volcanic hazards and processes is the analysis of data collected from deployed seismic sensors that record ground motion. Six UTEP seismic stations were deployed around San Miguel volcano from 2007 to 2008 to define the magma chamber and assess the seismic and volcanic hazard. We utilize these data to develop images of the Earth structure beneath the volcano, to study the volcanic processes by identifying different sources, and to investigate the role of earthquakes and faults in controlling the volcanic processes. We initially locate events using automated routines and focus on analyzing local events. We then relocate each seismic event by hand-picking P-wave arrivals, and later refine these picks using waveform cross-correlation. Using a double-difference earthquake location algorithm (HypoDD), we identify a set of earthquakes that align vertically beneath the edifice of the volcano, suggesting that we have identified a magma conduit feeding the volcano. We also apply a double-difference earthquake tomography approach (tomoDD) to investigate the volcano's plumbing system. Our preliminary results show the extent of the magma chamber, which also aligns with some horizontal seismicity. Overall, this volcano is very active and presents a significant hazard to the region.
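The double-difference idea can be summarized compactly: for each pair of nearby events observed at a common station, the datum is the difference between observed and predicted differential travel times, and HypoDD inverts many such data for relative locations. A minimal sketch with hypothetical inputs:

    import numpy as np

    def double_difference_residuals(tt_obs, tt_calc, pairs):
        """Double-difference residuals for event pairs observed at a common station.
        tt_obs, tt_calc : arrays of observed and predicted travel times, one per event
        pairs           : list of (i, j) index pairs of nearby events
        """
        return np.array([(tt_obs[i] - tt_obs[j]) - (tt_calc[i] - tt_calc[j])
                         for i, j in pairs])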
Structure-specific scalar intensity measures for near-source and ordinary earthquake ground motions
Luco, N.; Cornell, C.A.
2007-01-01
Introduced in this paper are several alternative ground-motion intensity measures (IMs) that are intended for use in assessing the seismic performance of a structure at a site susceptible to near-source and/or ordinary ground motions. A comparison of such IMs is facilitated by defining the "efficiency" and "sufficiency" of an IM, both of which are criteria necessary for ensuring the accuracy of the structural performance assessment. The efficiency and sufficiency of each alternative IM, which are quantified via (i) nonlinear dynamic analyses of the structure under a suite of earthquake records and (ii) linear regression analysis, are demonstrated for the drift response of three different moderate- to long-period buildings subjected to suites of ordinary and of near-source earthquake records. One of the alternative IMs in particular is found to be relatively efficient and sufficient for the range of buildings considered and for both the near-source and ordinary ground motions. © 2007, Earthquake Engineering Research Institute.
Database of potential sources for earthquakes larger than magnitude 6 in Northern California
1996-01-01
The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rates, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.
QuakeUp: An advanced tool for a network-based Earthquake Early Warning system
NASA Astrophysics Data System (ADS)
Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo
2017-04-01
Currently developed and operational regional earthquake early warning systems are grounded on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method that allows an alert to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently spun off from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanded P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking-intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the peak ground velocity (PGV) and instrumental intensity (I_MM) and by mapping the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Within times of the order of ten seconds from the earthquake origin, the information about the area where moderate to strong ground shaking is expected to occur can be sent to inner and outer sites, allowing the activation of emergency measures to protect people, secure industrial facilities and optimize site resilience after the disaster. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and for the spatial variability of strong ground motion related to crustal wave propagation and site amplification. In QuakeUp, the P-wave parameters are continuously measured, using progressively expanded P-wave time windows, providing evolutionary and reliable estimates of the ground shaking distribution, especially in the case of very large events. Furthermore, to minimize S-wave contamination of the P-wave signal portion, an efficient algorithm for the automatic detection of the S-wave arrival time, based on real-time polarization analysis of the three-component seismogram, has been included. The final output of QuakeUp will be an automatic alert message transmitted to sites to be secured during the earthquake emergency.
The message contains all relevant information about the expected potential damage at the site and the time available for security actions (lead time) after the warning. A global view of the system performance during and after the event (in play-back mode) is provided through an end-user visual display, where the most relevant pieces of information are displayed and updated as soon as new data become available. The QuakeUp software platform is essentially aimed at improving the reliability and accuracy of the parameter estimates, minimizing the uncertainties in the real-time estimations without losing the essential requirements of speed and robustness needed to activate rapid emergency actions.
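A compact sketch of the two early-warning observables named above (the window length is an illustrative assumption; disp and vel are co-registered displacement and velocity traces starting at the P-wave arrival):

    import numpy as np

    def tau_c_pd(disp, vel, dt, window_s=3.0):
        """Characteristic period tau_c and peak displacement Pd measured over the
        first few seconds of the P wave."""
        n = int(window_s / dt)
        u, v = disp[:n], vel[:n]
        tau_c = 2.0 * np.pi * np.sqrt(np.trapz(u**2, dx=dt) / np.trapz(v**2, dx=dt))
        return tau_c, np.max(np.abs(u))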
NASA Astrophysics Data System (ADS)
Bossu, R.; Mazet-Roux, G.; Roussel, F.; Frobert, L.
2011-12-01
Rapid characterisation of earthquake effects is essential for a timely and appropriate response in favour of victims and of eyewitnesses. In the case of damaging earthquakes, any field observations that can fill the information gap characterising their immediate aftermath can contribute to more efficient rescue operations. This paper presents the latest developments of a method called "flash-sourcing" that addresses these issues. It relies on eyewitnesses, the first people informed of and concerned by an earthquake's occurrence. More precisely, their use of the EMSC earthquake information website (www.emsc-csem.org) is analysed in real time to map the area where the earthquake was felt and to identify, at least under certain circumstances, zones of widespread damage. The approach is based on the natural and immediate convergence of eyewitnesses on the website: they rush to the Internet to investigate the cause of the shaking they have just felt, causing our traffic to surge. The area where an earthquake was felt is mapped simply by locating the Internet Protocol (IP) addresses active during these traffic surges. In addition, the presence of eyewitnesses browsing our website within minutes of an earthquake excludes the possibility of widespread damage in the localities they connect from: in case of severe damage, the networks would be down. The validity of the information derived from this clickstream analysis is confirmed by comparison with EMS98 macroseismic maps obtained from online questionnaires. The name of the approach, "flash-sourcing", combines "flash crowd" and "crowdsourcing" to reflect the rapidity of the data collection from the public. For computer scientists, a flash crowd is a traffic surge on a website. Crowdsourcing means work being done by a "crowd" of people; it also characterises Internet and mobile applications that collect information from the public, such as online macroseismic questionnaires. Like crowdsourcing techniques, flash-sourcing is a crowd-to-agency system, but unlike them it is not based on declarative information (e.g. answers to a questionnaire) but on implicit data, namely the clickstream observed on our website. We first present the main improvements of the method: improved detection of traffic surges and a way to instantly map areas affected by severe damage or network disruption. The second part describes how the derived information improves and speeds up public earthquake information and, beyond seismology, what it can teach us about public behaviour when facing an earthquake. Finally, the discussion focuses on future evolutions and how flash-sourcing could ultimately improve earthquake response.
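A toy detector for such traffic surges (the window length and threshold are arbitrary; a real system would also have to handle diurnal cycles, bot traffic and sparse baselines):

    import numpy as np

    def detect_surge(hits_per_min, window=60, nsigma=5.0):
        """Flag minutes in which website hit counts exceed the recent baseline mean
        by nsigma standard deviations."""
        hits = np.asarray(hits_per_min, dtype=float)
        flags = np.zeros(len(hits), dtype=bool)
        for i in range(window, len(hits)):
            base = hits[i - window:i]
            flags[i] = hits[i] > base.mean() + nsigma * base.std()
        return flags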
Connecting slow earthquakes to huge earthquakes.
Obara, Kazushige; Kato, Aitaro
2016-07-15
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.
A strain behavior before and after the 2009 Suruga-Bay earthquake (M6.5) in Tokai, Japan
NASA Astrophysics Data System (ADS)
Takanami, T.; Hirata, N.; Kitagawa, G.; Kamigaichi, O.; Linde, A. T.; Sacks, S. I.
2012-12-01
On 11 August 2009 an intraslab earthquake (M6.5) struck the Tokai area. The largest observed intensity was 6-lower on the JMA scale, and the earthquake was felt over a wide area including the Kanto and Koshin'etsu regions. Tsunamis were observed at and around Suruga Bay. In the Tokai area, the Japan Meteorological Agency (JMA) continuously monitors strain data through real-time automated processing of the Tokai network. According to JMA, this event is not connected to the anticipated Tokai earthquake (M8), for acceptable reasons: for instance, it is an intraslab earthquake within the Philippine Sea plate, whereas the anticipated earthquake is a plate-boundary earthquake on the upper side of the Philippine Sea plate. We consider it an appropriate earthquake for validating the Tokai network, even though its characteristics differ from those of the anticipated earthquake. Here we investigate the strain behavior before and after the 2009 Suruga Bay earthquake, which occurred in the fault zone of the anticipated Tokai earthquake. Indeed, the Tokai network of strainmeters has been monitoring short-term slow slip events (SSEs) synchronized with nearby low-frequency earthquakes or tremors since 2005 (Kobayashi et al., 2006). However, the Earth's surface is always under the continuous influence of a variety of natural forces such as earthquakes, waves, wind, tides, air pressure and precipitation, as well as a variety of human-induced sources, all of which create noise when monitoring geodetic strain. Eliminating these noise inputs from the raw strain data requires proper statistical modeling for automatic processing of geodetic strain data. It is desirable to apply the state space method to the noisy Tokai strain data in order to detect precursors of the anticipated Tokai earthquake. The method is based on the general state space method and recursive filtering and smoothing algorithms (Kitagawa and Matsumoto, 1996). The first attempt to apply this method to actual strain data was made using data from the 2003 Tokachi-oki earthquake (M8.0) recorded by the Sacks-Evertson strainmeter, which has been operating since 1982 at Urakawa Seismological Observatory (KMU) of Hokkaido University in the southern part of the Hidaka Mountains (Takanami et al., 2009). KMU is located 105 km NW of the epicenter of the 2003 Tokachi-oki earthquake. After the earthquake, the data showed a clear episode of contraction for 4 days followed by expansion for 23 days. These signals correlate with increased aftershock seismicity for M≥4 events. The strain changes, together with surface displacements detected by the GPS network, are indicative of propagation of slow slip at depth (e.g. Geographical Survey Institute, 2004). We here review the computational approach of the state space method and the results of its application to the strain data from the 2009 earthquake (M6.5) that occurred off Sagami in the Tokai area. Interestingly, for the 2011 Tohoku earthquake off the Pacific coast, no pre-slip was detected by land-based observations even though its magnitude was M9. In order to detect the nucleation of such an earthquake occurring far offshore, high-precision strain data are necessary but were not available.
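The state-space decomposition of noisy strain records can be illustrated with a minimal local-level (random-walk-plus-noise) Kalman filter; this is a simplified stand-in for the Kitagawa-type general state-space models cited above, and the variances used are made-up illustrative values.

import numpy as np

def local_level_filter(y, q=1e-4, r=1.0):
    """One-dimensional Kalman filter for a local-level model
        x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (slowly varying strain trend)
        y_k = x_k + v_k,      v_k ~ N(0, r)   (observation noise: tides, weather, ...)
    Returns the filtered trend. q and r are illustrative; in practice they are
    estimated (e.g. by maximum likelihood) as in general state-space modelling."""
    x, p = y[0], 1.0
    trend = np.empty(len(y), dtype=float)
    for k, obs in enumerate(y):
        # predict
        p = p + q
        # update
        gain = p / (p + r)
        x = x + gain * (obs - x)
        p = (1.0 - gain) * p
        trend[k] = x
    return trend

# Residuals y - trend highlight transient signals such as slow-slip episodes.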
NASA Astrophysics Data System (ADS)
Wang, Ruijia; Gu, Yu Jeffrey; Schultz, Ryan; Zhang, Miao; Kim, Ahyi
2017-08-01
On 2016 January 12, an intraplate earthquake with an initial reported local magnitude (ML) of 4.8 shook the town of Fox Creek, Alberta. While there were no reported damages, this earthquake was widely felt by the local residents and suspected to be induced by the nearby hydraulic-fracturing (HF) operations. In this study, we determine the earthquake source parameters using moment tensor inversions, and then detect and locate the associated swarm using a waveform cross-correlation based method. The broad-band seismic recordings from regional arrays suggest a moment magnitude (M) 4.1 for this event, which is the largest in Alberta in the past decade. Similar to other recent M ∼ 3 earthquakes near Fox Creek, the 2016 January 12 earthquake exhibits a dominant strike-slip (strike = 184°) mechanism with limited non-double-couple components (∼22 per cent). This resolved focal mechanism, which is also supported by forward modelling and P-wave first motion analysis, indicates an NE-SW oriented compressional axis consistent with the maximum compressive horizontal stress orientations delineated from borehole breakouts. Further detection analysis on industry-contributed recordings unveils 1108 smaller events within a 3 km radius of the epicentre of the main event, showing a close spatial-temporal relation to a nearby HF well. The majority of the detected events are located above the basement, comparable to the injection depth (3.5 km) in the Duvernay shale formation. The spatial distribution of this earthquake cluster further suggests that (1) the source of the sequence is an N-S-striking fault system and (2) these earthquakes were induced by an HF well close to but different from the well that triggered a previous (January 2015) earthquake swarm. Reactivation of pre-existing, N-S oriented faults analogous to the Pine Creek fault zone, which was reported by earlier studies of active source seismic and aeromagnetic data, is likely responsible for the occurrence of the January 2016 earthquake swarm and other recent events in the Crooked Lake area.
Choy, G.L.; Bowman, J.R.
1990-01-01
On January 22, 1988, three large intraplate earthquakes (with MS 6.3, 6.4 and 6.7) occurred within a 12-hour period near Tennant Creek, Australia. Broadband displacement and velocity records of body waves from teleseismically recorded data are analyzed to determine source mechanisms, depths, and complexity of rupture of each of the three main shocks. Hypocenters of an additional 150 foreshocks and aftershocks constrained by local arrival time data and field observations of surface rupture are used to complement the source characteristics of the main shocks. The interpretation of the combined data sets suggests that the overall rupture process involved unusually complicated stress release. Rupture characteristics suggest that substantial slow slip occurred on each of the three fault interfaces that was not accompanied by major energy release. Variation of focal depth and the strong increase of moment and radiated energy with each main shock imply that lateral variations of strength were more important than vertical gradients of shear stress in controlling the progression of rupture.
NASA Astrophysics Data System (ADS)
Buforn, E.; Pro, C.; del Fresno, C.; Cantavella, J.; Sanz de Galdeano, C.; Udias, A.
2016-12-01
We have studied the rupture process of the 25 January 2016 earthquake (Mw = 6.4) that occurred in the Alboran Sea, south of Spain. The main shock, foreshock and largest aftershocks (Mw = 4.5) have been relocated using the NonLinLoc algorithm. The results show a NE-SW distribution of foci at shallow depth (less than 15 km). For the main shock, the focal mechanism has been obtained from slip inversion over the rupture plane using teleseismic data, corresponding to left-lateral strike-slip motion. The rupture starts at 7 km depth and propagates upward with a complex source time function. In order to obtain a more detailed source time function and to validate the results obtained from teleseismic data, we have used the Empirical Green's Function (EGF) method at regional distances. Finally, the results of the directivity effect from teleseismic Rayleigh waves and from the EGF method are consistent with a rupture propagation to the NE. These results are interpreted in terms of the main geological features of the region.
NASA Astrophysics Data System (ADS)
Green, David N.; Neuberg, Jürgen
2006-05-01
Low-frequency volcanic earthquakes are indicators of magma transport and activity within shallow conduit systems. At a number of volcanoes, these events exhibit a high degree of waveform similarity providing a criterion for classification. Using cross-correlation techniques to quantify the degree of similarity, we develop a method to sort events into families containing comparable waveforms. Events within a family have been triggered within one small source volume from which the seismic wave has then travelled along an identical path to the receiver. This method was applied to a series of 16 low-frequency earthquake swarms, well correlated with cyclic deformation recorded by tiltmeters, at Soufrière Hills Volcano, Montserrat, in June 1997. Nine waveform groups were identified containing more than 45 events each. The families are repeated across swarms with only small changes in waveform, indicating that the seismic source location is stable with time. The low-frequency seismic swarms begin prior to the point at which inflation starts to decelerate, suggesting that the seismicity indicates or even initiates a depressurisation process. A major dome collapse occurred within the time window considered, removing the top 100 m of the dome. This event caused activity within some families to pause for several cycles before reappearing. This shows that the collapse did not permanently disrupt the source mechanism or the path of the seismic waves.
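A minimal sketch of the cross-correlation classification step described above: events whose normalized cross-correlation with a family's reference waveform exceeds a threshold are grouped together. The greedy single-linkage scheme and the 0.8 threshold are illustrative assumptions, not the exact procedure of the study.

import numpy as np

def max_norm_xcorr(a, b):
    """Maximum normalized cross-correlation between two equal-length waveforms."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))

def build_families(waveforms, threshold=0.8):
    """Greedy single-linkage grouping: an event joins the first family whose
    reference waveform it matches above `threshold`; otherwise it starts a new family."""
    families = []  # list of (reference_waveform, [member indices])
    for i, w in enumerate(waveforms):
        for ref, members in families:
            if max_norm_xcorr(ref, w) >= threshold:
                members.append(i)
                break
        else:
            families.append((w, [i]))
    return families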
Radiation efficiency of earthquake sources at different hierarchical levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharyan, G. G., E-mail: gevorgkidg@mail.ru; Moscow Institute of Physics and Technology
Factors such as earthquake size and mechanism define common trends in the variation of radiation efficiency. The macroscopic parameter that controls the efficiency of a seismic source is the stiffness of the fault or fracture. The way this parameter scales with size defines several hierarchical levels, within which earthquake characteristics obey different laws. Small variations in the physical and mechanical properties of the fault's principal slip zone can lead to dramatic differences both in the amplitude of released stress and in the amount of radiated energy.
Hough, S.E.; Kanamori, H.
2002-01-01
We analyze the source properties of a sequence of triggered earthquakes that occurred near the Salton Sea in southern California in the immediate aftermath of the M 7.1 Hector Mine earthquake of 16 October 1999. The sequence produced a number of early events that were not initially located by the regional network, including two moderate earthquakes: the first within 30 sec of the P-wave arrival and a second approximately 10 minutes after the mainshock. We use available amplitude and waveform data from these events to estimate magnitudes to be approximately 4.7 and 4.4, respectively, and to obtain crude estimates of their locations. The sequence of small events following the initial M 4.7 earthquake is clustered and suggestive of a local aftershock sequence. Using both broadband TriNet data and analog data from the Southern California Seismic Network (SCSN), we also investigate the spectral characteristics of the M 4.4 event and other triggered earthquakes using empirical Green's function (EGF) analysis. We find that the source spectra of the events are consistent with expectations for tectonic (brittle shear failure) earthquakes, and infer stress drop values of 0.1 to 6 MPa for six M 2.1 to M 4.4 events. The estimated stress drop values are within the range observed for tectonic earthquakes elsewhere. They are relatively low compared to typically observed stress drop values, which is consistent with expectations for faulting in an extensional, high heat flow regime. The results therefore suggest that, at least in this case, triggered earthquakes are associated with a brittle shear failure mechanism. This further suggests that triggered earthquakes may tend to occur in geothermal-volcanic regions because shear failure occurs at, and can be triggered by, relatively low stresses in extensional regimes.
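The kind of stress-drop estimate quoted above can be illustrated with the circular (Brune-type) source relations, converting a corner frequency from EGF spectral analysis into a source radius and a static stress drop. The numbers below are illustrative, not the study's measurements.

import numpy as np

def brune_stress_drop(m0_nm, fc_hz, beta_m_s=3500.0, k=0.37):
    """Static stress drop (Pa) for a circular crack:
       a = k * beta / fc  (source radius),  delta_sigma = 7/16 * M0 / a**3.
    k = 0.37 corresponds to the Brune (1970) model; beta is the shear-wave speed."""
    a = k * beta_m_s / fc_hz
    return 7.0 / 16.0 * m0_nm / a**3

# Illustrative example: an M ~4.4 event (M0 ~ 5e15 N m) with a 2 Hz corner frequency
print(brune_stress_drop(5e15, 2.0) / 1e6, "MPa")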
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg with the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Earthquake-induced ground failures in Italy from a reviewed database
NASA Astrophysics Data System (ADS)
Martino, S.; Prestininzi, A.; Romeo, R. W.
2014-04-01
A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground changes triggered by earthquakes of Mercalli epicentral intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (http://www.ceri.uniroma1.it/cn/gis.jsp ) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the Sapienza University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.
NASA Astrophysics Data System (ADS)
Meng, Lingsen; Zhang, Ailin; Yagi, Yuji
2016-01-01
The 2015 Mw 7.8 Nepal-Gorkha earthquake, with casualties of over 9000 people, was the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process was imaged by teleseismic back projections (BP) of seismograms recorded by three large regional networks in Australia, North America, and Europe. The source images of all three arrays reveal a unilateral eastward rupture; however, the propagation directions and speeds differ significantly between the arrays. To understand the spatial uncertainties of the BP analyses, we analyze four moderate-size aftershocks recorded by all three arrays, processed exactly as the main shock. The apparent source locations inferred from BPs are systematically biased from the catalog locations, as a result of a slowness error caused by three-dimensional Earth structure. We introduce a physics-based slowness correction that successfully mitigates the source location discrepancies among the arrays. Our calibrated BPs are found to be mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s, localized in a relatively narrow and deep swath along the downdip edge of the locked Himalayan thrust zone. We find that the 2015 Gorkha earthquake was a localized rupture that failed to break the entire Himalayan décollement to the surface, which can be regarded as an intermediate event during the interseismic period of larger Himalayan ruptures that break the whole seismogenic zone width. Thus, our physics-based slowness correction is an important technical improvement of BP, mitigating spatial uncertainties and improving the robustness of single and multiarray studies.
Discrimination between pre-seismic electromagnetic anomalies and solar activity effects
NASA Astrophysics Data System (ADS)
Koulouras, G.; Balasis, G.; Kiourktsidis, I.; Nannos, E.; Kontakos, K.; Stonham, J.; Ruzhin, Y.; Eftaxias, K.; Cavouras, D.; Nomicos, C.
2009-04-01
Laboratory studies suggest that electromagnetic emissions in a wide frequency spectrum ranging from kilohertz (kHz) to very high megahertz (MHz) frequencies are produced by the opening of microcracks, with the MHz radiation appearing earlier than the kHz radiation. Earthquakes are large-scale fracture phenomena in the Earth's heterogeneous crust. Thus, the radiated kHz-MHz electromagnetic emissions are detectable not only in the laboratory but also at a geological scale. Clear MHz-to-kHz electromagnetic anomalies have been systematically detected over periods ranging from a few days to a few hours prior to recent destructive earthquakes in Greece. We should bear in mind that whether electromagnetic precursors to earthquakes exist is an important question not only for earthquake prediction but mainly for understanding the physical processes of earthquake generation. An open question in this field of research is the classification of a detected electromagnetic anomaly as a pre-seismic signal associated with earthquake occurrence. Indeed, electromagnetic fluctuations in the frequency range of MHz are known to be related to a few sources, including atmospheric noise (due to lightning), man-made composite noise, solar-terrestrial noise (resulting from the Sun-solar wind-magnetosphere-ionosphere-Earth's surface chain) or cosmic noise, and finally, the lithospheric effect, namely pre-seismic activity. We focus on this point in this paper. We suggest that if a combination of detected kHz and MHz electromagnetic anomalies satisfies the set of criteria presented herein, these anomalies could be considered as candidate precursory phenomena of an impending earthquake.
Discrimination between preseismic electromagnetic anomalies and solar activity effects
NASA Astrophysics Data System (ADS)
Koulouras, Gr; Balasis, G.; Kontakos, K.; Ruzhin, Y.; Avgoustis, G.; Kavouras, D.; Nomicos, C.
2009-04-01
Laboratory studies suggest that electromagnetic emissions in a wide frequency spectrum ranging from kHz to very high MHz frequencies are produced by the opening of microcracks, with the MHz radiation appearing earlier than the kHz radiation. Earthquakes are large-scale fracture phenomena in the Earth's heterogeneous crust. Thus, the radiated kHz-MHz electromagnetic emissions are detectable not only at the laboratory scale but also at the geological scale. Clear MHz-to-kHz electromagnetic anomalies have been systematically detected over periods ranging from a few days to a few hours prior to recent destructive earthquakes in Greece. We bear in mind that whether electromagnetic precursors to earthquakes exist is an important question not only for earthquake prediction but mainly for understanding the physical processes of earthquake generation. An open question in this field of research is the classification of a detected electromagnetic anomaly as a pre-seismic signal associated with earthquake occurrence. Indeed, electromagnetic fluctuations in the frequency range of MHz are known to be related to a few sources, i.e., they might be atmospheric noise (due to lightning), man-made composite noise, solar-terrestrial noise (resulting from the Sun-solar wind-magnetosphere-ionosphere-Earth's surface chain) or cosmic noise, and finally, the lithospheric effect, namely pre-seismic activity. We focus on this point. We suggest that if a combination of detected kHz and MHz electromagnetic anomalies satisfies the set of criteria presented herein, these anomalies could be considered as candidate precursory phenomena of an impending earthquake.
NASA Astrophysics Data System (ADS)
Miura, S.; Ohta, Y.; Ohzono, M.; Kita, S.; Iinuma, T.; Demachi, T.; Tachibana, K.; Nakayama, T.; Hirahara, S.; Suzuki, S.; Sato, T.; Uchida, N.; Hasegawa, A.; Umino, N.
2011-12-01
We propose a source fault model of the large M7.1 intraslab earthquake deduced from a dense GPS network. The coseismic displacements obtained by GPS data analysis clearly show the spatial pattern specific to intraslab earthquakes, not only in the horizontal components but also in the vertical ones. A rectangular fault with uniform slip was estimated by a non-linear inversion approach. The results indicate that the simple rectangular fault model can explain the overall features of the observations. The released moment is equivalent to Mw 7.17. The hypocenter depth of the main shock estimated by the Japan Meteorological Agency is slightly deeper than the neutral plane between the down-dip compression (DC) and down-dip extension (DE) stress zones of the double-planed seismic zone. This suggests that the neutral plane was deepened by the huge slip of the 2011 M9.0 Tohoku earthquake and that the rupture of the M7.1 thrust earthquake was initiated at that depth, although more investigations are required to confirm this idea. The estimated fault plane makes an angle of ~60 degrees with the surface of the subducting Pacific plate. This is consistent with the hypothesis that intraslab earthquakes are reactivations of preexisting hydrated weak zones formed during the bending of oceanic plates around outer-rise regions.
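The moment-to-magnitude conversion behind statements like "equivalent to Mw 7.17" follows the standard Hanks-Kanamori relation; a small sketch with purely illustrative fault dimensions (not the values estimated in the study):

import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979):
       Mw = (2/3) * (log10 M0 - 9.1), with M0 in N m."""
    return 2.0 / 3.0 * (math.log10(m0_newton_meters) - 9.1)

# A uniform-slip rectangular fault releases M0 = mu * slip * length * width.
# Illustrative numbers only:
mu, slip, length, width = 40e9, 1.5, 40e3, 20e3   # Pa, m, m, m
m0 = mu * slip * length * width
print(round(moment_magnitude(m0), 2))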
Foulger, G.R.; Julian, B.R.; Hill, D.P.; Pitt, A.M.; Malin, P.E.; Shalev, E.
2004-01-01
Most of 26 small (0.4 ≤ M ≤ 3.1) microearthquakes at Long Valley caldera in mid-1997, analyzed using data from a dense temporary network of 69 digital three-component seismometers, have significantly non-double-couple focal mechanisms, inconsistent with simple shear faulting. We determined their mechanisms by inverting P- and S-wave polarities and amplitude ratios using linear-programming methods, and tracing rays through a three-dimensional Earth model derived using tomography. More than 80% of the mechanisms have positive (volume increase) isotropic components and most have compensated linear-vector dipole components with outward-directed major dipoles. The simplest interpretation of these mechanisms is combined shear and extensional faulting with a volume-compensating process, such as rapid flow of water, steam, or CO2 into opening tensile cracks. Source orientations of earthquakes in the south moat suggest extensional faulting on ESE-striking subvertical planes, an orientation consistent with planes defined by earthquake hypocenters. The focal mechanisms show that clearly defined hypocentral planes in different locations result from different source processes. One such plane in the eastern south moat is consistent with extensional faulting, while one near Casa Diablo Hot Springs reflects en echelon right-lateral shear faulting. Source orientations at Mammoth Mountain vary systematically with location, indicating that the volcano influences the local stress field. Events in a 'spasmodic burst' at Mammoth Mountain have practically identical mechanisms that indicate nearly pure compensated tensile failure and high fluid mobility. Five earthquakes had mechanisms involving small volume decreases, but these may not be significant. No mechanisms have volumetric moment fractions larger than that of a force dipole, but the reason for this fact is unknown. Published by Elsevier B.V.
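To make the isotropic and compensated-linear-vector-dipole (CLVD) components mentioned above concrete, here is a minimal moment-tensor decomposition sketch; it uses one common textbook convention and is not necessarily the exact scheme of the study.

import numpy as np

def decompose_moment_tensor(m):
    """Split a symmetric 3x3 moment tensor into isotropic + deviatoric parts and
    return (isotropic moment, CLVD parameter epsilon)."""
    m = np.asarray(m, dtype=float)
    m_iso = np.trace(m) / 3.0                 # > 0 indicates a volume increase
    dev = m - m_iso * np.eye(3)
    eigs = np.linalg.eigvalsh(dev)
    smallest = eigs[np.argmin(np.abs(eigs))]
    largest = eigs[np.argmax(np.abs(eigs))]
    # One common convention: epsilon = 0 for a pure double couple, +/-0.5 for a pure CLVD
    eps = -smallest / abs(largest)
    return m_iso, eps

# Example: a crack-plus-shear source with a net volume increase
print(decompose_moment_tensor(np.diag([2.0, 1.0, -1.0])))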
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard for long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can potentially damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly in basins. Because the major cities along the Nankai Trough have developed on alluvial plains, it is therefore important to evaluate long-period ground motions, as well as strong motions and tsunami, for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering different combinations of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined from 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the system called GMS (Ground Motion Simulator), which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999). The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of this simulation should be interpreted with caution. Therefore, we consider periods longer than five seconds instead of two seconds for further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulation shows a large variation of the response spectra at a site. This large variation implies that the ground motion is very sensitive to the different scenarios, and it must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
Relating stress models of magma emplacement to volcano-tectonic earthquakes
NASA Astrophysics Data System (ADS)
Vargas-Bracamontes, D.; Neuberg, J.
2007-12-01
Among the various types of seismic signals linked to volcanic processes, volcano-tectonic earthquakes are probably the earliest precursors of volcanic eruptions. Understanding their relationship with magma emplacement can provide insight into the mechanisms of magma transport at depth and assist in the ultimate goal of forecasting eruptions. Volcano-tectonic events have been observed to occur on faults that experience increases in Coulomb stress as a result of magma intrusions. To simulate stress changes associated with magmatic injections, we test different models of volcanic sources in an elastic half-space. For each source model, we examine several aspects that influence the stress conditions of the magmatic system, such as the regional tectonic setting, the effect of varying the elastic parameters of the medium, the evolution of the magma with time, and the volume and rheology of the ascending magma.
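For reference, the Coulomb failure stress change that such studies resolve onto receiver faults is commonly written ΔCFS = Δτ + μ′Δσ_n; a tiny sketch, with an assumed effective friction coefficient that is only an illustrative choice:

def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault:
        dCFS = d_tau + mu_eff * d_sigma_n,
    where d_tau is the shear-stress change in the slip direction and d_sigma_n the
    normal-stress change (positive = unclamping); mu_eff = 0.4 is a commonly assumed
    illustrative value for the effective friction coefficient."""
    return d_tau + mu_eff * d_sigma_n

# A fault loaded in shear and simultaneously unclamped by an intrusion is brought
# closer to failure:
print(coulomb_stress_change(d_tau=0.1e6, d_sigma_n=0.05e6), "Pa")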
Implications on 1+1 D runup modeling due to time features of the earthquake source
NASA Astrophysics Data System (ADS)
Fuentes, M.; Riquelme, S.; Campos, J. A.
2017-12-01
The time characteristics of the seismic source are usually neglected in tsunami modeling because of the difference in the time scales of the two processes. Nonetheless, only a few analytical studies have attempted to explain separately the roles of the rise time and the rupture velocity. In this work, we extend an analytical 1+1D solution for the shoreline motion time series from the static case to the dynamic case by including both the rise time and the rupture velocity. Results show that the static case corresponds to the limiting case of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but the maximum run-up may be affected by very slow ruptures and long rise times. The analytical solution has been tested for the Nicaraguan tsunami earthquake, suggesting that the rupture was not slow enough to cause the wave amplification needed to explain the high run-up observations.
GPS source solution of the 2004 Parkfield earthquake.
Houlié, N; Dreger, D; Kim, A
2014-01-17
We compute a series of finite-source parameter inversions of the fault rupture of the 2004 Parkfield earthquake based on 1 Hz GPS records only. We confirm that some of the co-seismic slip at shallow depth (<5 km) constrained by InSAR data processing results from early post-seismic deformation. We also show 1) that if located very close to the rupture, a GPS receiver can saturate while it remains possible to estimate the ground velocity (~1.2 m/s) near the fault, 2) that GPS waveform inversions constrain the slip distribution at depth even when GPS monuments are not located directly above the ruptured areas, and 3) that the slip distribution at depth from our best models agrees with that recovered from strong-motion data. The 95th percentile of the slip amplitudes for rupture velocities ranging from 2 to 5 km/s is ~55 ± 6 cm.
GPS source solution of the 2004 Parkfield earthquake
Houlié, N.; Dreger, D.; Kim, A.
2014-01-01
We compute a series of finite-source parameter inversions of the fault rupture of the 2004 Parkfield earthquake based on 1 Hz GPS records only. We confirm that some of the co-seismic slip at shallow depth (<5 km) constrained by InSAR data processing results from early post-seismic deformation. We also show 1) that if located very close to the rupture, a GPS receiver can saturate while it remains possible to estimate the ground velocity (~1.2 m/s) near the fault, 2) that GPS waveform inversions constrain the slip distribution at depth even when GPS monuments are not located directly above the ruptured areas, and 3) that the slip distribution at depth from our best models agrees with that recovered from strong-motion data. The 95th percentile of the slip amplitudes for rupture velocities ranging from 2 to 5 km/s is ~55 ± 6 cm. PMID: 24434939
NASA Astrophysics Data System (ADS)
Anisya; Yoga Swara, Ganda
2017-12-01
Padang is one of the cities prone to earthquake and tsunami disasters because of its position at the meeting of two active plates, that is, a source of potentially powerful earthquakes and tsunamis. The central government and most offices are located in the red zone (vulnerable area), which also affects the evacuation of the population during an earthquake and tsunami disaster. In this study, the researchers developed a system for finding the nearest shelter using the best-first-search method. This method uses a heuristic function, combining the cost incurred so far with an estimated value based on travel time, path length and population density. To calculate path length, the researchers used the haversine formula. The values obtained from the calculation process are implemented in a web-based system. Several alternative paths and some of the closest shelters are displayed in the system.
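The haversine distance mentioned above can be sketched as follows; the heuristic weights in the second function are purely hypothetical placeholders, since the abstract does not give the actual weighting.

import math

def haversine_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Great-circle distance between two points given in decimal degrees,
    using the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r_earth_km * math.asin(math.sqrt(a))

# Illustrative use in a best-first search: candidate shelters are expanded in order
# of a heuristic combining distance with other factors (weights are placeholders).
def heuristic(dist_km, travel_time_min, density, w=(1.0, 0.5, 0.1)):
    return w[0] * dist_km + w[1] * travel_time_min + w[2] * density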
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, the combination of teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is substantially improved.
Energy Partition and Variability of Earthquakes
NASA Astrophysics Data System (ADS)
Kanamori, H.
2003-12-01
During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (E_R), fracture energy (E_G), and thermal energy (E_H). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only E_R and E_G control the dynamics, and E_H can be treated separately to discuss the thermal characteristics of an earthquake. In general, if E_G/E_R is small, the event is "brittle"; if E_G/E_R is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If E_H is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for rupture initiation, evolution, and the excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only E_R and the lower bound of ΔW, ΔW_0, and estimation of the other energies involves many assumptions. E_R: Although E_R can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with M_W>7 and M_W>4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW_0: To estimate ΔW_0, estimation of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, Δσ_S, and ΔW_0 can be estimated. E_G: Seismologically, E_G is the energy mechanically dissipated during faulting. In the context of the slip-weakening model, E_G can be estimated from ΔW_0 and E_R. Alternatively, E_G can be estimated from laboratory data on the surface energy, the grain size and the total volume of newly formed fault gouge. This method suggests that, for crustal earthquakes with M_W>7, E_G/E_R is very small, less than 0.2 even in extreme cases. This is consistent with the E_G estimated with seismological methods and with the fast rupture speeds during most large earthquakes. For shallow subduction-zone earthquakes, E_G/E_R varies substantially depending on the tectonic environment. E_H: Direct estimation of E_H is difficult. However, even with modest friction, E_H can be very large, enough to melt or even dissociate a significant amount of material near the slip zone for large events with large slip, and the associated thermal effects may have significant effects on fault dynamics. The energy partition varies significantly for different types of earthquakes, e.g. large earthquakes on mature faults, large earthquakes on faults with low slip rates, subduction-zone earthquakes, deep-focus earthquakes, etc.; this variability manifests itself in the differing evolution of the seismic slip pattern. The different behaviors will be illustrated using examples of large earthquakes, including the 2001 Kunlun, the 1998 Balleny Islands, the 1994 Bolivia, the 2001 India, the 1999 Chi-Chi, and the 2002 Denali earthquakes.
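The bookkeeping in this abstract can be summarized compactly; this is a standard formulation, written here under the additional assumption that the final stress equals the frictional stress (no overshoot or undershoot):

\Delta W = E_R + E_G + E_H, \qquad
\Delta W_0 = \tfrac{1}{2}\,\Delta\sigma_S\,\bar{D}\,S \simeq E_R + E_G, \qquad
\eta_R = \frac{E_R}{E_R + E_G},

where \bar{D} is the average slip, S the rupture area, and the radiation efficiency \eta_R is close to 1 for "brittle" events (small E_G/E_R) and small for "quasi-static" or slow events (large E_G/E_R).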
NASA Astrophysics Data System (ADS)
Cruz, H.; Furumura, T.; Chavez-Garcia, F. J.
2002-12-01
The estimation of scenarios of the strong ground motions caused by future great earthquakes is an important problem in strong motion seismology. This was highlighted by the great 1985 Michoacan earthquake, which caused great damage in Mexico City, 300 km away from the epicenter. Since the seismic wavefield is shaped by source, path and site effects, the pattern of strong-motion damage from different types of earthquakes should differ significantly. In this study, scenarios for intermediate-depth normal-faulting, shallow interplate thrust-faulting, and crustal earthquakes have been estimated using a hybrid simulation technique. The character of the seismic wavefield propagating from the source to Mexico City for each earthquake was first calculated using the pseudospectral method for 2D SH waves. The site amplifications in the shallow structure of Mexico City were then calculated using multiple SH-wave reverberation theory. The scenarios of maximum ground motion for both inslab and interplate earthquakes obtained by the simulation show good agreement with the observations. This indicates the effectiveness of the hybrid simulation approach for investigating strong-motion damage from future earthquakes.
Studies of earthquakes stress drops, seismic scattering, and dynamic triggering in North America
NASA Astrophysics Data System (ADS)
Escudero Ayala, Christian Rene
I use the Relative Source Time Function (RSTF) method to determine the source properties of earthquakes within southeastern Alaska and northwestern Canada in the first part of the project, and of earthquakes within the Denali fault zone in the second part. I deconvolve a small event's P-arrival signal from that of a larger event by the following procedure: select the arrivals with a tapered cosine window, apply a fast Fourier transform to obtain the spectrum, apply the water-level deconvolution technique, and bandpass filter before inverse transforming the result to obtain the RSTF. I compare the source processes of earthquakes within the area to determine stress drop differences and their relation to the tectonic setting of the earthquake locations. The results are consistent with previous studies: stress drop is independent of moment, implying self-similarity; stress drop correlates with tectonic regime; stress drop is independent of depth; stress drop depends on focal mechanism, with strike-slip events presenting larger stress drops; and stress drop decreases as a function of time. I determine seismic wave attenuation in the central-western United States using coda waves. I select approximately 40 moderate earthquakes (magnitude between 5.5 and 6.5) located along the California-Baja California, California-Nevada, Eastern Idaho, Gulf of California, Hebgen Lake, Montana, Nevada, New Mexico, off coast of Northern California, off coast of Oregon, southern California, southern Illinois, Vancouver Island, Washington, and Wyoming regions. These events were recorded by the EarthScope transportable array (TA) network from 2005 to 2009. We obtain the data from the Incorporated Research Institutions for Seismology (IRIS). In this study we implement a method based on the assumption that coda waves are single backscattered waves from randomly distributed heterogeneities to calculate the coda Q. The frequencies studied lie between 1 and 15 Hz. The scattering attenuation is calculated for frequency bands centered at 1.5, 3, 5, 7.5, 10.5, and 13.5 Hz. Coda Q correlates strongly with the tectonic and geologic setting, as well as with the crustal thickness. I analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify transient stress effects at teleseismic distances. I use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw ≥ 7) earthquakes, I first identify local earthquakes that occurred before and after the mainshocks. I then group the local earthquakes within a cluster radius between 75 and 200 km. I obtain statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as cluster-mainshock azimuth, mainshock focal mechanism, and the location of the local earthquake clusters within the MASZ. Based on the lateral variations of dip along the subducted oceanic plate, I divide the Mexican subduction zone into four segments. I then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify increases, decreases, or neither in the local seismicity associated with the passage of surface waves from distant large earthquakes. I identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in seismicity in some cases. I find no dependence of seismicity changes on mainshock focal mechanism.
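The deconvolution step described above (taper, FFT, water-level regularisation, band-pass, inverse FFT) can be sketched as follows. The taper fraction, water level and band are illustrative choices rather than the thesis's actual parameters, and for simplicity the band-pass is applied after the inverse transform instead of in the frequency domain.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.signal.windows import tukey

def rstf_water_level(main, egf, dt, water=0.01, band=(0.05, 2.0)):
    """Relative source time function by water-level spectral division of a large
    event's P arrival (`main`) by a smaller, co-located event (`egf`); both are
    equal-length numpy arrays sampled at interval dt (s)."""
    n = len(main)
    w = tukey(n, alpha=0.1)                           # tapered cosine window
    M = np.fft.rfft(main * w)
    E = np.fft.rfft(egf * w)
    denom = np.abs(E) ** 2
    denom = np.maximum(denom, water * denom.max())    # water-level regularisation
    rstf = np.fft.irfft(M * np.conj(E) / denom, n)
    b, a = butter(4, [band[0] * 2 * dt, band[1] * 2 * dt], btype="band")
    return filtfilt(b, a, rstf)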
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Source characterization and dynamic fault modeling of induced seismicity
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Young, R. P.
2017-12-01
In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal for assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve the spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
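For reference, the rate-and-state framework invoked above is commonly written in the Dieterich-type (aging-law) form below; this is the standard textbook formulation, not necessarily the exact variant used in the study:

\tau = \sigma \left[\mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c}\right],
\qquad
\dot{\theta} = 1 - \frac{V\theta}{D_c},

where V is the slip rate, \theta the state variable, D_c the characteristic slip distance, and \mu_0 the reference friction coefficient at slip rate V_0; the patch is velocity weakening where a-b<0 and the surrounding region velocity strengthening where a-b>0.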
NASA Astrophysics Data System (ADS)
Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati
2018-05-01
The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), the seismic moment (M_0 = 1.12×10^17 N m, M_w 5.3), the focal mechanism (φ = 280°, δ = 14°, λ = 84°), the source radius (a = 1.3 km), and the static stress drop (Δσ_s ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500f^0.9, κ = 0.04 s, and f_max = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ M_w ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
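The quoted static stress drop follows from the circular-crack relation Δσ_s = (7/16) M_0 / a^3; a quick numerical check with the values given in the abstract:

# Circular-crack static stress drop, using the abstract's values
m0 = 1.12e17          # seismic moment, N m
a = 1.3e3             # source radius, m
delta_sigma = 7.0 / 16.0 * m0 / a**3
print(delta_sigma / 1e6, "MPa")   # ~22 MPa, consistent with the quoted value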
NASA Astrophysics Data System (ADS)
Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.
2012-12-01
We estimated source models of the small-amplitude tsunamis associated with M 7-class earthquakes in the rupture area of the 2011 Tohoku-Oki earthquake using near-field tsunami records from ocean bottom pressure gauges (OBPs). The largest (Mw = 7.3) foreshock of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was the coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as the source, the OBP records were inverted for slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area of about 40 x 40 km located to the north-west of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 N m, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that this earthquake was an intraslab earthquake. The earthquake was associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a pair of uplift and subsidence lobes is required to explain the observed tsunami waveforms. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained from the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of aftershocks determined using local OBS data (Obana et al., 2012).
NASA Astrophysics Data System (ADS)
Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati
2018-02-01
The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), the seismic moment (M_0 = 1.12×10^17 N m, M_w 5.3), the focal mechanism (φ = 280°, δ = 14°, λ = 84°), the source radius (a = 1.3 km), and the static stress drop (Δσ_s ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500f^0.9, κ = 0.04 s, and f_max = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ M_w ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
NASA Astrophysics Data System (ADS)
Oda, Takuma; Nakamura, Mamoru
2017-09-01
We estimated the location and magnitude of earthquakes constituting the 1858 earthquake swarm in the central Ryukyu Islands using the felt earthquakes recorded by Father Louis Furet who lived in Naha, Okinawa Island, in the middle of the nineteenth century. First, we estimated the JMA seismic intensity of the earthquakes by interpreting the words used to describe the shaking. Next, using the seismic intensity and shaking duration of the felt earthquakes, we estimated the epicentral distance and magnitude range of three earthquakes in the swarm. The results showed that the epicentral distances of the earthquakes were 20-250 km and that magnitudes ranged between 4.5 and 6.5, with a strong correlation between epicentral distance and magnitude. Since the rumblings accompanying some earthquakes in the swarm were heard from a northward direction, the swarm probably occurred to the north of Naha. The most likely source area for the 1858 swarm is the central Okinawa Trough, where a similar swarm event occurred in 1980. If the 1858 swarm occurred in the central Okinawa Trough, the estimated maximum magnitude would have reached 6-7. In contrast, if the 1858 swarm occurred in the vicinity of Amami Island, which is the second most likely candidate area, it would have produced a cluster of magnitude 7-8 earthquakes.
NASA Astrophysics Data System (ADS)
Inoue, N.
2017-12-01
The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the earthquake process, ground motions and so on. Toda (2013) pointed out differences in the conditional probability between strike-slip and reverse faults by considering the fault dip and the width of the seismogenic layer. This study evaluated the conditional probability of surface rupture by the following procedure. The fault geometry was determined from a randomly generated magnitude based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. A logistic analysis was performed on two data sets: the surface displacement calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from the surface displacement indicated a higher probability for reverse faults than for strike-slip faults, and this result coincides with previous similar studies (i.e. Kagawa et al., 2004; Kataoka and Kusakabe, 2005). On the contrary, the probability estimated from the depth of the source fault indicated a higher probability for thrust faults than for strike-slip and reverse faults, and this trend is similar to the conditional probabilities obtained from PFDHA (Youngs et al., 2003; Moss and Ross, 2011). The combined simulated results for thrust and reverse faults also show a low probability. The worldwide compiled reverse-fault data include low-dip-angle earthquakes. On the other hand, in the case of Japanese reverse faults, there is a possibility that the conditional probability of reverse faults, with fewer low-dip-angle earthquakes, is low and similar to that of strike-slip faults (i.e. Takao et al., 2013). In the future, numerical simulations considering the failure condition of the surface caused by the source fault should be performed in order to examine the amount of displacement and the conditional probability quantitatively.
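A minimal sketch of the logistic step described above: fitting the probability of surface rupture as a function of one predictor (here the depth of the top of the source fault), assuming scikit-learn and using synthetic inputs purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only: depth of the top of the source fault (km) versus
# whether a surface rupture occurred (1) or not (0).
rng = np.random.default_rng(0)
top_depth = rng.uniform(0.0, 15.0, 500)
ruptured = (rng.random(500) < 1.0 / (1.0 + np.exp(1.2 * (top_depth - 4.0)))).astype(int)

model = LogisticRegression().fit(top_depth.reshape(-1, 1), ruptured)
p_at_2km = model.predict_proba([[2.0]])[0, 1]
print(f"P(surface rupture | top depth = 2 km) ~ {p_at_2km:.2f}")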
Seismotectonic Map of Afghanistan and Adjacent Areas
Wheeler, Russell L.; Rukstales, Kenneth S.
2007-01-01
This map is part of an assessment of Afghanistan's geology, natural resources, and natural hazards. One of these natural hazards is earthquake shaking. One of the tools required to address the shaking hazard is a probabilistic seismic-hazard map, which was made separately; the information on this seismotectonic map was used in the design and computation of that hazard map. A seismotectonic map such as this one shows geological, seismological, and other information that previously had been scattered among many sources. The compilation can reveal spatial relations that might not have been seen by comparing the original sources, and it can suggest hypotheses that might not have occurred to persons who studied those scattered sources. The main map shows faults and earthquakes of Afghanistan. Plate convergence drives the deformation that causes the earthquakes. Accordingly, smaller maps and text explain the modern plate-tectonic setting of Afghanistan and its evolution, and relate both to the patterns of faults and earthquakes.
Keefer, D.K.
2000-01-01
The 1989 Loma Prieta, California earthquake (moment magnitude, M=6.9) generated landslides throughout an area of about 15,000 km2 in central California. Most of these landslides occurred in an area of about 2000 km2 in the mountainous terrain around the epicenter, where they were mapped during field investigations immediately following the earthquake. The distribution of these landslides is investigated statistically, using regression and one-way analysis of variance (ANOVA) techniques to determine how the occurrence of landslides correlates with distance from the earthquake source, slope steepness, and rock type. The landslide concentration (defined as the number of landslide sources per unit area) has a strong inverse correlation with distance from the earthquake source and a strong positive correlation with slope steepness. The landslide concentration also differs substantially among the geologic units in the area. These differences correlate to some degree with differences in lithology and degree of induration, but this correlation is less clear, suggesting a more complex relationship between landslide occurrence and rock properties.
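The statistical treatment described here pairs an ordinary regression of landslide concentration against distance with a one-way ANOVA across geologic units. A minimal sketch with synthetic placeholder data, not the mapped Loma Prieta inventory, is:

```python
# Illustrative sketch of the statistical treatment described above: a
# least-squares fit of (log) landslide concentration against distance from
# the source, plus a one-way ANOVA across geologic units. The numbers are
# synthetic placeholders, not Keefer's (2000) field data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic map cells: distance to the source (km) and landslide
# concentration (sources per km^2), decaying with distance.
distance_km = rng.uniform(1, 50, size=210)
concentration = 10.0 * np.exp(-0.08 * distance_km) * rng.lognormal(0, 0.3, 210)

# Inverse correlation: regress log-concentration on distance.
slope, intercept, r, p, se = stats.linregress(distance_km,
                                              np.log10(concentration))
print(f"log10(concentration) = {intercept:.2f} + {slope:.3f} * distance, r = {r:.2f}")

# One-way ANOVA: does mean concentration differ among (hypothetical) units?
unit_a, unit_b, unit_c = np.split(concentration, 3)
f_stat, p_val = stats.f_oneway(unit_a, unit_b, unit_c)
print(f"ANOVA across units: F = {f_stat:.2f}, p = {p_val:.3f}")
```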
The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry
NASA Astrophysics Data System (ADS)
Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang
2018-05-01
Over the last decade, satellite gravimetry, as a new class of geodetic sensor, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of coseismic deformation is crucial, because imperfect modelling can lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one deficiency: the upward continuation of gravity change should account for the Earth's oblateness, which contemporary studies ignore. For the low-degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment (GRACE) mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake and by about 6 per cent for the 2010 Maule earthquake. For high-degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, causing the seismic moment to be systematically underestimated by 30 per cent.
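The size of this effect can be checked with a simple order-of-magnitude calculation: a degree-n gravity signal attenuates roughly as (r_ref/r)^(n+2) under upward continuation, so synthesizing it at the spherical reference radius instead of the smaller geocentric radius at the epicentral latitude inflates the prediction by approximately (a/r(lat))^(n+2). The sketch below is a consistency check under simplifying assumptions, not the authors' formulation, but it reproduces the quoted order of magnitude.

```python
# Order-of-magnitude check (simplified; not the authors' formulation) of how
# ignoring Earth's oblateness inflates a degree-n gravity prediction: a
# degree-n signal scales roughly as (r_ref / r)**(n + 2) under upward
# continuation, so evaluating at the spherical reference radius a instead of
# the geocentric radius at the epicentral latitude overestimates it.
import math

a = 6378.137e3                 # spherical-harmonic reference radius (m)
f = 1.0 / 298.257              # Earth's flattening
lat = math.radians(38.0)       # approximate latitude of the Tohoku epicentre

r_lat = a * (1.0 - f * math.sin(lat) ** 2)   # geocentric radius at that latitude

for n in (60, 240):
    factor = (a / r_lat) ** (n + 2)
    print(f"degree {n}: overestimation ~ {(factor - 1) * 100:.0f} per cent")
# Gives roughly 8 per cent at degree 60 and ~36 per cent at degree 240,
# the same order as the 9 and 30 per cent values quoted above.
```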
Seismic Source Scaling and Discrimination in Diverse Tectonic Environments
2009-09-30
3349-3352. Imanishi, K., W. L. Ellsworth, and S. G. Prejean (2004). Earthquake source parameters determined by the SAFOD Pilot Hole seismic array ... seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic ... these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity
The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Vassiliou, M. S.
1983-01-01
Energy release in earthquakes is discussed. Dynamic energy from the source time function, a simplified procedure for modeling deep-focus events, static energy estimates, near-source energy studies, and the relation between energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.
NASA Astrophysics Data System (ADS)
Liu, Chengli; Zheng, Yong; Wang, Rongjiang; Xiong, Xiong
2015-08-01
On 2014 April 1, a magnitude Mw 8.1 interplate thrust earthquake ruptured a densely instrumented region of the Iquique seismic gap in northern Chile. The abundant data sets near and around the rupture zone provide a unique opportunity to study the detailed source process of this megathrust earthquake. We retrieved the spatial and temporal distributions of slip during the main shock and one strong aftershock through a joint inversion of teleseismic records, GPS offsets and strong-motion data. The main shock rupture initiated at a focal depth of about 25 km and propagated around the hypocentre. The peak slip amplitude in the model is ~6.5 m, located southeast of the hypocentre. The major slip patch surrounds the hypocentre, spanning ~150 km along dip and ~160 km along strike, and the associated static stress drop is ~3 MPa. Most of the seismic moment was released within 150 s. The total seismic moment of our preferred model is 1.72 × 10²¹ N m, equivalent to Mw 8.1. For the strong aftershock on 2014 April 3, slip mainly occurred in a relatively compact area surrounding the hypocentre, with a peak amplitude of ~2.5 m; a secondary slip patch downdip of the hypocentre has a peak slip of ~2.1 m. The total seismic moment of the aftershock is about 3.9 × 10²⁰ N m, equivalent to Mw 7.7. Between the rupture areas of the main shock and the 2007 November 14 Mw 7.7 Antofagasta, Chile earthquake lies an unruptured zone about 150 km long. If no large earthquake or appreciable aseismic creep has occurred there historically, this zone has great potential to generate earthquakes with magnitudes above Mw 7.0 in the future.
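The moment-to-magnitude conversions quoted above follow the standard relation Mw = (2/3)(log10 M0 - 9.1) with M0 in N m, which can be checked directly:

```python
# Quick check of the moment magnitudes quoted above, using the standard
# relation Mw = (2/3) * (log10(M0) - 9.1) with M0 in N m.
import math

def moment_magnitude(m0_nm):
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

print(f"Main shock : M0 = 1.72e21 N m -> Mw = {moment_magnitude(1.72e21):.1f}")  # ~8.1
print(f"Aftershock : M0 = 3.90e20 N m -> Mw = {moment_magnitude(3.9e20):.1f}")   # ~7.7
```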
Automatic classification of seismic events within a regional seismograph network
NASA Astrophysics Data System (ADS)
Tiira, Timo; Kortström, Jari; Uski, Marja
2015-04-01
A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average amplitude (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training sets include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % had been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload of manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach is easy to adapt to a denser or wider high-frequency network once enough training examples are available to build station-specific data sets.
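The station-level classifier described above maps onto a standard SVM workflow. The sketch below uses an 80-element feature vector (20 band amplitudes for each of the four windows) with synthetic stand-in features and labels, so everything except the overall structure is an assumption rather than the authors' processing chain.

```python
# Minimal sketch of a station-specific event classifier of the kind described
# above: an SVM trained on an 80-element feature vector (short-term-average
# amplitudes in 20 narrow frequency bands for the P, P-coda, S and S-coda
# windows). Features and labels here are synthetic placeholders, not the
# Finnish network data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

n_earthquakes, n_blasts, n_features = 300, 1000, 80  # 4 windows x 20 bands

# Synthetic stand-in: earthquakes and blasts differ in how energy is
# distributed across the frequency bands.
quakes = rng.normal(loc=1.0, scale=0.5, size=(n_earthquakes, n_features))
blasts = rng.normal(loc=0.3, scale=0.5, size=(n_blasts, n_features))
X = np.vstack([quakes, blasts])
y = np.hstack([np.ones(n_earthquakes), np.zeros(n_blasts)])  # 1 = earthquake

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# A new automatic detection is classified from its 80 band-amplitude values;
# station decisions would then be combined by voting across the network.
new_event = rng.normal(loc=0.9, scale=0.5, size=(1, n_features))
print("P(earthquake) =", clf.predict_proba(new_event)[0, 1])
```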