Physics-Based Hazard Assessment for Critical Structures Near Large Earthquake Sources
NASA Astrophysics Data System (ADS)
Hutchings, L.; Mert, A.; Fahjan, Y.; Novikova, T.; Golara, A.; Miah, M.; Fergany, E.; Foxall, W.
2017-09-01
We argue that for critical structures near large earthquake sources: (1) the ergodic assumption, recent history, and simplified descriptions of the hazard are not appropriate to rely on for earthquake ground motion prediction and can lead to mis-estimation of the hazard and risk to structures; (2) a physics-based approach can address these issues; (3) a physics-based source model must be provided to generate realistic phasing effects from finite rupture and to model near-source ground motion correctly; (4) wave propagation and site response should be site-specific; (5) a much wider search of possible sources of ground motion can be achieved computationally with a physics-based approach; (6) unless one utilizes a physics-based approach, the hazard and risk to structures have unknown uncertainties; (7) uncertainties can be reduced with a physics-based approach, but not with an ergodic approach; (8) computational power and computer codes have advanced to the point that risk to structures can be calculated directly from source- and site-specific ground motions. Spanning the variability of potential ground motion in a predictive situation is especially difficult for near-source areas, but that is the distance at which the hazard is greatest. The basis of a physics-based approach is ground-motion synthesis derived from physics and an understanding of the earthquake process. This is an overview paper, and results from previous studies are used to make the case for these conclusions. Our premise is that 50 years of strong-motion records is insufficient to capture all possible ranges of site and propagation-path conditions, rupture processes, and spatial geometric relationships between source and site. Predicting future earthquake scenarios is necessary; models that have little or no physical basis but have been tested and adjusted to fit available observations can only "predict" what happened in the past, which should be considered description rather than prediction. 
We have developed a methodology for synthesizing physics-based broadband ground motion that incorporates the effects of realistic earthquake rupture along specific faults and the actual geology between the source and site.
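The broadband synthesis described above merges a low-frequency deterministic simulation with a high-frequency empirical-Green's-function contribution. A minimal sketch of that merging step, assuming a simple brick-wall crossover in the frequency domain (the signals, sampling, and crossover filter here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def merge_broadband(low_trace, high_trace, dt, f_cross=0.5):
    """Combine two equally sampled traces: keep low_trace below f_cross (Hz)
    and high_trace above it, then return the inverse transform."""
    n = len(low_trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = np.where(freqs < f_cross,
                    np.fft.rfft(low_trace),
                    np.fft.rfft(high_trace))
    return np.fft.irfft(spec, n=n)

# Toy demonstration with stand-in traces:
dt = 0.01
t = np.arange(0, 40, dt)
low = np.sin(2 * np.pi * 0.2 * t)        # stands in for the synthetic (low-frequency) part
high = 0.3 * np.sin(2 * np.pi * 5 * t)   # stands in for the EGF (high-frequency) part
broadband = merge_broadband(low, high, dt)
```

In practice, smooth matched filters are used at the crossover rather than a sharp cut, but the principle of complementary frequency bands is the same.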
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie
2017-04-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake, the 1994 Northridge earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions in distinct tectonic settings or for distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight that modern numerical methods are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
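One of the simplest friction laws used to parameterize fault strength in dynamic rupture codes such as SeisSol is linear slip weakening, in which the friction coefficient drops from a static value to a dynamic value over a critical slip distance. A minimal sketch (parameter values are illustrative only; the scenarios above use richer formulations):

```python
import numpy as np

def slip_weakening_mu(slip, mu_s=0.6, mu_d=0.3, d_c=0.4):
    """Linear slip-weakening friction coefficient as a function of slip (m):
    decays from mu_s to mu_d over the critical slip distance d_c."""
    slip = np.asarray(slip, dtype=float)
    return np.where(slip < d_c,
                    mu_s - (mu_s - mu_d) * slip / d_c,
                    mu_d)

mu = slip_weakening_mu([0.0, 0.2, 0.4, 1.0])
# friction starts at mu_s, decays linearly, and saturates at mu_d
```

The difference between static and dynamic strength, together with d_c, controls the fracture energy and hence how readily a rupture propagates.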
NASA Astrophysics Data System (ADS)
Gabriel, A. A.; Madden, E. H.; Ulrich, T.; Wollherr, S.
2016-12-01
Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman Earthquake and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions in distinct tectonic settings or for distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight that modern numerical methods are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.
NASA Astrophysics Data System (ADS)
Mert, A.
2016-12-01
The main motivation of this study is the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in İstanbul. This study provides the results of a physically based Probabilistic Seismic Hazard Analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, due to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We include the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real small-magnitude earthquakes recorded by a local seismic array are used as Empirical Green's Functions (EGF). For frequencies below 0.5 Hz, the simulations use Synthetic Green's Functions (SGF), synthetic seismograms calculated by an explicit 2D/3D elastic finite-difference wave propagation routine. Using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we provide a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here follows the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, whereas this approach utilizes the full rupture of earthquakes along faults. Further, conventional PSHA predicts ground-motion parameters using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground-motion parameters. PSHA results are produced for 2%, 10% and 50% hazards for all studied sites in the Marmara region.
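The 2%, 10% and 50% hazard levels quoted in PSHA studies are conventionally probabilities of exceedance over a 50-year exposure time under a Poisson occurrence model, P = 1 - exp(-λT). Inverting this gives the annual exceedance rate and mean return period for each hazard level (a standard relationship, not specific to this study):

```python
import math

def return_period(p_exceed, t_years=50.0):
    """Mean return period (years) for an exceedance probability p_exceed
    over t_years, assuming Poisson occurrence: P = 1 - exp(-rate * T)."""
    rate = -math.log(1.0 - p_exceed) / t_years   # annual exceedance rate
    return 1.0 / rate

periods = {p: return_period(p) for p in (0.02, 0.10, 0.50)}
# 2% in 50 yr -> ~2475 yr, 10% -> ~475 yr, 50% -> ~72 yr
```

These return periods (2475, 475 and 72 years) are the levels commonly used in building codes.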
NASA Astrophysics Data System (ADS)
Bahng, B.; Whitmore, P.; Macpherson, K. A.; Knight, W. R.
2016-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes or other mechanisms in the Pacific Ocean, Atlantic Ocean, or Gulf of Mexico. At the U.S. National Tsunami Warning Center (NTWC), the model has been used mainly for tsunami pre-computation due to earthquakes: results for hundreds of hypothetical events are computed in advance, then accessed and calibrated with observations during tsunamis to immediately produce forecasts. The model has also been used for tsunami hindcasting due to submarine landslides and atmospheric pressure jumps, but in a case-specific and somewhat limited manner. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communication between the domains of each parent-child pair as waves approach coastal waters. The shallow-water wave physics is readily applicable to all of the above tsunamis as well as to tides. Recently, the model has been expanded to include multiple forcing mechanisms in a systematic fashion and to enhance the model physics for non-earthquake events. ATFM is now able to handle multiple source mechanisms, either individually or jointly, including earthquake, submarine landslide, meteo-tsunami, and tidal forcing. For earthquakes, the source can be a single unit source or multiple, interacting source blocks, and a horizontal slip contribution can be added to the sea-floor displacement. The model now includes submarine landslide physics, modeling the source either as a rigid slump or as a viscous fluid, with additional shallow-water physics implemented for the viscous case; with rigid slumping, any trajectory can be followed. For meteo-tsunamis, the forcing mechanism is likewise capable of following any trajectory shape, and wind stress physics has been implemented where required. 
As an example of multiple sources, a near-field model is provided of the tsunami produced by the combination of earthquake and submarine landslide forcing that occurred in Papua New Guinea on July 17, 1998.
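The shallow-water approximation underlying ATFM implies a long-wave phase speed c = sqrt(g·h), so deep-ocean tsunami travel times can be estimated from bathymetry alone. A back-of-envelope sketch (depths and distances below are illustrative, not from the model):

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Long-wave (shallow-water) phase speed c = sqrt(g * h), in m/s."""
    return math.sqrt(g * depth_m)

def travel_time_hours(distance_km, depth_m):
    """Crossing time for a basin of uniform depth, in hours."""
    return distance_km * 1000.0 / tsunami_speed(depth_m) / 3600.0

c_deep = tsunami_speed(4000.0)         # ~198 m/s (~713 km/h) in 4 km of water
t = travel_time_hours(3000.0, 4000.0)  # ~4.2 h to cross 3000 km of deep ocean
```

This speed-depth dependence is also why nested grids are needed near the coast: as depth shrinks, waves slow and shorten, demanding finer resolution.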
Local tsunamis and earthquake source parameters
Geist, Eric L.; Dmowska, Renata; Saltzman, Barry
1999-01-01
This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of the tsunami. Because the physics that describes tsunamis from generation through run-up is complex, a parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis. Analysis of the source parameters of various tsunamigenic earthquakes has indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of the computed run-up on shore depends directly on the source parameters of the earthquake, which provide the initial conditions for the hydrodynamic models.
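One classical consequence of long-wave theory relevant to run-up is Green's law: as a tsunami shoals from depth h1 to depth h2, its amplitude grows roughly as (h1/h2)^(1/4), valid for gentle slopes before breaking. A minimal sketch with illustrative depths and offshore amplitude:

```python
def greens_law_amplitude(a1, h1, h2):
    """Green's law shoaling: amplitude scales as (h1 / h2) ** 0.25
    between depths h1 and h2 (gentle slopes, linear long waves)."""
    return a1 * (h1 / h2) ** 0.25

a_near_shore = greens_law_amplitude(0.5, 4000.0, 10.0)
# a 0.5 m deep-ocean wave grows to ~2.2 m in 10 m of water
```

Actual run-up deviates from this scaling precisely because of the nonlinear effects and source details the chapter examines.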
NASA Astrophysics Data System (ADS)
Mert, Aydin; Fahjan, Yasin M.; Hutchings, Lawrence J.; Pınar, Ali
2016-08-01
The main motivation for this study was the impending occurrence of a catastrophic earthquake along the Prince Island Fault (PIF) in the Marmara Sea and the disaster risk around the Marmara region, especially in Istanbul. This study provides the results of a physically based probabilistic seismic hazard analysis (PSHA) methodology, using broadband strong ground motion simulations, for sites within the Marmara region, Turkey, that may be vulnerable to possible large earthquakes throughout the PIF segments in the Marmara Sea. The methodology is called physically based because it depends on the physical processes of earthquake rupture and wave propagation to simulate earthquake ground motion time histories. We included the effects of all considerable-magnitude earthquakes. To generate the high-frequency (0.5-20 Hz) part of the broadband earthquake simulation, real, small-magnitude earthquakes recorded by a local seismic array were used as empirical Green's functions. For the frequencies below 0.5 Hz, the simulations were obtained by using synthetic Green's functions, which are synthetic seismograms calculated by an explicit 2D/3D elastic finite difference wave propagation routine. By using a range of rupture scenarios for all considerable-magnitude earthquakes throughout the PIF segments, we produced a hazard calculation for frequencies of 0.1-20 Hz. The physically based PSHA used here followed the same procedure as conventional PSHA, except that conventional PSHA utilizes point sources or a series of point sources to represent earthquakes, and this approach utilizes the full rupture of earthquakes along faults. Furthermore, conventional PSHA predicts ground motion parameters by using empirical attenuation relationships, whereas this approach calculates synthetic seismograms for all magnitudes of earthquakes to obtain ground motion parameters. PSHA results were produced for 2, 10, and 50% hazards for all sites studied in the Marmara region.
Source characterization and dynamic fault modeling of induced seismicity
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Young, R. P.
2017-12-01
In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. The key to effectively mitigating the damaging effects of induced seismicity is a better understanding of the source physics of induced earthquakes, which remains elusive. Furthermore, an improved understanding of induced earthquake physics is pivotal for assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region; outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the next step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes from numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
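In the rate-and-state framework referenced above, the steady-state friction coefficient is μ_ss = μ₀ + (a − b)·ln(V/V₀); a patch with a − b < 0 is velocity-weakening (can nucleate earthquakes), while a − b > 0 is velocity-strengthening (tends to creep stably). A minimal sketch with illustrative parameter values, not those of the study:

```python
import math

def mu_steady_state(v, mu_0=0.6, a=0.010, b=0.015, v_0=1e-6):
    """Steady-state rate-and-state friction coefficient at slip rate v (m/s):
    mu_ss = mu_0 + (a - b) * ln(v / v_0). Here a - b < 0 (velocity-weakening)."""
    return mu_0 + (a - b) * math.log(v / v_0)

# Speeding the fault up by a factor of e lowers steady-state friction on this
# velocity-weakening patch by (b - a) = 0.005.
drop = mu_steady_state(1e-6) - mu_steady_state(math.e * 1e-6)
```

The sign and magnitude of (a − b), together with the state-evolution distance, control whether perturbations such as fluid injection produce seismic or aseismic slip in such models.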
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.
2017-06-01
Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total losses and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches that use additional data sources or combine sources of both types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data are acquired from the U.S. Geological Survey (USGS) seismic network in California, and the social sensor data are based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real time to produce a single intensity map. The implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data; to handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from the different data sources over 10-min time intervals immediately following the earthquake. 
Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher resolution over a larger area, than either data source alone.
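One simple way to fuse an instrumental intensity estimate with a social-sensor estimate at a grid cell is inverse-variance weighting, which favors the less uncertain source. This is only a sketch of the fusion idea under that assumption; the MMI values and uncertainties below are hypothetical, and the paper's actual merging procedure is not reproduced here:

```python
import numpy as np

def combine_mmi(mmi_values, sigmas):
    """Inverse-variance weighted mean of independent intensity estimates."""
    w = 1.0 / np.square(sigmas)
    return float(np.sum(w * mmi_values) / np.sum(w))

# Hypothetical cell: physical sensors say MMI 6.0 +/- 0.3,
# the tweet-rate regression says MMI 5.0 +/- 0.6.
combined = combine_mmi(np.array([6.0, 5.0]), np.array([0.3, 0.6]))
# weighted toward the better-constrained instrumental value
```

In a streaming setting, such a combination would be recomputed each time a new observation updates either estimate's mean or uncertainty.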
New ideas about the physics of earthquakes
NASA Astrophysics Data System (ADS)
Rundle, John B.; Klein, William
1995-07-01
It may be no exaggeration to claim that this most recent quadrennium has seen more controversy, and thus more progress, in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994) and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging, focused on the development and analysis of large-scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis and seismic zonation.
Rapid earthquake hazard and loss assessment for Euro-Mediterranean region
NASA Astrophysics Data System (ADS)
Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru
2010-10-01
The near-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 project "Network of Research Infrastructures for European Seismology" (NERIES). The methodology consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. Using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and the sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships.
Radiation efficiency of earthquake sources at different hierarchical levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharyan, G. G., E-mail: gevorgkidg@mail.ru; Moscow Institute of Physics and Technology
Factors such as earthquake size and source mechanism define common trends in the variation of radiation efficiency. The macroscopic parameter that controls the efficiency of a seismic source is the stiffness of the fault or fracture. The way this parameter varies with scale defines several hierarchical levels, within which earthquake characteristics obey different laws. Small variations in the physical and mechanical properties of the fault's principal slip zone can lead to dramatic differences both in the amplitude of released stress and in the amount of radiated energy.
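Radiation efficiency is commonly quantified (in the Kanamori-style definition) as η_R = 2μE_R / (Δσ·M₀), comparing the radiated energy E_R with the energy that a complete stress drop Δσ would make available. A minimal sketch; the numbers below are illustrative of a moderate crustal earthquake and are not taken from the paper:

```python
def radiation_efficiency(e_r, m0, stress_drop, mu=3.0e10):
    """Radiation efficiency eta_R = 2 * mu * E_R / (delta_sigma * M0),
    with shear modulus mu in Pa, E_R in J, M0 in N*m, stress drop in Pa."""
    return 2.0 * mu * e_r / (stress_drop * m0)

eta = radiation_efficiency(e_r=1.0e13,         # radiated energy, J
                           m0=1.0e18,          # seismic moment, N*m (Mw ~ 5.9)
                           stress_drop=3.0e6)  # static stress drop, Pa
# eta ~ 0.2: only a fraction of the available energy is radiated seismically
```

Low η_R indicates that most of the available energy is dissipated in fracture and friction rather than radiated, which is the quantity the hierarchical-level analysis above tracks across scales.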
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all residual effects of the source (e.g. stress drops) that are not accounted for by magnitude, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and their associated variabilities. We will first show the correlation between classical stress drops and between-event residuals, and explain why between-event residuals may be a more robust way (compared with classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The resulting between-event variabilities will be used to evaluate the variability of earthquake stress drops, as well as the variability of source properties that cannot be explained by classical Brune stress-drop variations. Finally, we will use between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
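The between-event/within-event split described above can be illustrated on synthetic data: for residuals r[e, s] (event e recorded at station s), the between-event term is essentially the event mean and the within-event residual is what remains. Real GMPE work uses mixed-effects regression; this plain averaging is a simplified sketch on synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three events with distinct source effects (e.g. stress-drop differences),
# each recorded at 200 stations with station-to-station scatter.
true_event_terms = np.array([0.4, -0.2, 0.1])
resid = true_event_terms[:, None] + 0.05 * rng.standard_normal((3, 200))

between_event = resid.mean(axis=1)            # one term per event
within_event = resid - between_event[:, None]  # path/site scatter per record
```

With many stations per event, the between-event terms converge on the true source effects, which is why they are a robust proxy for source-property variability.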
NASA Astrophysics Data System (ADS)
Fan, Wenyuan; McGuire, Jeffrey J.
2018-05-01
An earthquake rupture process can be kinematically described by rupture velocity, duration, and spatial extent. These key kinematic source parameters provide important constraints on earthquake physics and rupture dynamics. In particular, core questions in earthquake science can be addressed once these properties of small earthquakes are well resolved. However, these parameters of small earthquakes are poorly understood, often limited by available datasets and methodologies. The IRIS Community Wavefield Experiment in Oklahoma deployed ~350 three-component nodal stations within 40 km2 for a month, offering an unprecedented opportunity to test new methodologies for resolving small-earthquake finite source properties at high resolution. In this study, we demonstrate the power of the nodal dataset to resolve the variations in the seismic wavefield over the focal sphere due to the finite source attributes of a M2 earthquake within the array. The dense coverage allows us to tightly constrain rupture area using the second-moment method even for such a small earthquake. The M2 earthquake was a strike-slip event that propagated unilaterally towards the surface at 90 per cent of the local S-wave speed (2.93 km s-1). The earthquake lasted ~0.019 s and ruptured Lc ~70 m by Wc ~45 m. With the resolved rupture area, the stress drop of the earthquake is estimated as 7.3 MPa for Mw 2.3. We demonstrate that the maximum and minimum bounds on rupture area are within a factor of two, much lower than typical stress-drop uncertainty, despite a suboptimal station distribution. The rupture properties suggest that there is little difference between the M2 Oklahoma earthquake and typical large earthquakes. The new three-component nodal systems have great potential for improving the resolution of studies of earthquake source properties.
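The quoted quantities can be related through the standard moment-magnitude relation M₀ = 10^(1.5·Mw + 9.1) (N·m) and Eshelby's circular-crack stress drop Δσ = (7/16)·M₀/a³. Note that the paper's 7.3 MPa comes from the second-moment rupture area; a simple circular-crack estimate from the same dimensions is a different approximation and need not agree with it:

```python
import math

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def circular_stress_drop(m0, radius_m):
    """Eshelby circular-crack static stress drop, Pa."""
    return (7.0 / 16.0) * m0 / radius_m ** 3

m0 = moment_from_mw(2.3)              # ~3.5e12 N*m
a = math.sqrt(70.0 * 45.0 / math.pi)  # equal-area circle for a 70 m x 45 m rupture, ~32 m
ds = circular_stress_drop(m0, a)      # tens of MPa with this crude approximation
```

The sensitivity of Δσ to the cube of the rupture radius is exactly why tight bounds on rupture area (here, within a factor of two) matter so much for stress-drop estimates.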
Magnitude, moment, and measurement: The seismic mechanism controversy and its resolution.
Miyake, Teru
This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Trugman, Daniel T.; Shearer, Peter M.
2017-04-01
Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
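A toy version of spectral decomposition at a single frequency bin: observed log-amplitudes obs[i, j] = src[i] + rec[j] (+ noise), solved by alternating averages with the receiver terms constrained to zero mean (the gauge choice is arbitrary, as in the real problem). This deliberately omits the path term and the authors' uncertainty machinery; it is a simplified illustration of the iterative partitioning idea:

```python
import numpy as np

rng = np.random.default_rng(1)
src_true = rng.normal(0.0, 1.0, 30)      # 30 event (source) terms
rec_true = rng.normal(0.0, 0.5, 20)      # 20 receiver terms
rec_true -= rec_true.mean()              # fix the gauge in the truth
obs = src_true[:, None] + rec_true[None, :] + 0.01 * rng.standard_normal((30, 20))

src = np.zeros(30)
rec = np.zeros(20)
for _ in range(20):                      # alternate until converged
    src = (obs - rec[None, :]).mean(axis=1)
    rec = (obs - src[:, None]).mean(axis=0)
    rec -= rec.mean()                    # keep receiver terms zero-mean
```

Repeating this per frequency yields source spectra whose shape (after the gauge is fixed by a reference model) carries the stress-drop information discussed above.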
Connecting slow earthquakes to huge earthquakes.
Obara, Kazushige; Kato, Aitaro
2016-07-15
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
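The point spread function idea can be shown in a few lines of linear algebra: with a forward operator G (source → recorded data), back-projection-style imaging applies Gᵀ, so a point (delta) source is recovered as a column of PSF = GᵀG rather than as a delta, whereas an inversion effectively applies (GᵀG)⁻¹ and removes the blur. The random operator below is an illustrative stand-in for a seismic Green's function matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((50, 20))   # 50 data samples, 20 source cells
psf = G.T @ G                        # point spread function of the imaging system

delta = np.zeros(20)
delta[7] = 1.0                       # point source in cell 7
image = psf @ delta                  # imaged (blurred) source = column 7 of the PSF

inverted = np.linalg.solve(psf, image)  # waveform-inversion analog: blur removed
```

This mirrors the paper's conclusion: imaging methods other than full inversion deliver the source convolved with the PSF, so the receiver geometry (which shapes G) controls the blurring.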
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
NASA Astrophysics Data System (ADS)
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because those algorithms rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing, and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem permits improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and applications of the MHS method to real earthquakes show that it can capture the major features of large-earthquake rupture processes and provide information for more detailed rupture history analysis.
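A Haskell sub-event's far-field source time function is the convolution of two boxcars, one for the rise time and one for the rupture-propagation duration, giving the classic trapezoid. A minimal sketch (the durations and moment below are illustrative; the MHS inversion estimates such parameters per sub-event):

```python
import numpy as np

def haskell_stf(rise_time, rupture_time, moment, dt=0.01):
    """Trapezoidal Haskell source time function: the convolution of a
    rise-time boxcar and a rupture-duration boxcar, scaled so that the
    time integral equals the sub-event's seismic moment."""
    box1 = np.ones(max(1, int(round(rise_time / dt))))
    box2 = np.ones(max(1, int(round(rupture_time / dt))))
    stf = np.convolve(box1, box2) * dt           # trapezoid shape
    return stf * moment / (np.sum(stf) * dt)     # normalize area to the moment

stf = haskell_stf(rise_time=1.0, rupture_time=4.0, moment=1.0e19)
# the trapezoid integrates to the sub-event moment
```

Summing several such sub-events with different onset times and directivity-dependent apparent durations gives the kind of composite rupture history the MHS inversion recovers.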
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.
2015-12-01
Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA earthquake of August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computing of combined intensity levels and production of combined intensity maps in near real time. The results compare three types of intensity maps created from physical, social and combined data sources.
Here we correlate the count and density of tweets with intensity level and show the importance of processing combined data sources in the earliest stages after an earthquake occurs. This method can supplement existing approaches to intensity level detection, especially in regions with a high number of Twitter users and a low density of seismic networks.
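The "physical sensor" branch described above, an instrumentally derived intensity regressed from recorded peak acceleration, can be sketched as follows. The coefficients are those of the Wald et al. (1999) California regression and should be taken as illustrative; operational systems use regionally calibrated relations.

```python
import math

# Hedged sketch of instrumental intensity from peak ground acceleration,
# using the Wald et al. (1999) form MMI = 3.66*log10(PGA) - 1.66,
# with PGA in cm/s^2. Coefficients are illustrative, not from this paper.

def instrumental_mmi(pga_cm_s2):
    mmi = 3.66 * math.log10(pga_cm_s2) - 1.66
    return max(1.0, min(10.0, mmi))      # clamp to the usable MMI range

print(instrumental_mmi(100.0))           # prints 5.66: moderate shaking
```

A combined map would then merge such station-based estimates with intensity levels inferred from tweet content and density.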
Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering
Geist, Eric L.
2012-01-01
Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
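One step described above, estimating the power-law exponent of a tsunami size distribution at a recording station, can be sketched with the maximum-likelihood (Hill) estimator for a Pareto tail. The threshold and exponent values are illustrative, not taken from the chapter.

```python
import numpy as np

# Illustrative sketch (not the chapter's code): for Pareto-distributed
# sizes x >= xmin, the maximum-likelihood (Hill) estimate of the
# power-law exponent is alpha = n / sum(log(x_i / xmin)).

rng = np.random.default_rng(1)
xmin = 0.05                      # metres; an illustrative threshold
alpha_true = 1.2

# inverse-CDF sampling: x = xmin * u**(-1/alpha) with u uniform on (0, 1]
u = 1.0 - rng.random(5000)
x = xmin * u ** (-1.0 / alpha_true)

alpha_hat = x.size / np.sum(np.log(x / xmin))
print(alpha_hat)                 # close to alpha_true = 1.2
```

Applying this estimator station by station is how the marked variation in best-fit exponents across the nine Pacific tide-gauge stations would show up.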
NASA Astrophysics Data System (ADS)
Ohnaka, M.
2004-12-01
For the past four decades, great progress has been made in understanding earthquake source processes. In particular, recent progress in the field of the physics of earthquakes has contributed substantially to unraveling the earthquake generation process in quantitative terms. Yet, a fundamental problem remains unresolved in this field. The constitutive law that governs the behavior of earthquake ruptures is the basis of earthquake physics, and the governing law plays a fundamental role in accounting for the entire process of an earthquake rupture, from its nucleation to the dynamic propagation to its arrest, quantitatively in a unified and consistent manner. Therefore, without establishing the rational constitutive law, the physics of earthquakes cannot be a quantitative science in a true sense, and hence it is urgent to establish the rational constitutive law. However, it has been controversial over the past two decades, and it is still controversial, what the constitutive law for earthquake ruptures ought to be, and how it should be formulated. To resolve the controversy is a necessary step towards a more complete, unified theory of earthquake physics, and now the time is ripe to do so. Because of its fundamental importance, we have to discuss thoroughly and rigorously what the constitutive law ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid evidence. There are prerequisites for the constitutive formulation. The brittle, seismogenic layer and individual faults therein are characterized by inhomogeneity, and fault inhomogeneity has profound implications for earthquake ruptures. In addition, rupture phenomena including earthquakes are inherently scale dependent; indeed, some of the physical quantities inherent in rupture exhibit scale dependence. 
To treat scale-dependent physical quantities inherent in the rupture over a broad scale range quantitatively in a unified and consistent manner, it is critical to formulate the governing law properly so as to incorporate the scaling property. Thus, the properties of fault inhomogeneity and physical scaling are indispensable prerequisites to be incorporated into the constitutive formulation. Thorough discussion in this context necessarily leads to the consistent conclusion that the constitutive law must be formulated in such a manner that the shear traction is a primary function of the slip displacement, with the secondary effect of slip rate or stationary contact time. This constitutive formulation makes it possible to account for the entire process of an earthquake rupture over a broad scale range quantitatively in a unified and consistent manner.
Infrasound Signal Characteristics from Small Earthquakes
2011-09-01
Stephen J. Arrowsmith, J. Mark Hale, Relu Burlacu, Kristine L. Pankow, Brian W. Stump ... Physical insight into source properties that contribute to the generation of infrasound signals is critical to understanding the ... m, with one element being co-located with a seismic station. One of the goals of this project is the recording of infrasound from earthquakes of
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI) conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's functions approaches) to the engineering application of simulated ground motion for the analysis of seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice to assess seismic hazard in terms of nuclear safety in low seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models to evaluate and understand the physical causes of observed and empirical data, as well as to predict ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
Surface Rupture Effects on Earthquake Moment-Area Scaling Relations
NASA Astrophysics Data System (ADS)
Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro
2017-09-01
Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
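The small-earthquake (self-similar) regime mentioned above can be made concrete with a standard worked example: for a circular crack with constant stress drop, M0 = (16/7) Δσ a^3 with rupture area A = π a^2, so M0 scales as A^(3/2). The stress drop and area values below are illustrative, not from the paper.

```python
import math

# Worked example of constant-stress-drop (self-similar) moment-area
# scaling for a circular crack: M0 = (16/7) * dsigma * a^3, A = pi * a^2.

def moment_circular_crack(area_m2, stress_drop_pa=3e6):
    radius = math.sqrt(area_m2 / math.pi)
    return (16.0 / 7.0) * stress_drop_pa * radius ** 3

A = 100e6                               # 100 km^2, in m^2 (illustrative)
M0 = moment_circular_crack(A)
Mw = (math.log10(M0) - 9.1) / 1.5       # Hanks & Kanamori moment magnitude
print(f"M0 = {M0:.2e} N·m, Mw = {Mw:.2f}")

# quadrupling A multiplies M0 by exactly 8, i.e. the A^(3/2) scaling
print(moment_circular_crack(4 * A) / M0)
```

The transition regime studied in the paper is precisely a departure from this A^(3/2) behavior toward A^2 before the W-model regime takes over.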
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
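The peak-ground-displacement (PGD) scaling step used above for rapid point-source magnitudes can be sketched as follows. The coefficients are of the form reported by Crowell et al. (2013), log10(PGD) = A + B·Mw + C·Mw·log10(R) with PGD in cm and hypocentral distance R in km, but should be treated as illustrative rather than the values G-FAST actually uses.

```python
import math

# Hedged sketch of PGD magnitude scaling; coefficients are illustrative.
A, B, C = -5.013, 1.219, -0.178

def predicted_pgd_cm(mw, r_km):
    """Predicted peak ground displacement (cm) at hypocentral distance r_km."""
    return 10.0 ** (A + B * mw + C * mw * math.log10(r_km))

def mw_from_pgd(pgd_cm, r_km):
    """Invert the relation for a single-station magnitude estimate."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

pgd = predicted_pgd_cm(7.8, 100.0)       # forward: Mw 7.8 at 100 km
print(pgd)                               # tens of cm of displacement
print(mw_from_pgd(pgd, 100.0))           # round trip recovers 7.8
```

In practice the magnitude would be a weighted combination of many such single-station estimates, updated as displacements grow.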
NASA Astrophysics Data System (ADS)
Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.
2017-12-01
Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high-frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE are different from those of regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency. The former characterizes the earthquake's size, while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M0 ≈ 10^13 N·m (Mw ≈ 2.6) and fc ≈ 2 Hz. The moment-corner frequency distribution, compared to values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ∝ M0^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ∝ M0^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ∝ M0^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities.
LFE thus do not exhibit the self-similarity characteristic of regular earthquakes, strongly suggesting that the physical mechanisms at their origin are different. Moreover, the agreement with the size-duration scaling for both tectonic and volcanic LFE might indicate a similarity in their source behavior.
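The contrast between the two size-duration scalings discussed above can be shown numerically. Both curves below are anchored, as an assumption of this sketch and not a claim of the paper, at the characteristic LFE values M0 = 10^13 N·m and fc = 2 Hz.

```python
# Illustrative comparison: regular earthquakes follow fc ~ M0^(-1/3),
# while LFEs approximately follow fc ~ M0^(-1/10). Anchor values are
# an assumption of this sketch.

M0_ref, fc_ref = 1e13, 2.0           # N·m, Hz

def fc_regular(m0):
    return fc_ref * (m0 / M0_ref) ** (-1.0 / 3.0)

def fc_lfe(m0):
    return fc_ref * (m0 / M0_ref) ** (-1.0 / 10.0)

for m0 in (1e11, 1e13, 1e15):
    print(f"M0={m0:.0e}  fc_regular={fc_regular(m0):.2f} Hz"
          f"  fc_LFE={fc_lfe(m0):.2f} Hz")
```

Over four decades of moment, the LFE corner frequency barely moves while the self-similar prediction changes by more than an order of magnitude, which is the "nearly constant source size" behavior the abstract describes.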
Near-field observations of microearthquake source physics using dense array
NASA Astrophysics Data System (ADS)
Chen, X.; Nakata, N.; Abercrombie, R. E.
2017-12-01
The recorded waveform includes contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. This problem is especially significant for small earthquakes, whose corner frequencies lie within ranges similar to those of near-site attenuation effects. Fortunately, this problem can be remedied by dense near-field recordings at high frequency and large databases with a wide magnitude range. The 2016 IRIS wavefield experiment provides high-quality recordings of earthquake sequences in north-central Oklahoma with about 400 sensors in a 15 km area. Preliminary processing of the IRIS wavefield array yielded about 20,000 microearthquakes ranging from M-1 to M2, while only 2 earthquakes are listed in the catalog during the same time period. A preliminary examination of the catalog reveals three similar-magnitude earthquakes (M ≈ 2) that occurred at similar locations within 9 seconds of each other. Utilizing this catalog, we will combine individual empirical Green's function (EGF) analysis and stacking over multiple EGFs to examine whether there are any systematic variations of source time functions and spectral ratios across the array, which will provide constraints on rupture complexity, directivity and earthquake interactions. For example, this would help us to understand whether these three earthquakes ruptured overlapping fault patches in a cascading failure, or resulted from repeated rupture of the same slip patch due to external stress loading. Deciphering such interactions at smaller scales with near-field observations is important for a controlled earthquake experiment.
NASA Astrophysics Data System (ADS)
The past 2 decades have seen substantial progress in our understanding of the nature of the earthquake faulting process, but increasingly, the subject has become an interdisciplinary one. Thus, although the observation of radiated seismic waves remains the primary tool for studying earthquakes (and has been increasingly focused on extracting the physical processes occurring in the “source”), geological studies have also begun to play a more important role in understanding the faulting process. Additionally, defining the physical underpinning for these phenomena has come to be an important subject in experimental and theoretical rock mechanics. In recognition of this, a Maurice Ewing Symposium was held at Arden House, Harriman, N.Y. (the former home of the great American statesman Averell Harriman), May 20-23, 1985. The purpose of the meeting was to bring together the international community of experimentalists, theoreticians, and observationalists who are engaged in the study of various aspects of earthquake source mechanics. The conference was attended by more than 60 scientists from nine countries (France, Italy, Japan, Poland, China, the United Kingdom, United States, Soviet Union, and the Federal Republic of Germany).
Rupture, waves and earthquakes.
Uenishi, Koji
2017-01-01
Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not been fully understood yet. Instead, much former investigation in seismology evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics.
The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise, to rigorously assess the performance and reliability of current inversion methods, and to discuss future developments.
NASA Technical Reports Server (NTRS)
Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo
2011-01-01
The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites orbiting some hundreds of kilometers above the ground, such as GRACE, due to the coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements the analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7-16 degrees plus or minus 2 degrees) and, simultaneously, decreasing moment magnitude (9.17-9.02 plus or minus 0.04) with increasing source depth (15-24 kilometers). The GRACE solution includes the cumulative moment released over a month and provides a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms, improving our understanding of the physics of megathrusts.
Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan
NASA Astrophysics Data System (ADS)
Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.
2017-12-01
An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached the peak amplitude (1.5-2.0 cm) of leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. For modeling its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are solved with the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we find that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal such as was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms.
The results yield two implications for the deformation process that help us understand the physical mechanism of the 2015 Torishima earthquake. First, the estimated large uplift within Smith Caldera implies that the earthquake may be related to some volcanic activity of the caldera. Second, the modeled ring of subsidence surrounding the caldera suggests that the process may have included notable subsidence, at least on the northeastern side outside the caldera.
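The initial sea-surface displacement model described above can be sketched numerically: a roughly 1 m Gaussian uplift of about 4 km radius surrounded by a shallow ring of subsidence. Grid spacing, Gaussian width and ring parameters below are illustrative choices, not the paper's fitted values.

```python
import numpy as np

# Sketch of an initial tsunami source: Gaussian uplift plus a subsidence
# ring. All dimensions are illustrative assumptions.

x = np.linspace(-15e3, 15e3, 301)            # metres, 100 m spacing
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)                           # radial distance from centre

# main uplift: 1.0 m peak, sigma 2 km (so an effective radius of ~4 km)
uplift = 1.0 * np.exp(-0.5 * (R / 2e3) ** 2)
# shallow ring of subsidence centred at 7 km radius
ring = -0.05 * np.exp(-0.5 * ((R - 7e3) / 1.5e3) ** 2)

eta0 = uplift + ring                         # initial sea-surface height (m)
print(eta0.max())                            # ~1.0 m peak uplift at centre
print(eta0.min())                            # small negative trough on ring
```

Feeding such a field as the initial condition of a Boussinesq solver is what produces the small leading depression followed by the main upward pulse seen at the OBP gauges.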
Real-Time Joint Streaming Data Processing from Social and Physical Sensors
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Qin, J.; Tiampo, K. F.; Bauer, M.
2014-12-01
Technological breakthroughs in computing over the last few decades make it possible to achieve emergency management objectives that focus on saving human lives and decreasing economic effects. In particular, the integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better real-time seismic hazard analysis through distributed computing networks. The main goal of this work is to utilize innovative computational algorithms for better real-time seismic risk analysis by integrating different data sources and processing tools into streaming and cloud computing applications. The Geological Survey of Canada operates the Canadian National Seismograph Network (CNSN) with over 100 high-gain instruments and 60 low-gain or strong motion seismographs. The processing of the continuous data streams from each station of the CNSN provides the opportunity to detect possible earthquakes in near real-time. The information from physical sources is combined to calculate a location and magnitude for an earthquake. The automatically calculated results are not always sufficiently precise and prompt, which can significantly delay the response to a felt or damaging earthquake. Social sensors, here represented by Twitter users, can provide information earlier to the general public and more rapidly to the emergency planning and disaster relief agencies. We introduce joint streaming data processing from social and physical sensors in real time, based on the idea that social media observations serve as proxies for physical sensors. By using the streams of data in the form of Twitter messages, each of which has an associated time and location, we can extract information related to a target event and perform enhanced analysis by combining it with physical sensor data.
The results of this work suggest that the use of social media data, in conjunction with innovative computing algorithms and physical sensor data, can provide a new paradigm for real-time earthquake detection in order to facilitate rapid and inexpensive natural risk reduction.
NASA Astrophysics Data System (ADS)
Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.
2017-12-01
Kinematic source models are widely used for the simulation of earthquakes because of their simplicity and ease of application. On the other hand, dynamic source models are more complex but important tools that can help us understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which is extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation, which suggests that the maximum magnitude of a potential future earthquake in this area is probably about magnitude 8.7. We develop dynamic rupture models for the hazard estimation of the potential megathrust event based on the kinematic rupture scenarios inverted from the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are also considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested using the trial-and-error method, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate the ground shaking based on a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie results from the simulations to evaluate the influence of the different source models across the island. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physics-based information on megathrust earthquake scenarios for the emergency response agency to take appropriate action before the really big one happens.
Anomalies of rupture velocity in deep earthquakes
NASA Astrophysics Data System (ADS)
Suzuki, M.; Yagi, Y.
2010-12-01
Explaining deep seismicity is a long-standing challenge in earth science. Below 300 km, the occurrence rate of earthquakes with depth remains low until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual events vary widely [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated from seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than for shallow earthquakes. This uncertainty in seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km.
This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
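The core of beamforming back projection is to shift each station's record by the travel time predicted for a candidate source point and stack; the stack peaks where the move-out aligns the arrivals. The following one-dimensional toy (homogeneous velocity, Gaussian arrivals, invented station geometry) is only a sketch of that stacking logic, not the teleseismic implementation used in the study:

```python
import numpy as np

# Hypothetical 1-D geometry: stations on a line, homogeneous P velocity.
v = 6.0                                          # km/s, assumed velocity
stations = np.array([0.0, 40.0, 80.0, 120.0])    # station positions, km
true_src = 55.0                                  # source position to recover
dt = 0.01                                        # s, sample interval
t = np.arange(0.0, 60.0, dt)

def pulse(t0, t):
    """Narrow Gaussian arrival centred at time t0."""
    return np.exp(-((t - t0) / 0.2) ** 2)

# Synthetic records: each station sees the pulse delayed by travel time.
records = [pulse(abs(true_src - x) / v, t) for x in stations]

# Back projection: for each candidate source position, read each record
# at its predicted arrival time and stack the amplitudes.
grid = np.arange(0.0, 120.0, 1.0)
stack = np.zeros_like(grid)
for i, s in enumerate(grid):
    for x, rec in zip(stations, records):
        shift = int(round(abs(s - x) / v / dt))
        stack[i] += rec[shift]

best = grid[np.argmax(stack)]   # peaks at the true source position
```

With four stations the stack reaches nearly 4.0 only at the true position, since any mis-located candidate de-aligns the predicted arrivals.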
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of the scale invariance of stress drop with source dimension and the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
NASA Astrophysics Data System (ADS)
Gu, N.; Zhang, H.
2017-12-01
Seismic imaging of fault zones generally involves seismic velocity tomography using first-arrival times or full waveforms from earthquakes occurring around the fault zones. In most cases, however, seismic velocity tomography gives only a smooth image of fault zone structure. To obtain high-resolution structure of the fault zones, seismic migration using active-source data is needed, but active seismic surveys are generally too expensive to conduct, even in 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross-correlating and stacking the seismic records of physical sources on physical receivers. In this study, we use the waveforms recorded at surface seismic stations for each earthquake to construct a zero-offset seismic record at each earthquake location, as if a virtual receiver were placed there. We have applied this method to image the fault zone structure around the 2013 Mw6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. We first select aftershocks along several vertical cross sections approximately normal to the fault strike. We then create several zero-offset seismic reflection sections by seismic interferometry using waveforms from aftershocks around each section. Finally, we migrate these zero-offset sections to image the structures around the fault zones. In the migration images we can clearly identify strong reflectors, which correspond to the major reverse fault on which the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
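The cross-correlate-and-stack idea at the heart of seismic interferometry can be shown with a minimal synthetic example. The 1-D geometry, velocity, and source distribution below are invented for illustration (this is not the Lushan aftershock configuration): stacking the correlations of two receivers' records over many sources retrieves the inter-receiver travel time.

```python
import numpy as np

# Toy interferometry: sources on a line illuminate two receivers A and B
# through a homogeneous medium; all parameters are illustrative.
rng = np.random.default_rng(0)
v, dt = 3.0, 0.01                    # km/s, s
t = np.arange(0.0, 50.0, dt)
xa, xb = 100.0, 130.0                # receiver positions, km

def arrival(t0):
    return np.exp(-((t - t0) / 0.3) ** 2)

# Sources behind receiver A (stationary-phase region for the A->B path).
ccf_stack = np.zeros(2 * len(t) - 1)
for xs in rng.uniform(40.0, 90.0, 30):
    ua = arrival((xa - xs) / v)      # record at A
    ub = arrival((xb - xs) / v)      # record at B
    ccf_stack += np.correlate(ub, ua, mode="full")

# The lag of the stacked correlation peak ~ direct travel time A->B.
lags = (np.arange(2 * len(t) - 1) - (len(t) - 1)) * dt
t_ab = lags[np.argmax(ccf_stack)]    # ~ (130 - 100) / 3 = 10 s
```

The peak lag approximates the Green's function travel time between the receivers, which is the principle the authors exploit to build virtual zero-offset records at earthquake locations.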
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and on the efficiency with which energy propagates to the near-source surface. These later arrivals are commonly considered a nuisance, as they can interfere with the primary arrivals. In this study, however, we take advantage of the wavefield's high sensitivity to small changes in seafloor topography and the present-day availability of worldwide multi-beam bathymetry to relocate submarine earthquakes by modeling these water-column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body-wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
NASA Astrophysics Data System (ADS)
Wen, Strong; Chang, Yi-Zen; Yeh, Yu-Lien; Wen, Yi-Ying
2017-04-01
Because of its complicated geomorphology and geological conditions, southwest (SW) Taiwan suffers various natural disasters, such as landslides, mudflows, and especially the threat of strong earthquakes resulting from convergence between the Eurasian and Philippine Sea plates. Several disastrous earthquakes have occurred in this area and often caused serious damage. It is therefore fundamentally important to understand the correlation between seismic activity and seismogenic structures in SW Taiwan. Previous studies have indicated that, before rock strength fails, the behavior of micro-earthquakes can provide essential clues for investigating the process of rock deformation. Thus, monitoring micro-earthquake activity plays an important role in studying fault rupture and crustal deformation before the occurrence of a large earthquake. Because micro-earthquake activity can last for years, it can be used to track changes in the physical properties of the crust, such as crustal stress changes or fluid migration. The main purpose of this research is to perform a nonlinear waveform inversion for the source parameters of micro-earthquakes, including the non-double-couple components that arise when shear rupture is associated with complex morphology and tectonic fault systems. We applied a nonlinear waveform procedure to investigate the local stress state and source parameters of micro-earthquakes that occurred in SW Taiwan. Previous studies have shown that microseismic fracture behavior is influenced by non-double-couple components, which can accompany crack generation and fluid migration, changing the rock volume and producing partial compensation. Our results not only give a better understanding of the seismogenic structures of SW Taiwan but also allow us to detect variations in physical parameters caused by cracks propagating in the strata.
The derived source parameters can thus provide a detailed picture of the physical state (such as fluid migration, fault geometry, and the pressure at the leading edge of the rupture) with which to investigate the characteristics of seismogenic structures more precisely. In addition, the regional stress field obtained in this study is used to test the tectonic models previously proposed for SW Taiwan, which will help in properly assessing seismic hazard for major engineering construction projects in urban areas.
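As background to the non-double-couple analysis, a moment tensor is conventionally split into isotropic, double-couple (DC), and CLVD parts via the eigenvalues of its deviatoric component. A minimal sketch, using an arbitrary example tensor (not an inverted mechanism from this study) and one common convention for the CLVD parameter:

```python
import numpy as np

# Arbitrary symmetric moment tensor, purely for illustration.
M = np.array([[1.0,  0.5, 0.1],
              [0.5, -0.8, 0.2],
              [0.1,  0.2, -0.1]])

iso = np.trace(M) / 3.0
dev = M - iso * np.eye(3)            # deviatoric (trace-free) part

lam = np.linalg.eigvalsh(dev)
lam = lam[np.argsort(np.abs(lam))]   # order by |lambda|
# One common convention: epsilon = -lambda_min_abs / |lambda_max_abs|;
# epsilon = 0 for a pure DC, |epsilon| = 0.5 for a pure CLVD.
eps = -lam[0] / abs(lam[2])
pct_clvd = 200.0 * abs(eps)          # percent of the deviatoric moment
pct_dc = 100.0 - pct_clvd
```

A significant CLVD (or isotropic) fraction in well-resolved micro-earthquake solutions is the kind of signature the abstract associates with crack opening and fluid involvement.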
Navigating Earthquake Physics with High-Resolution Array Back-Projection
NASA Astrophysics Data System (ADS)
Meng, Lingsen
Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source-imaging techniques based on conventional low-frequency finite-fault inversions. Seismic array processing is an alternative source-imaging technique that employs the higher-frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back projection has provided key observations of previous large earthquakes, standard beamforming back projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC achieves high resolution by taking advantage of higher-order signal statistics. The method had not been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow imaging of recent large earthquakes in finer detail, giving rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance.
The high-frequency subevents are located at the edges of the geodetic slip regions and correlate with the stopping phases associated with rupture-speed reduction as the earthquake arrests.
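The eigenstructure idea behind MUSIC can be demonstrated with a narrowband, uniform-linear-array toy: the steering vector of a true arrival is orthogonal to the noise subspace of the data covariance, so the pseudospectrum peaks sharply at the source direction. The thesis's multitaper adaptation for transient seismic signals is more involved; every parameter below is invented for illustration.

```python
import numpy as np

# Narrowband MUSIC on a uniform linear array; one plane-wave source.
rng = np.random.default_rng(1)
n_sens, n_snap = 8, 200
d = 0.5                         # sensor spacing in wavelengths
true_theta = 30.0               # degrees, direction to recover

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_sens))

# Synthetic snapshots: source signal plus complex noise.
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
X = np.outer(steering(true_theta), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

R = X @ X.conj().T / n_snap     # sample covariance matrix
w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
En = V[:, :-1]                  # noise subspace (one source assumed)

# MUSIC pseudospectrum: large where steering vector _|_ noise subspace.
thetas = np.arange(-90.0, 90.0, 0.5)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(th)) ** 2 for th in thetas]
est = thetas[np.argmax(p)]
```

The sharpness of the peak, compared with the broad beam of plain beamforming, is what "high resolution" means in this context.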
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehler, M.; Bame, D.
1985-03-01
A study of the spectral properties of the waveforms recorded during hydraulic fracturing earthquakes has been carried out to obtain information about the physical dimensions of the earthquakes. We find two types of events. The first type has waveforms with clear P and S arrivals and spectra that are very similar to earthquakes occurring in tectonic regions. These events are interpreted as being due to shear slip along fault planes. The second type of event has waveforms that are similar in many ways to long-period earthquakes observed at volcanoes and is called long period. Many waveforms of these events are identical, which implies that these events represent repeated activation of a given source. We propose that the source of these long-period events is the sudden opening of a channel that connects two cracks filled with fluid at different pressures. The sizes of the two cracks differ, which causes two or more peaks to appear in the spectra, each peak being associated with one physical dimension of the crack. From the frequencies at which spectral peaks occur, we estimate crack dimensions of between 3 and 22 m. 13 refs., 8 figs.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.
2015-07-01
This paper describes the current understanding of the interaction between geospheres through a complex set of physical and chemical processes under the influence of ionization. The sources of ionization include the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapons testing and accidents at nuclear power plants and radioactive waste storage sites, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where the inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.
Modeling fast and slow earthquakes at various scales
IDE, Satoshi
2014-01-01
Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138
Vallée, Martin
2013-01-01
The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes.
Analysis of Earthquake Source Spectra in Salton Trough
NASA Astrophysics Data System (ADS)
Chen, X.; Shearer, P. M.
2009-12-01
Previous studies of the source spectra of small earthquakes in southern California show that average Brune-type stress drops vary among different regions, with particularly low stress drops observed in the Salton Trough (Shearer et al., 2006). The Salton Trough marks the southern end of the San Andreas Fault and is prone to earthquake swarms, some of which are driven by aseismic creep events (Lohman and McGuire, 2007). In order to learn the stress state and understand the physical mechanisms of swarms and slow slip events, we analyze the source spectra of earthquakes in this region. We obtain Southern California Seismic Network (SCSN) waveforms for earthquakes from 1977 to 2009 archived at the Southern California Earthquake Center (SCEC) data center, which includes over 17,000 events. After resampling the data to a uniform 100 Hz sample rate, we compute spectra for both signal and noise windows for each seismogram, and select traces with a P-wave signal-to-noise ratio greater than 5 between 5 Hz and 15 Hz. Using selected displacement spectra, we isolate the source spectra from station terms and path effects using an empirical Green’s function approach. From the corrected source spectra, we compute corner frequencies and estimate moments and stress drops. Finally we analyze spatial and temporal variations in stress drop in the Salton Trough and compare them with studies of swarms and creep events to assess the evolution of faulting and stress in the region. References: Lohman, R. B., and J. J. McGuire (2007), Earthquake swarms driven by aseismic creep in the Salton Trough, California, J. Geophys. Res., 112, B04405, doi:10.1029/2006JB004596 Shearer, P. M., G. A. Prieto, and E. Hauksson (2006), Comprehensive analysis of earthquake source spectra in southern California, J. Geophys. Res., 111, B06303, doi:10.1029/2005JB003979.
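The final step from a measured corner frequency to a Brune-type stress drop follows two standard relations: the Brune (1970) source radius and the Eshelby circular-crack stress drop. A minimal sketch with illustrative numbers (not values from the Salton Trough dataset):

```python
import numpy as np

# Illustrative inputs: a small event with a measured corner frequency.
beta = 3500.0        # m/s, shear-wave speed near the source (assumed)
M0 = 1.0e15          # N*m, seismic moment (~Mw 4)
fc = 2.0             # Hz, corner frequency from the corrected spectrum

# Brune (1970) source radius: r = 0.37 * beta / fc
r = 0.37 * beta / fc

# Circular-crack (Eshelby) stress drop: d_sigma = (7/16) * M0 / r^3
d_sigma = 7.0 / 16.0 * M0 / r ** 3       # Pa; here ~1.6 MPa

# Moment magnitude from M0 (Hanks & Kanamori, SI units).
mw = 2.0 / 3.0 * (np.log10(M0) - 9.1)
```

With these numbers the stress drop comes out near 1.6 MPa, at the low end of typical tectonic values, which is the sense in which the Salton Trough events are described as low-stress-drop.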
Earthquake source properties from pseudotachylite
Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan
2016-01-01
The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006] as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate sized earthquakes of the Gole Larghe fault zone in the Italian Alps where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. 
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large. The alternatives are that fracture energy is routinely greater than existing models allow, that pseudotachylite is not representative of the shear strength during the earthquake that generated it, or that the strength excess is larger than we have allowed.
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach that validates ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism, and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the virtual earthquake approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to analyse the physical factors that control earthquakes and ground-motion variability. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimates at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of near-source seismological records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all repeatable source effects (e.g., related to stress-drop or kappa-source variability) that are not accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g., NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. We then compare these upper bounds with lower bounds estimated by analysing seismic sequences that occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variability are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress drop and kappa-source).
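The partition of residuals into between-event and within-event components can be illustrated with synthetic data. Real GMPE work uses mixed-effects regression; the event-mean shortcut below is only a stand-in for that machinery, and all numbers are invented:

```python
import numpy as np

# Synthetic residuals: each event has a repeatable "event term" (std tau)
# plus station-to-station scatter (std phi), in natural-log units.
rng = np.random.default_rng(2)
n_events, n_sta = 20, 15
tau_true, phi_true = 0.3, 0.5

event_terms = rng.normal(0.0, tau_true, n_events)
resid = event_terms[:, None] + rng.normal(0.0, phi_true, (n_events, n_sta))

# Event-wise mean approximates the between-event term; the remainder is
# the within-event component.
between = resid.mean(axis=1)
within = resid - between[:, None]

tau_hat = between.std(ddof=1)   # between-event variability estimate
phi_hat = within.std(ddof=1)    # within-event variability estimate
```

Note that `tau_hat` is biased high by `phi_true**2 / n_sta`, which is one reason mixed-effects estimators are preferred in practice.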
Near real-time aftershock hazard maps for earthquakes
NASA Astrophysics Data System (ADS)
McCloskey, J.; Nalbant, S. S.
2009-04-01
Stress interaction modelling is routinely used to explain the spatial relationships between earthquakes and their aftershocks. On 28 October 2008 a M6.4 earthquake occurred near the Pakistan-Afghanistan border, killing several hundred people and causing widespread devastation. A second M6.4 event occurred 12 hours later, 20 km to the southeast. By making some well-supported assumptions about the source event and the geometry of any likely triggered event, it was possible to map the areas most likely to experience further activity. Using Google Earth, it would further have been possible to identify particular settlements in the source area that were especially at risk and to publish their locations globally within about 3 hours of the first earthquake. Such actions could have significantly focused the initial emergency response management. We argue for routine prospective testing of such forecasts and for dialogue between social and physical scientists and emergency response professionals around the practical application of these techniques.
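Stress-interaction mapping of this kind rests on the Coulomb failure stress change on a receiver fault. A textbook sketch of that quantity (not the actual modelling of the 2008 events; the numbers are illustrative):

```python
# Coulomb failure stress change on a receiver fault.
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """dCFS = d_tau + mu' * d_sigma_n.

    d_shear  : shear stress change resolved in the slip direction (MPa)
    d_normal : normal stress change, unclamping positive (MPa)
    mu_eff   : effective friction coefficient (includes pore pressure)
    Positive dCFS brings the receiver fault closer to failure.
    """
    return d_shear + mu_eff * d_normal

# Example: 0.1 MPa of shear loading plus 0.05 MPa of unclamping.
dcfs = coulomb_stress_change(0.1, 0.05)
```

Values of a few tenths of a bar (~0.01 MPa) are commonly taken as the threshold at which triggering becomes plausible, which is what makes such maps useful for flagging areas of likely further activity.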
What Can Sounds Tell Us About Earthquake Interactions?
NASA Astrophysics Data System (ADS)
Aiken, C.; Peng, Z.
2012-12-01
It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influence earthquakes can have on each other. Recent studies using auditory display [e.g., Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are important for understanding the underlying physics of earthquakes and other seismic phenomena such as tremor, in addition to their source characteristics (e.g., frequency content, amplitudes). Earthquake interactions include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e., aftershocks), or a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's-eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g., piano keys) as well as audifying seismic data using time compression.
Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to the understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these interactions can tell us about the source characteristics of earthquakes and tremors.
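The time-compression audification described above amounts to replaying seismic samples at an audio rate, which shifts sub-audible seismic frequencies into the audible band. A minimal sketch with a synthetic seismogram (the file name, rates, and signal are invented for illustration):

```python
import numpy as np
import wave

# Synthetic "seismogram": 10 minutes of a 1.5 Hz oscillation sampled at
# 100 samples per second (both values illustrative).
sps = 100
seis = np.sin(2 * np.pi * 1.5 * np.arange(0.0, 600.0, 1.0 / sps))

# Audification: write the same samples at an audio rate. Playback is
# sped up by rate_out/sps, moving 1.5 Hz up to ~661 Hz (audible).
rate_out = 44100
audio = np.int16(32767 * seis / np.max(np.abs(seis)))

with wave.open("quake.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)          # 16-bit PCM
    f.setframerate(rate_out)
    f.writeframes(audio.tobytes())

speedup = rate_out / sps       # time-compression factor
```

Ten minutes of seismic data become under 1.4 seconds of audio; choosing the compression factor trades audible pitch against the duration a listener can attend to.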
NASA Astrophysics Data System (ADS)
Gallovič, F.
2017-09-01
Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model, introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay over the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake-physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip-rate functions, not requiring any stochastic Green's functions. The source model has previously been validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here in scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability.
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.
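The fractal subsource idea can be sketched by sampling radii from a truncated power law with N(>R) proportional to R^-2 and superposing crack-like slip contributions on a fault plane. The sampling formula, exponent, and parameters below are illustrative assumptions, not Ruiz et al.'s exact recipe:

```python
import numpy as np

# Toy composite source on a 40 x 20 km fault plane; all values invented.
rng = np.random.default_rng(3)
L, W = 40.0, 20.0
r_min, r_max = 0.5, 8.0
n_sub = 400

# Inverse-transform sampling of a truncated power law N(>R) ~ R^-2.
u = rng.uniform(size=n_sub)
radii = r_min / np.sqrt(1.0 - u * (1.0 - (r_min / r_max) ** 2))

# Random subsource centres on the fault plane.
xs = rng.uniform(0.0, L, n_sub)
ys = rng.uniform(0.0, W, n_sub)

# Superpose crack-like slip contributions on a grid.
x, y = np.meshgrid(np.linspace(0, L, 200), np.linspace(0, W, 100))
slip = np.zeros_like(x)
for xc, yc, r in zip(xs, ys, radii):
    d2 = (x - xc) ** 2 + (y - yc) ** 2
    inside = d2 < r ** 2
    slip[inside] += np.sqrt(r ** 2 - d2[inside])   # peaks at the centre
```

The power-law size distribution of the subsources is what gives composite models of this family their omega-squared spectral behaviour.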
Fractal analysis of the spatial distribution of earthquakes along the Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter
2014-05-01
The Hellenic Subduction Zone (HSZ) is the most seismically active region in Europe. Many destructive earthquakes have taken place along the HSZ in the past. The evolution of such active regions is expressed through seismicity and is characterized by complex phenomenology. Understanding the tectonic evolution process and the physical state of subducting regimes is crucial in earthquake prediction. In recent years, there has been growing interest in an approach to seismicity based on the science of complex systems (Papadakis et al., 2013; Vallianatos et al., 2012). In this study we calculate the fractal dimension of the spatial distribution of earthquakes along the HSZ, and we aim to understand the significance of the obtained values for the tectonic and geodynamic evolution of this area. We use the external seismic sources provided by Papaioannou and Papazachos (2000) to create a dataset for the subduction zone. Following these authors, we define five seismic zones. We then compile an earthquake dataset based on the updated and extended earthquake catalogue for Greece and adjacent areas by Makropoulos et al. (2012), covering the period 1976-2009. The fractal dimension of the spatial distribution of earthquakes is calculated for each seismic zone, and for the HSZ as a unified system, using the box-counting method (Turcotte, 1997; Robertson et al., 1995; Caneva and Smirnov, 2004). Moreover, the variation of the fractal dimension is demonstrated in different time windows. These spatiotemporal variations could be used as an additional index of the physical state of each seismic zone. The use of the fractal dimension as a precursor in earthquake forecasting appears to be a promising direction for future work. Acknowledgements Giorgos Papadakis wishes to acknowledge the Greek State Scholarships Foundation (IKY). References Caneva, A., Smirnov, V., 2004. 
Using the fractal dimension of earthquake distributions and the slope of the recurrence curve to forecast earthquakes in Colombia. Earth Sci. Res. J., 8, 3-9. Makropoulos, K., Kaviris, G., Kouskouna, V., 2012. An updated and extended earthquake catalogue for Greece and adjacent areas since 1900. Nat. Hazards Earth Syst. Sci., 12, 1425-1430. Papadakis, G., Vallianatos, F., Sammonds, P., 2013. Evidence of non extensive statistical physics behavior of the Hellenic Subduction Zone seismicity. Tectonophysics, 608, 1037-1048. Papaioannou, C.A., Papazachos, B.C., 2000. Time-independent and time-dependent seismic hazard in Greece based on seismogenic sources. Bull. Seismol. Soc. Am., 90, 22-33. Robertson, M.C., Sammis, C.G., Sahimi, M., Martin, A.J., 1995. Fractal analysis of three-dimensional spatial distributions of earthquakes with a percolation interpretation. J. Geophys. Res., 100, 609-620. Turcotte, D.L., 1997. Fractals and chaos in geology and geophysics. Second Edition, Cambridge University Press. Vallianatos, F., Michas, G., Papadakis, G., Sammonds, P., 2012. A non-extensive statistical physics view to the spatiotemporal properties of the June 1995, Aigion earthquake (M6.2) aftershock sequence (West Corinth rift, Greece). Acta Geophys., 60, 758-768.
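The box-counting estimate of the fractal dimension can be sketched in a few lines. The 2-D version below is illustrative, not the authors' implementation; the synthetic check uses uniform random points, which should fill the plane and give a dimension near 2:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a 2-D point set.

    Counts occupied boxes N(s) on grids of decreasing box size s and
    fits log N(s) against log(1/s); the slope is the dimension D0."""
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    counts = []
    for s in sizes:
        idx = np.floor((points - mins) / s).astype(int)
        counts.append(len({tuple(i) for i in idx}))  # distinct occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Synthetic check: points filling the unit square should give D ~ 2
rng = np.random.default_rng(0)
d = box_counting_dimension(rng.random((20000, 2)), sizes=[0.2, 0.1, 0.05, 0.025])
```

For real epicenter catalogs the box sizes must stay within the range where the point density supports the count (too-small boxes saturate at one point per box).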
NASA Astrophysics Data System (ADS)
Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.
2016-12-01
The increasing seismic activity in regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn renewed attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of seismic processes in reservoirs and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has had low local seismicity; in recent years, however, the KNSN has recorded more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes (Mw < 5). In 2015, two local earthquakes (an Mw 4.5 on 21 March 2015 and an Mw 4.1 on 18 August 2015) were recorded by both the Incorporated Research Institutions for Seismology (IRIS) and the KNSN, and were widely felt by people in Kuwait. These earthquakes occur repeatedly in the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The earthquakes are generally small (Mw < 5) and shallow, with focal depths of about 2 to 4 km. Such events are very common in oil/gas reservoirs all over the world, including North America, Europe, and the Middle East. We determined the locations and source mechanisms of these local earthquakes, with their uncertainties, using a Bayesian inversion method. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths less than about 4 km. 
As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to damage local structures built without seismic design criteria.
NASA Astrophysics Data System (ADS)
Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni
2015-04-01
If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this makes possible the concept of 'retrospective seismology', whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. With the benefit of hindsight about earthquake locations or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude, from ~40 m to ~2500 km [1]. Here we present results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that the spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to approximately those receivers that provide a stationary-phase contribution to the interferometric integrals. 
We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer non-physical arrivals. Finally, we reconstruct earthquake seismograms at sensors that were previously active but were removed before the earthquakes occurred; thus we create virtual earthquake seismograms at those sensors, truly retrospectively. Such SRI seismograms can be used to create a catalogue of new, virtual earthquake seismograms that are available to complement real earthquake data in future earthquake seismology studies. [1] Entwistle, E., Curtis, A., Galetti, E., Baptie, B., Meles, G., Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales, J. Geophys. Res., in press.
Seismicity and source spectra analysis in Salton Sea Geothermal Field
NASA Astrophysics Data System (ADS)
Cheng, Y.; Chen, X.
2016-12-01
The surge of "man-made" earthquakes in recent years has led to considerable concern about the associated hazards. Improved monitoring of small earthquakes would significantly help in understanding such phenomena and the underlying physical mechanisms. In the Salton Sea Geothermal Field in southern California, open access to a local borehole network provides a unique opportunity to better understand the seismicity characteristics, the related earthquake hazards, and the relationship with the geothermal system, tectonic faulting, and other physical conditions. We obtain high-resolution earthquake locations in the Salton Sea Geothermal Field and analyze the characteristics of spatiotemporally isolated earthquake clusters, magnitude-frequency distributions, and the spatial variation of stress drops. The analysis reveals spatially coherent distributions of different types of clustering, b-values, and stress drops. The mixture-type clusters (short-duration rapid bursts with high aftershock productivity) are predominantly located within the active geothermal field and correlate with high-b-value, low-stress-drop microearthquake clouds, while regular aftershock sequences and swarms are distributed throughout the study area. The differences between earthquakes inside and outside the geothermal operation field suggest a possible way to distinguish seismicity directly induced by energy operations from typical seismic-slip-driven sequences. The spatially coherent b-value distribution enables in-situ estimation of probabilities for M≥3 earthquakes, and shows that the high large-magnitude-event (LME) probability zones with high stress drop are likely associated with tectonic faulting. The high stress drops at shallow (1-3 km) depths indicate the existence of active faults, while the low stress drops near injection wells likely correspond to the seismic response to fluid injection. 
I interpret the spatial variation of seismicity and source characteristics as the result of fluid circulation, the fracture network, and tectonic faulting.
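The in-situ probability of M≥3 events rests on the Gutenberg-Richter law. Below is a minimal sketch of the standard maximum-likelihood b-value (Aki, 1965) and the Gutenberg-Richter extrapolation; it is illustrative, not the study's code, and the synthetic catalog parameters are arbitrary:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value for events above the
    completeness magnitude mc; dm is the magnitude binning width
    (Utsu correction), zero for continuous magnitudes."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def expected_count_above(n_above_mc, b, mc, m_target):
    """Gutenberg-Richter extrapolation: N(>=m) = N(>=mc) * 10**(-b*(m - mc))."""
    return n_above_mc * 10.0 ** (-b * (m_target - mc))

# Synthetic check: magnitudes drawn from a GR distribution with b = 1
rng = np.random.default_rng(1)
mags = 3.0 + rng.exponential(np.log10(np.e), 200000)
b_est = b_value_mle(mags, 3.0)
```

With a locally estimated b-value, the annual M≥3 rate in each cell follows from the observed count above the completeness magnitude via `expected_count_above`.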
NASA Astrophysics Data System (ADS)
Meng, Lingsen; Zhang, Ailin; Yagi, Yuji
2016-01-01
The 2015 Mw 7.8 Nepal-Gorkha earthquake, which caused over 9000 casualties, was the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process was imaged by teleseismic back projections (BP) of seismograms recorded by three large regional networks in Australia, North America, and Europe. The source images from all three arrays reveal a unilateral eastward rupture; however, the propagation directions and speeds differ significantly between the arrays. To understand the spatial uncertainties of the BP analyses, we analyze four moderate-size aftershocks recorded by all three arrays, exactly as was done for the main shock. The apparent source locations inferred from the BPs are systematically biased from the catalog locations, as a result of slowness errors caused by three-dimensional Earth structure. We introduce a physics-based slowness correction that successfully mitigates the source location discrepancies among the arrays. Our calibrated BPs are found to be mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s, localized in a relatively narrow and deep swath along the downdip edge of the locked Himalayan thrust zone. We find that the 2015 Gorkha earthquake was a localized rupture that failed to break the entire Himalayan décollement to the surface, and can be regarded as an intermediate event within the interseismic period of larger Himalayan ruptures that break the whole seismogenic zone width. Thus, our physics-based slowness correction is an important technical improvement to BP, mitigating spatial uncertainties and improving the robustness of single- and multiarray studies.
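The calibration idea can be illustrated with a toy shift-and-stack back projection. Everything below is hypothetical: the geometry, the velocity, and the constant per-station timing bias that stands in for unmodeled 3-D structure; the correction is measured on a "calibration event" (here, the biases themselves) and subtracted, as the aftershock calibration does in the study:

```python
import numpy as np

def back_project(traces, t, station_xy, grid_xy, v, corrections=None):
    """Toy shift-and-stack back projection: for each trial source on the
    grid, delay every trace by its predicted travel time (plus an optional
    per-station calibration correction) and stack; return the peak stacked
    energy per grid node."""
    if corrections is None:
        corrections = np.zeros(len(station_xy))
    power = []
    for g in grid_xy:
        tt = np.linalg.norm(station_xy - g, axis=1) / v + corrections
        stack = sum(np.interp(t + dt, t, tr, left=0.0, right=0.0)
                    for tr, dt in zip(traces, tt))
        power.append(np.max(stack) ** 2)
    return np.array(power)

# Synthetic test: a pulse from a known source, recorded with a fixed
# per-station timing bias mimicking unmodeled 3-D structure.
rng = np.random.default_rng(2)
sta = rng.uniform(-50.0, 50.0, (8, 2))    # station coordinates, km
src = np.array([5.0, -3.0])               # true source, km
v, bias = 3.0, rng.normal(0.0, 0.5, 8)    # km/s, seconds
t = np.linspace(0.0, 60.0, 3001)
tt_true = np.linalg.norm(sta - src, axis=1) / v + bias
traces = [np.exp(-((t - ti) ** 2) / 0.1) for ti in tt_true]

grid = np.array([[x, y] for x in range(-10, 11) for y in range(-10, 11)], float)
p_raw = back_project(traces, t, sta, grid, v)                    # biased image
p_cal = back_project(traces, t, sta, grid, v, corrections=bias)  # calibrated
```

With the calibration applied, the energy peak returns to the true source and the stack becomes coherent, mirroring the multi-array consistency the study reports after its slowness correction.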
Hovsgol earthquake 5 December 2014, Mw = 4.9: seismic and acoustic effects
NASA Astrophysics Data System (ADS)
Dobrynina, Anna A.; Sankov, Vladimir A.; Tcydypova, Larisa R.; German, Victor I.; Chechelnitsky, Vladimir V.; Ulzibat, Munkhuu
2018-03-01
A moderate shallow earthquake (Mw = 4.9) occurred on 5 December 2014 in the north of Lake Hovsgol (northern Mongolia). An infrasonic signal with a duration of 140 s was recorded for this earthquake by the "Tory" infrasound array (Institute of Solar-Terrestrial Physics of the Siberian Branch of the Russian Academy of Sciences, Russia). Source parameters of the earthquake (seismic moment, geometrical size, displacement amplitudes in the focus) were determined using spectral analysis of direct body P and S waves. The spectral analysis of the seismograms and the amplitude variations of the surface waves allow determination of the effect of rupture propagation in the earthquake focus, the azimuth of the rupture propagation direction, and the velocity of displacement in the earthquake focus. The results of modelling the surface displacements caused by the Hovsgol earthquake, and the high effective propagation velocity of the infrasound signal (~625 m/s), indicate that its occurrence is caused not by the downward movement of the Earth's surface in the epicentral region but by the effect of a secondary source. The position of the secondary source of the infrasound signal is located on the northern slopes of the Khamar-Daban ridge, according to the data on the azimuth and arrival time of the acoustic wave at the Tory station. The interaction of surface waves with the regional topography is proposed as the most probable mechanism of formation of the infrasound signal.
Source complexity and the physical mechanism of the 2015 Mw 7.9 Bonin Island earthquake
NASA Astrophysics Data System (ADS)
Chen, Y.; Meng, L.; Wen, L.
2015-12-01
The 30 May 2015 Mw 7.9 Bonin Island earthquake is the largest instrumentally recorded deep-focus earthquake in the Izu-Bonin arc. It occurred approximately 100 km deeper than the previous seismicity, in a region unlikely to be within the core of the subducting Izu-Bonin slab. The earthquake provides an unprecedented opportunity to understand the unexpected occurrence of such isolated deep earthquakes. Multiple source inversion of the P, SH, pP and sSH phases and a novel, fully three-dimensional back projection of the P and pP phases are applied to study the coseismic source process. The subevent locations and short-period energy radiation both show an L-shaped bilateral rupture propagating initially in the SW direction and then in the NW direction, with an average rupture speed of 2.0 km/s. The decrease of focal depth along the NW branch suggests that the rupture is consistent with the single sub-horizontal plane inferred from the GCMT solution. The multiple source inversion further indicates a slight variation of the focal strikes of the subevents with the curvature of the subducting Izu-Bonin slab. The rupture is confined within an area of 20 km x 35 km, rather compact compared with shallow earthquakes of similar magnitude. The earthquake has a high stress drop, on the order of 100 MPa, and a low seismic efficiency of 0.19, indicating large frictional heat dissipation. The only aftershock is 11 km to the east of the mainshock hypocenter and 3 km away from the centroid of the first subevent. Analysis of the regional tomography and nearby seismicity suggests that the earthquake may have occurred at the edge/periphery of the bending slab and is unlikely to have been within the "cold" metastable olivine wedge. Our results suggest that spontaneous nucleation of thermally induced shear instability is a possible mechanism for such isolated deep earthquakes.
Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast
NASA Astrophysics Data System (ADS)
Toda, Shinji; Enescu, Bogdan
2011-03-01
Numerous studies have retrospectively found that seismicity rates jump (drop) in response to coseismic Coulomb stress increases (decreases). The Collaboratory for the Study of Earthquake Predictability (CSEP) now provides an opportunity for prospective testing of the Coulomb hypothesis. Here we adapt our stress transfer model, which incorporates the rate- and state-dependent friction law, to the CSEP Japan seismicity forecast. We demonstrate how to compute the forecast rates of large shocks in 2009 using the large earthquakes of the past 120 years. The time-dependent impact of the coseismic stress perturbations explains qualitatively well the occurrence of the recent moderate-size shocks. This ability is partly similar to that of statistical earthquake clustering models. However, our model differs from them as follows: the off-fault aftershock zones can be simulated using finite fault sources; the regional areal patterns of triggered seismicity are modified by the dominant mechanisms of the potential sources; and the stresses imparted by large earthquakes produce stress shadows that lead to a reduction in the forecast number of earthquakes. Although the model relies on several unknown parameters, it is the first physics-based model submitted to the CSEP Japan test center and has the potential to be tuned for short-term earthquake forecasts.
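The rate- and state-dependent response to a Coulomb stress step has a compact closed form. Below is a minimal sketch of the Dieterich (1994) seismicity-rate equation; the parameter values are illustrative, not those of the CSEP Japan model:

```python
import numpy as np

def dieterich_rate(t, dcff, r0, a_sigma, ta):
    """Seismicity rate R(t) after a Coulomb stress step dCFF at t = 0,
    for a background rate r0, constitutive parameter A*sigma, and
    aftershock decay time ta (Dieterich, 1994). Positive steps give a
    rate jump that decays back to r0; negative steps give a stress
    shadow with a suppressed rate."""
    return r0 / (1.0 + (np.exp(-dcff / a_sigma) - 1.0) * np.exp(-t / ta))

rate_jump = dieterich_rate(0.0, 0.1, 1.0, 0.05, 10.0)   # e**2 x background
shadow = dieterich_rate(0.0, -0.1, 1.0, 0.05, 10.0)     # e**-2 x background
```

The stress-shadow behavior (rates below background after a stress decrease) is exactly the feature the abstract notes reduces the forecast number of earthquakes.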
Relating stick-slip friction experiments to earthquake source parameters
McGarr, Arthur F.
2012-01-01
Analytical results for parameters such as static stress drop in stick-slip friction experiments with arbitrary input parameters can be determined by solving an energy-balance equation. These results can then be related to a given earthquake based on its seismic moment and the maximum slip within its rupture zone, assuming that the rupture process entails the same physics as stick-slip friction. This analysis yields overshoots and ratios of apparent stress to static stress drop of about 0.25. The inferred earthquake source parameters (static stress drop, apparent stress, slip rate, and radiated energy) are robust inasmuch as they are largely independent of the experimental parameters used in their estimation. Instead, these earthquake parameters depend on C, the ratio of maximum slip to the cube root of the seismic moment. C is controlled by the normal stress applied to the rupture plane and the difference between the static and dynamic coefficients of friction. Estimating yield stress and seismic efficiency by the same procedure is only possible when the actual static and dynamic coefficients of friction are known within the earthquake rupture zone.
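Two of the source parameters discussed here have standard closed forms. A sketch using the Eshelby circular-crack stress drop and the apparent-stress definition follows; the numerical values are illustrative, not from the experiments or from McGarr's energy balance:

```python
def static_stress_drop(m0, radius):
    """Static stress drop of a circular rupture (Eshelby):
    delta_sigma = (7/16) * M0 / r**3, in Pa for SI inputs."""
    return 7.0 * m0 / (16.0 * radius ** 3)

def apparent_stress(mu, radiated_energy, m0):
    """Apparent stress sigma_a = mu * E_R / M0."""
    return mu * radiated_energy / m0

# Illustrative Mw ~5 event: M0 = 3.5e16 N m, 1.5 km rupture radius,
# radiated energy 1e-5 of the moment, rigidity 30 GPa.
dsig = static_stress_drop(3.5e16, 1500.0)               # ~4.5 MPa
sig_a = apparent_stress(3.0e10, 1e-5 * 3.5e16, 3.5e16)  # 0.3 MPa
```

The ratio sig_a / dsig from such estimates is the quantity the abstract reports as about 0.25 for the stick-slip analysis.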
NASA Astrophysics Data System (ADS)
Kaneko, Yoshihiro; Wallace, Laura M.; Hamling, Ian J.; Gerstenberger, Matthew C.
2018-05-01
Slow slip events (SSEs) have been documented in subduction zones worldwide, yet their implications for future earthquake occurrence are not well understood. Here we develop a relatively simple, simulation-based method for estimating the probability of megathrust earthquakes following tectonic events that induce transient stress perturbations. We apply this method to the locked Hikurangi megathrust (New Zealand), which was surrounded on all sides by the 2016 Kaikoura earthquake and SSEs. Our models indicate that the probability of an M≥7.8 earthquake in the year after the Kaikoura earthquake increased by a factor of 1.3-18 relative to the pre-Kaikoura probability, with an absolute probability in the range of 0.6-7%. We find that the probabilities of a large earthquake are mainly controlled by the ratio of the total stressing rate induced by all nearby tectonic sources to the mean stress drop of earthquakes. Our method can be applied to evaluate the potential for triggering a megathrust earthquake following SSEs in other subduction zones.
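Probability figures of this kind follow from a nonstationary Poisson calculation. The sketch below uses a hypothetical background rate and gain, round numbers chosen only so the result lands near the reported range; they are not the study's values:

```python
import numpy as np

def prob_at_least_one(annual_rate, t_years):
    """Poisson probability of at least one event within t_years."""
    return 1.0 - np.exp(-annual_rate * t_years)

# Hypothetical background annual M>=7.8 rate of 0.004; an 18-fold
# transient gain gives a one-year probability of about 7%.
p_background = prob_at_least_one(0.004, 1.0)
p_perturbed = prob_at_least_one(0.004 * 18.0, 1.0)
```

For small rate-times, the probability is close to rate x duration, so the probability gain tracks the stressing-rate gain almost linearly.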
Investigation of Pre-Earthquake Ionospheric Disturbances by 3D Tomographic Analysis
NASA Astrophysics Data System (ADS)
Yagmur, M.
2016-12-01
Ionospheric variations before earthquakes are widely discussed phenomena in ionospheric studies. Clarifying the source and mechanism of these phenomena is highly important for earthquake forecasting. To understand well the mechanical and physical processes behind pre-seismic ionospheric anomalies, which might even be related to Lithosphere-Atmosphere-Ionosphere-Magnetosphere coupling, both statistical and 3D modeling analyses are needed. For this purpose, we first investigated the relation between ionospheric TEC anomalies and potential source mechanisms such as space weather activity and lithospheric phenomena like positive surface electric charges. To distinguish their effects on ionospheric TEC, we focused on pre-seismically active days. We then analyzed statistical data for 54 earthquakes with M≥6 between 2000 and 2013, as well as the 2011 Tohoku and 2016 Kumamoto earthquakes in Japan. By comparing TEC anomalies with solar activity through the Dst index, we found 28 events that might be related to earthquake activity. Following the statistical analysis, we also investigated the lithospheric effect on TEC changes on selected days. Among those days, we chose the 2011 Tohoku and 2016 Kumamoto earthquakes as two case studies for 3D reconstructed images, utilizing a 3D tomography technique with neural networks. The results will be presented. Keywords: Earthquake, 3D Ionospheric Tomography, Positive and Negative Anomaly, Geomagnetic Storm, Lithosphere
NASA Astrophysics Data System (ADS)
Gischig, Valentin S.
2015-09-01
Earthquakes caused by fluid injection into deep underground reservoirs constitute an increasingly recognized risk to populations and infrastructure. Quantitative assessment of induced seismic hazard, however, requires estimating the maximum possible magnitude earthquake that may be induced during fluid injection. Here I seek constraints on an upper limit for the largest possible earthquake using source-physics simulations that consider rate-and-state friction and hydromechanical interaction along a straight homogeneous fault. Depending on the orientation of the pressurized fault in the ambient stress field, different rupture behaviors can occur: (1) uncontrolled rupture-front propagation beyond the pressure front or (2) rupture-front propagation arresting at the pressure front. In the first case, fault properties determine the earthquake magnitude, and the upper magnitude limit may be similar to natural earthquakes. In the second case, the maximum magnitude can be controlled by carefully designing and monitoring injection and thus restricting the pressurized fault area.
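In the second regime, where the rupture front arrests at the pressure front, the maximum moment is set by the pressurized area. A sketch combining the circular-crack moment relation with the moment-magnitude definition follows; the stress drop and patch area are illustrative, not values from the simulations:

```python
import numpy as np

def max_magnitude_from_area(area_m2, stress_drop):
    """Upper-bound moment magnitude for a rupture confined to a
    pressurized patch of the given area, using the circular-crack
    relation M0 = (16/7) * stress_drop * r**3 with r = sqrt(A / pi),
    and Mw = (2/3) * (log10(M0) - 9.1)."""
    r = np.sqrt(area_m2 / np.pi)
    m0 = 16.0 / 7.0 * stress_drop * r ** 3
    return 2.0 / 3.0 * (np.log10(m0) - 9.1)

mw = max_magnitude_from_area(1.0e6, 3.0e6)  # 1 km^2 patch, 3 MPa -> Mw ~4
```

Because the moment grows as the 3/2 power of the area, restricting the pressurized area during injection caps the magnitude strongly, which is the design lever the abstract identifies.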
NASA Astrophysics Data System (ADS)
Viens, L.; Miyake, H.; Koketsu, K.
2016-12-01
Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point-source method, which is appropriate for moderate events, to finite-source modeling to simulate the long-period ground motions of large Mw 7-class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed, considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used alongside existing physics-based simulations to improve seismic hazard assessment.
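The finite-source extension amounts to summing slip-weighted, rupture-delayed Green's functions over subfaults. A minimal sketch of that summation follows; the two-subfault discretization, slips, and delays are illustrative:

```python
import numpy as np

def finite_source_sum(gfs, slips, delays, dt):
    """Sum subfault Green's functions, each weighted by its slip and
    shifted by its rupture-time delay (delays in seconds, traces
    sampled at interval dt)."""
    shift = [int(round(d / dt)) for d in delays]
    n = max(len(g) + s for g, s in zip(gfs, shift))
    out = np.zeros(n)
    for gf, w, s in zip(gfs, slips, shift):
        out[s:s + len(gf)] += w * np.asarray(gf, dtype=float)
    return out

# Two subfaults 3 km apart, rupture velocity 2.5 km/s -> 1.2 s delay
u = finite_source_sum([[1.0, 0.5, 0.0], [1.0, 0.5, 0.0]],
                      slips=[1.0, 2.0], delays=[0.0, 1.2], dt=0.4)
```

Varying the assumed rupture velocity only changes the delay vector, which is why scenario suites over several rupture velocities are cheap once the interpolated Green's functions exist.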
NASA Astrophysics Data System (ADS)
Dreger, D. S.; Boyd, O. S.; Taira, T.; Gritto, R.
2017-12-01
Enhanced Geothermal System (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. Spatio-temporal source properties of induced seismicity, including source dimension, rupture area, slip, rupture speed, and slip velocity, are of interest at The Geysers geothermal field, northern California, for mapping the coseismic fracture density of the EGS swarm. In this investigation we extend our previous finite-source analysis of selected M>4 earthquakes to examine source properties of smaller-magnitude seismicity located in the Northwest Geysers Enhanced Geothermal System (EGS) demonstration project. Moment rate time histories of the source are found by empirical Green's function (eGf) deconvolution, using the method of Mori (1993) as implemented by Dreger et al. (2007). The moment rate functions (MRFs) from data recorded by the Lawrence Berkeley National Laboratory (LBNL) short-period geophone network are inverted for finite-source parameters, including the spatial distribution of fault slip, the rupture velocity, and the orientation of the causative fault plane. The results show complexity in the MRFs of the studied earthquakes. Thus far, the estimated rupture areas and the magnitude-area trend of the smaller-magnitude Geysers seismicity are found to agree with the empirical relationships of Wells and Coppersmith (1994) and Leonard (2010), which were developed for much larger M>5.5 earthquakes worldwide, indicating self-similar behavior extending down to M2 earthquakes. We will present finite-source inversion results for the micro-earthquakes, attempting to extend the analysis to sub-Mw 2 events, and demonstrate their magnitude-area scaling. The extension of the scaling laws will then enable mapping of the coseismic fracture density of the EGS swarm in the Northwest Geysers based on catalog moment magnitude estimates.
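A common way to retrieve a relative source time function with an eGf is water-level spectral division. The sketch below shows the generic technique, not the Mori (1993) / Dreger et al. (2007) implementation used in the study:

```python
import numpy as np

def egf_deconvolve(mainshock, egf, water_level=0.01):
    """Water-level spectral division: divide the mainshock spectrum by
    the eGf spectrum, clipping the denominator at a fraction of its
    peak power to stabilize spectral holes."""
    n = len(mainshock)
    m = np.fft.rfft(mainshock, 2 * n)   # zero-pad to avoid wrap-around
    e = np.fft.rfft(egf, 2 * n)
    power = np.abs(e) ** 2
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(m * np.conj(e) / denom, 2 * n)[:n]

# Sanity check: deconvolving by a spike recovers the input waveform.
sig = np.zeros(64)
sig[5:10] = 1.0
spike = np.zeros(64)
spike[0] = 1.0
mrf = egf_deconvolve(sig, spike)
```

The water level trades resolution for stability: a higher value suppresses noise amplification at frequencies where the eGf spectrum is weak, at the cost of smoothing the recovered MRF.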
New Methodologies Applied to Seismic Hazard Assessment in Southern Calabria (Italy)
NASA Astrophysics Data System (ADS)
Console, R.; Chiappini, M.; Speranza, F.; Carluccio, R.; Greco, M.
2016-12-01
Although it is generally recognized that the M7+ 1783 and 1908 Calabria earthquakes were caused by normal faults rupturing the upper crust of the southern Calabria-Peloritani area, no consensus exists on seismogenic source location and orientation. A recent high-resolution, low-altitude aeromagnetic survey of southern Calabria and the Messina straits suggested that the sources of the 1783 and 1908 earthquakes are en echelon faults belonging to the same NW-dipping normal fault system straddling the whole of southern Calabria. The application of a newly developed physics-based earthquake simulator to the active fault system, modeled from the data obtained by the aeromagnetic survey and other recent geological studies, has allowed the production of catalogs spanning 100,000 years and containing more than 25,000 events of magnitude ≥ 4.0. The algorithm on which this simulator is based is constrained by several physical elements: (a) an average slip rate due to tectonic loading for every segment in the investigated fault system, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small-magnitude events. Events nucleated in one segment are allowed to expand into neighboring segments if they are separated by no more than a given maximum distance. The application of our simulation algorithm to the Calabria region reproduces typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range. 
Lastly, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all events in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation. These maps can be compared with the existing hazard maps presently used in the national seismic building regulations.
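The catalog-to-hazard-map step can be sketched compactly. The attenuation law and its coefficients below are a generic illustration, not the relation used in the study, and the two-event catalog is purely hypothetical:

```python
import numpy as np

def pga_attenuation(mag, dist_km, a=-2.5, b=0.5, c=1.3):
    """Generic attenuation law, log10 PGA[g] = a + b*M - c*log10(R + 10).
    Coefficients are illustrative, not those used in the study."""
    return 10.0 ** (a + b * mag - c * np.log10(dist_km + 10.0))

def exceedance_probability(mags, dists_km, pga_threshold, catalog_years, t_years):
    """Poisson exceedance probability at a site over t_years, from the
    annual rate of synthetic-catalog events whose predicted PGA
    reaches the threshold."""
    rate = np.sum(pga_attenuation(np.asarray(mags), np.asarray(dists_km))
                  >= pga_threshold) / catalog_years
    return 1.0 - np.exp(-rate * t_years)

# One M7 event at 10 km and one M5 at 50 km in a 100,000-year catalog:
p50 = exceedance_probability([7.0, 5.0], [10.0, 50.0], 0.1, 1.0e5, 50.0)
```

Repeating the calculation on a grid of sites, each with its own event distances, yields the exceedance-probability maps the abstract describes.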
NASA Astrophysics Data System (ADS)
Wu, B.; Oglesby, D. D.; Ghosh, A.; LI, B.
2017-12-01
Very low frequency earthquakes (VLFEs) and low frequency earthquakes (LFEs) are two main types of seismic signal observed during slow earthquakes. These phenomena differ from standard ("fast") earthquakes in many ways. In contrast to seismic signals generated by standard earthquakes, these two types of signal lack energy at higher frequencies and have very low stress drops, of around 10 kPa. In addition, the moment-duration scaling relationship shown by VLFEs and LFEs is linear (M ∝ T) instead of the M ∝ T^3 of regular earthquakes. However, if investigated separately over a small range of magnitudes and durations, the scaling relationship for each is somewhat closer to M ∝ T^3, not M ∝ T. The physical mechanism of VLFEs and LFEs is still not clear, although some models have explored this issue [e.g., Gomberg, 2016b]. Here we investigate the behavior of dynamic rupture models with a ductile-like viscous frictional property [Ando et al., 2010; Nakata et al., 2011; Ando et al., 2012] on a single patch. In the model's framework, VLFE source patches are characterized by a high viscous damping term η and a larger area (~25 km^2), while sources that approach LFE properties have a low viscous damping term η and a smaller patch area (<0.5 km^2). Using both analytical and numerical analyses, we show how and why this model may help to explain current observations. This model supports the idea that VLFEs and LFEs are distinct events, possibly rupturing distinct patches with their own stress dynamics [Hutchison and Ghosh, 2016]. The model also makes predictions that can be tested in future observational experiments.
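The contrast between M ∝ T and M ∝ T^3 scaling is simply a difference of log-log slopes. A minimal sketch of how such a slope is measured, with idealized synthetic data:

```python
import numpy as np

def moment_duration_slope(moments, durations):
    """Exponent n in M0 ~ T**n, from a log-log least-squares fit."""
    n, _ = np.polyfit(np.log10(durations), np.log10(moments), 1)
    return n

# Idealized check: self-similar (constant stress drop) events follow
# M0 ~ T**3, while a slow-earthquake-like population follows M0 ~ T.
T = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
slope_regular = moment_duration_slope(T ** 3, T)
slope_slow = moment_duration_slope(T, T)
```

Fitting the same statistic over a narrow duration range, as the abstract notes, can pull the estimated exponent back toward 3 even for populations that are globally linear.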
Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms
NASA Astrophysics Data System (ADS)
Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng
2013-04-01
Inverting seismic waveforms for finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also important for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using the accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point-source parameters for the mainshock and aftershocks were first obtained by complete waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using the projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with theoretical patterns, which are functions of focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalogs to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. The preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed in this study can be applied to strong motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
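The azimuthal duration pattern exploited here follows the classic unilateral-directivity relation. A minimal sketch, with illustrative rupture and wave speeds rather than values from the study:

```python
import numpy as np

def apparent_duration(rupture_duration, vr, c, theta):
    """Apparent source time function duration for unilateral rupture:
    T_app = T_r * (1 - (vr/c) * cos(theta)), with theta the angle
    between the rupture direction and the ray leaving the source."""
    return rupture_duration * (1.0 - (vr / c) * np.cos(theta))

# Stations whose rays leave along the rupture direction see short ASTFs;
# stations in the opposite direction see long ones (vr = 2.8 km/s,
# S-wave speed 3.5 km/s, 2 s rupture duration -- all illustrative).
t_forward = apparent_duration(2.0, 2.8, 3.5, 0.0)
t_backward = apparent_duration(2.0, 2.8, 3.5, np.pi)
```

Fitting this pattern over many stations constrains both the rupture direction and which nodal plane actually ruptured, which is how the SSW unilateral rupture was identified.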
Geist, E.; Yoshioka, S.
1996-01-01
The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) the type of faulting characteristic of the Cascadia subduction zone, (2) the amount of slip during rupture, (3) the slip orientation, (4) the duration of rupture, (5) the physical properties of the accretionary wedge, and (6) the influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined using elastic three-dimensional finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations, which include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallowly dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated by anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.
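For intuition on the linear long-wave regime used near the source above, two textbook scalings suffice: the shallow-water phase speed and Green's law for shoaling amplification. A minimal sketch (not from the paper; values are illustrative, and Green's law assumes gentle depth changes with no dissipation):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def long_wave_speed(depth_m):
    """Phase speed of a linear long (shallow-water) wave: sqrt(g*h)."""
    return math.sqrt(G * depth_m)

def greens_law_amplitude(a0, h0, h1):
    """Green's law: amplitude of a shoaling long wave scales as
    (h0/h1)**(1/4) as depth decreases from h0 to h1."""
    return a0 * (h0 / h1) ** 0.25

# A 0.5 m wave generated over 4000 m of water, arriving at 10 m depth:
print(long_wave_speed(4000.0))                   # ~200 m/s in the open ocean
print(greens_law_amplitude(0.5, 4000.0, 10.0))   # grows by (400)**0.25 ~ 4.5x
```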
NASA Astrophysics Data System (ADS)
Yamada, T.; Ide, S.
2007-12-01
Earthquake early warning is an important and challenging problem for reducing seismic damage, especially for mitigating human suffering. One of the most important questions for earthquake early warning systems is how quickly the final size of an earthquake can be estimated after the ground motion is first observed. This is closely tied to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) on the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes, whose source durations are longer than TW, the values of τpmax have an upper limit that depends on TW. On the other hand, the values for smaller earthquakes have a lower limit that is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
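The recursive τp estimate used by UrEDAS-type systems is compact enough to sketch. The following is a hedged illustration of the standard recursion (the smoothing constant and window handling vary between implementations; this is not the mine-data processing chain):

```python
import numpy as np

def tau_p(velocity, dt, alpha=0.999):
    """Recursive predominant-period estimate:
    tau_p[i] = 2*pi*sqrt(X[i]/D[i]), where X is the exponentially
    smoothed squared velocity and D the smoothed squared derivative.
    alpha is the per-sample smoothing constant."""
    v = np.asarray(velocity, dtype=float)
    dv = np.gradient(v, dt)
    x = d = 0.0
    out = np.zeros_like(v)
    for i in range(len(v)):
        x = alpha * x + v[i] ** 2
        d = alpha * d + dv[i] ** 2
        out[i] = 2.0 * np.pi * np.sqrt(x / d) if d > 0 else 0.0
    return out

# For a pure sinusoid of period T, tau_p converges toward T:
dt = 0.01
t = np.arange(0.0, 10.0, dt)
trace = np.sin(2.0 * np.pi * t / 1.0)  # T = 1 s
print(tau_p(trace, dt)[-1])
```

The upper limit discussed in the abstract arises because the recursion can only average over the data inside its effective time window, so source durations longer than TW are clipped.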
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small-magnitude events. Events nucleating on one fault are allowed to expand into neighboring faults, even ones belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model to which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm reproduces typical features of the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes.
Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak acceleration (PGA) over the territory under investigation.
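The pseudo-periodicity statistic quoted above (Cv of inter-event times) is straightforward to compute from a catalog; a minimal sketch with synthetic event times:

```python
import numpy as np

def interevent_cv(event_times):
    """Coefficient of variation of inter-event times.
    Cv << 1 : quasi-periodic recurrence (as reported for the
              simulated M >= 6.0 single-fault sequences),
    Cv ~ 1  : Poissonian,
    Cv >> 1 : clustered."""
    dt = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    return dt.std() / dt.mean()

# Perfectly periodic events give Cv = 0; jittered recurrence gives a
# small Cv, in the spirit of the 0.3-0.6 range found in the catalog.
print(interevent_cv([0.0, 100.0, 200.0, 300.0, 400.0]))
print(interevent_cv([0.0, 80.0, 210.0, 290.0, 405.0]))
```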
Petascale computation of multi-physics seismic simulations
NASA Astrophysics Data System (ADS)
Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.
2017-04-01
Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we show simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity: non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure, and bathymetry affecting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools that can efficiently exploit HPC facilities. Our up-to-multi-PetaFLOP simulations are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized at all software levels, including: assembler-level DG kernels that obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; use of local time stepping; parallel input and output schemes; and direct interfaces to community-standard data formats. Together, these factors minimise the time-to-solution.
The results presented highlight that modern numerical methods and hardware-aware optimization for modern supercomputers are essential to furthering our understanding of earthquake source physics, and complement both physics-based ground motion research and empirical approaches in seismic hazard analysis. We conclude with an outlook on future exascale ADER-DG solvers for seismological applications.
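The benefit of local time stepping mentioned above can be illustrated with an idealized cost model (a sketch, not SeisSol's actual scheduler): with global time stepping every element advances at the global minimum stable step, while with local time stepping each element advances at its own.

```python
def lts_speedup(element_dts):
    """Idealized speedup of local time stepping (LTS) over global
    time stepping (GTS), counting one unit of work per element update.

    GTS cost: every element steps at min(dt) over the whole mesh.
    LTS cost: each element steps at its own stable dt."""
    gts_cost = len(element_dts) / min(element_dts)
    lts_cost = sum(1.0 / dt for dt in element_dts)
    return gts_cost / lts_cost

# 90% of elements allow dt = 1e-3 s; 10% (e.g. tiny fault-zone cells)
# force dt = 1e-4 s. LTS recovers roughly a 5x speedup here:
print(lts_speedup([1e-3] * 90 + [1e-4] * 10))
```

Real speedups depend on clustering, load balance, and synchronization overhead, which this toy model ignores.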
Extension of Gutenberg-Richter distribution to MW -1.3, no lower limit in sight
NASA Astrophysics Data System (ADS)
Boettcher, Margaret S.; McGarr, A.; Johnston, Malcolm
2009-05-01
With twelve years of seismic data from TauTona Gold Mine, South Africa, we show that mining-induced earthquakes follow the Gutenberg-Richter relation with no scale break down to the completeness level of the catalog, at moment magnitude MW -1.3. Events recorded during relatively quiet hours in 2006 indicate that catalog detection limitations, not earthquake source physics, controlled the previously reported minimum magnitude in this mine. Within the Natural Earthquake Laboratory in South African Mines (NELSAM) experiment's dense seismic array, earthquakes that exhibit shear failure at magnitudes as small as MW -3.9 are observed, but we find no evidence that MW -3.9 represents the minimum magnitude. In contrast to previous work, our results imply small nucleation zones and that earthquake processes in the mine can readily be scaled to those in either laboratory experiments or natural faults.
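The slope of the Gutenberg-Richter relation tested above is commonly estimated with Aki's maximum-likelihood formula; a minimal sketch (the completeness magnitude and sample values are hypothetical, and no magnitude-binning correction is applied):

```python
import math

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for magnitudes at or
    above the completeness level m_c (continuous-magnitude form):
    b = log10(e) / (mean(M) - m_c)."""
    m = [mi for mi in mags if mi >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - m_c)

# Negative magnitudes are handled naturally, as needed for a catalog
# complete down to MW -1.3:
mags = [-1.3, -1.1, -0.9, -1.2, -0.6, -1.0]
print(b_value_mle(mags, -1.3))
```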
Hydrothermal response to a volcano-tectonic earthquake swarm, Lassen, California
Ingebritsen, Steven E.; Shelly, David R.; Hsieh, Paul A.; Clor, Laura; Seward, P. H.; Evans, William C.
2015-01-01
The increasing capability of seismic, geodetic, and hydrothermal observation networks allows recognition of volcanic unrest that could previously have gone undetected, creating an imperative to diagnose and interpret unrest episodes. A November 2014 earthquake swarm near Lassen Volcanic National Park, California, which included the largest earthquake in the area in more than 60 years, was accompanied by a rarely observed outburst of hydrothermal fluids. Although the earthquake swarm likely reflects upward migration of endogenous H2O-CO2 fluids in the source region, there is no evidence that such fluids emerged at the surface. Instead, shaking from the modest-sized (moment magnitude 3.85) but proximal earthquake caused near-vent permeability increases that triggered increased outflow of hydrothermal fluids already present and equilibrated in a local hydrothermal aquifer. Long-term, multiparametric monitoring at Lassen and other well-instrumented volcanoes enhances interpretation of unrest and can provide a basis for detailed physical modeling.
NASA Astrophysics Data System (ADS)
Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.
2013-12-01
Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. The limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS, and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite-fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First, we demonstrate the ability of seismogeodetic source models determined from combined land-based GPS and strong-motion seismometers to forecast near-source tsunamis within ~3 minutes of earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow-water (100-1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from two near-source ocean-bottom pressure sensors and six GPS buoys into the earthquake source inversion process. Tsunami Green's functions (tGFs) are generated using the GeoClaw package, a benchmarked finite-volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to 2000+ tsunami runup survey measurements collected after the event. We update the source model and the tsunami forecast and warning at 10 min intervals.
We show that by 20 min after OT the tsunami is well predicted, with a high variance reduction relative to the survey data, and that by ~30 minutes a model that can be considered final is achieved, since little change is observed afterwards. This is an indirect approach to tsunami warning: it relies on automatic determination of the earthquake source prior to tsunami simulation. It is more robust than ad hoc approaches because it relies on computation of a finite-extent centroid moment tensor to objectively determine the style of faulting and the fault-plane geometry on which to launch the heterogeneous static slip inversion. Operator interaction and physical assumptions are minimal. Thus, the approach can provide the initial conditions for tsunami simulation (seafloor motion) irrespective of the type of earthquake source, and relies heavily on oceanic wave gauge measurements for source determination. It reliably distinguishes among strike-slip, normal and thrust faulting events, all of which have been observed recently to occur in subduction zones and pose distinct tsunami hazards.
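Model skill of the kind quoted above is often summarized as variance reduction between observations and predictions; a minimal sketch (not the authors' scoring code):

```python
import numpy as np

def variance_reduction(observed, predicted):
    """Variance reduction (%) between observed data (e.g. runup
    survey measurements) and model predictions; 100% is a perfect
    fit, 0% is no better than predicting zero."""
    obs = np.asarray(observed, dtype=float)
    pre = np.asarray(predicted, dtype=float)
    return 100.0 * (1.0 - np.sum((obs - pre) ** 2) / np.sum(obs ** 2))

# Example: a forecast that tracks the observations closely
print(variance_reduction([1.2, 3.4, 2.1], [1.0, 3.2, 2.3]))
```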
The effect of segmented fault zones on earthquake rupture propagation and termination
NASA Astrophysics Data System (ADS)
Huang, Y.
2017-12-01
A fundamental question in earthquake source physics is what controls the nucleation and termination of an earthquake rupture. Besides stress heterogeneities and variations in frictional properties, damaged fault zones (DFZs) that surround major strike-slip faults can contribute significantly to earthquake rupture propagation. Previous earthquake rupture simulations usually characterize DFZs as several-hundred-meter-wide layers with lower seismic velocities than the host rocks, and find that earthquake ruptures in DFZs can exhibit slip pulses and oscillating rupture speeds that ultimately enhance high-frequency ground motions. However, real DFZs are more complex than these uniform low-velocity structures, and show along-strike variations in damage that may be correlated with historical earthquake ruptures. These segmented structures can either prohibit or assist rupture propagation and significantly affect the final sizes of earthquakes. For example, recent dense-array data recorded at the San Jacinto fault zone suggest the existence of three prominent DFZs across the Anza seismic gap and the southern section of the Clark branch, while no prominent DFZs were identified near the ends of the Anza seismic gap. To better understand earthquake rupture in segmented fault zones, we will present dynamic rupture simulations that calculate the time-varying rupture process physically by considering the interactions between fault stresses, fault frictional properties, and material heterogeneities. We will show that whether an earthquake rupture can break through the intact rock outside the DFZ depends on the nucleation size of the earthquake and the rupture propagation distance within the DFZ. Moreover, the material properties of the DFZ, the stress conditions along the fault, and the frictional properties of the fault also have a critical impact on rupture propagation and termination.
We will also present scenarios of San Jacinto earthquake ruptures and show the parameter space that is favorable for rupture propagation through the Anza seismic gap. Our results suggest that a priori knowledge of properties of segmented fault zones is of great importance for predicting sizes of future large earthquakes on major faults.
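Nucleation size, which the simulations above identify as a control on whether rupture escapes the DFZ, has a classical slip-weakening estimate. A hedged sketch: the Uenishi & Rice-type prefactor below is quoted from memory, the exact expression depends on rupture mode, and all parameter values are illustrative:

```python
def nucleation_halflength(shear_modulus, tau_p, tau_r, d_c):
    """Critical nucleation half-length for a linearly slip-weakening
    fault, h* ~ 1.158 * mu / W, with weakening rate
    W = (tau_p - tau_r) / d_c  (peak minus residual strength over the
    slip-weakening distance). Prefactor and effective modulus differ
    slightly between in-plane and antiplane rupture."""
    W = (tau_p - tau_r) / d_c
    return 1.158 * shear_modulus / W

# mu = 32 GPa, 10 MPa strength drop, d_c = 1 cm -> tens of meters:
print(nucleation_halflength(32e9, 10e6, 0.0, 0.01))
```

Ruptures nucleating over patches smaller than this scale tend to die; compliant DFZ material lowers the effective modulus and hence the required nucleation size.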
McGarr, Arthur F.; Boettcher, M.; Fletcher, Jon Peter B.; Sell, Russell; Johnston, Malcolm J.; Durrheim, R.; Spottiswoode, S.; Milev, A.
2009-01-01
For one week during September 2007, we deployed a temporary network of field recorders and accelerometers at four sites within two deep, seismically active mines. The ground-motion data, recorded at 200 samples/sec, are well suited to determining source and ground-motion parameters for the mining-induced earthquakes within and adjacent to our network. Four earthquakes with magnitudes close to 2 were recorded with high signal/noise at all four sites. Analysis of seismic moments and peak velocities, in conjunction with the results of laboratory stick-slip friction experiments, was used to estimate source processes that are key to understanding source physics and to assessing underground seismic hazard. The maximum displacements on the rupture surfaces can be estimated from the parameter Rv, where v is the peak ground velocity at a given recording site and R is the hypocentral distance. For each earthquake, the maximum slip and seismic moment can be combined with results from laboratory friction experiments to estimate the maximum slip rate within the rupture zone. Analysis of the four M 2 earthquakes recorded during our deployment and one of special interest recorded by the in-mine seismic network in 2004 revealed maximum slips ranging from 4 to 27 mm and maximum slip rates from 1.1 to 6.3 m/sec. Applying the same analyses to an M 2.1 earthquake within a cluster of repeating earthquakes near the San Andreas Fault Observatory at Depth site, California, yielded similar results for maximum slip and slip rate, 14 mm and 4.0 m/sec.
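The basic conversions behind such analyses are simple to reproduce; a minimal sketch (not the authors' code, and the distance-corrected peak-velocity product is shown only as a raw parameter, without the empirical scaling that relates it to slip):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori):
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def distance_corrected_peak_velocity(r_m, v_peak_ms):
    """The product R * v (hypocentral distance times peak ground
    velocity), the distance-corrected ground-motion parameter used
    to bound near-rupture slip."""
    return r_m * v_peak_ms

# An M ~ 2 mining-induced event has a moment near 1.3e12 N*m:
print(moment_magnitude(1.3e12))
print(distance_corrected_peak_velocity(200.0, 0.05))
```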
Earthquake source tensor inversion with the gCAP method and 3D Green's functions
NASA Astrophysics Data System (ADS)
Zheng, J.; Ben-Zion, Y.; Zhu, L.; Ross, Z.
2013-12-01
We develop and apply a method to invert earthquake seismograms for source properties using a general tensor representation and 3D Green's functions. The method employs (i) a general representation of earthquake potency/moment tensors with double-couple (DC), compensated linear vector dipole (CLVD), and isotropic (ISO) components, and (ii) a corresponding generalized CAP (gCAP) scheme in which the continuous wave trains are broken into Pnl and surface waves (Zhu & Ben-Zion, 2013). For comparison, we also use the waveform inversion methods of Zheng & Chen (2012) and Ammon et al. (1998). Sets of 3D Green's functions are calculated on a grid of 1 km³ using the 3D community velocity model CVM-4 (Kohler et al. 2003). A bootstrap technique is adopted to establish the robustness of the inversion results from the gCap method (Ross & Ben-Zion, 2013). Synthetic tests with 1D and 3D waveform calculations show that the source tensor inversion procedure is reasonably reliable and robust. As an initial application, the method is used to investigate the source properties of the March 11, 2013, Mw 4.7 earthquake on the San Jacinto fault using recordings of ~45 stations at frequencies up to ~0.2 Hz. Both the best-fitting and most probable solutions include an ISO component of ~1% and a CLVD component of ~0%. The obtained ISO component, while small, is found to be a non-negligible positive value that can have significant implications for the physics of the failure process. Work on using higher-frequency data for this and other earthquakes is in progress.
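The DC/CLVD/ISO split reported above follows from an eigenvalue decomposition of the moment tensor. A hedged sketch of one common style of decomposition; conventions (and therefore the exact percentages) differ between authors, and the normalization below is a simplified illustrative choice:

```python
import numpy as np

def decompose_moment_tensor(m):
    """Split a 3x3 symmetric moment tensor into ISO, DC and CLVD
    percentages. The ISO part is the trace; the deviatoric remainder
    is split between DC and CLVD via the intermediate deviatoric
    eigenvalue (epsilon in [-0.5, 0.5])."""
    m = np.asarray(m, dtype=float)
    eig = np.linalg.eigvalsh(m)      # eigenvalues, ascending
    iso = eig.sum() / 3.0
    dev = eig - iso                  # deviatoric eigenvalues
    big = np.abs(dev).max()
    if big == 0.0:
        return 100.0, 0.0, 0.0       # purely isotropic source
    eps = dev[1] / big               # CLVD measure
    c_iso = iso / (abs(iso) + big)   # simplified normalization
    c_clvd = 2.0 * eps * (1.0 - abs(c_iso))
    c_dc = 1.0 - abs(c_iso) - abs(c_clvd)
    return 100.0 * c_iso, 100.0 * c_dc, 100.0 * c_clvd

# A pure double couple (eigenvalues -1, 0, 1) decomposes as 100% DC:
print(decompose_moment_tensor(np.diag([-1.0, 0.0, 1.0])))
```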
Jones, Lucile M.; Bernknopf, Richard; Cox, Dale; Goltz, James; Hudnut, Kenneth; Mileti, Dennis; Perry, Suzanne; Ponti, Daniel; Porter, Keith; Reichle, Michael; Seligson, Hope; Shoaf, Kimberley; Treiman, Jerry; Wein, Anne
2008-01-01
This is the initial publication of the results of a cooperative project to examine the implications of a major earthquake in southern California. The study comprised eight counties: Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura. Its results will be used as the basis of an emergency response and preparedness exercise, the Great Southern California ShakeOut, and for this purpose we defined our earthquake as occurring at 10:00 a.m. on November 13, 2008. As members of the southern California community use the ShakeOut Scenario to plan and execute the exercise, we anticipate discussion and feedback. This community input will be used to refine our assessment and will lead to a formal publication in early 2009. Our goal in the ShakeOut Scenario is to identify the physical, social and economic consequences of a major earthquake in southern California and, in so doing, enable the users of our results to identify what they can change now, before the earthquake, to avoid catastrophic impact after the inevitable earthquake occurs. To do so, we had to determine the physical damages (casualties and losses) caused by the earthquake and the impact of those damages on the region's social and economic systems. To do this, we needed to know about the earthquake ground shaking and fault rupture. So we first constructed an earthquake, taking all available earthquake research information, from trenching and exposed evidence of prehistoric earthquakes to analysis of instrumental recordings of large earthquakes and the latest theory in earthquake source physics. We modeled a magnitude (M) 7.8 earthquake on the southern San Andreas Fault, a plausible event on the fault most likely to produce a major earthquake. This information was then fed forward into the rest of the ShakeOut Scenario.
The damage impacts of the scenario earthquake were estimated using both HAZUS-MH and expert opinion through 13 special studies and 6 expert panels, and fall into four categories: building damages, non-structural damages, damage to lifelines and infrastructure, and fire losses. The magnitude 7.8 ShakeOut earthquake is modeled to cause about 1800 deaths and $213 billion of economic losses. These numbers are as low as they are because of aggressive retrofitting programs that have increased the seismic resistance of buildings, highways and lifelines, and economic resiliency. These numbers are as large as they are because much more retrofitting could still be done. The earthquake modeled here may never happen. Big earthquakes on the San Andreas Fault are inevitable, and by geologic standards extremely common, but probably will not be exactly like this one. The next very damaging earthquake could easily be on another fault. However, lessons learned from this particular event apply to many other events and could provide benefits in many possible future events.
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick-slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick-slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high-fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separate this study from traditional acoustic emission analyses and allow these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3-6 μs) rise times and are well modeled by simple double-couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics, such as stress drop (1-10 MPa), appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
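The stress drops quoted above follow from the Eshelby circular-crack relation, which applies unchanged from mm-scale lab events to natural earthquakes; a minimal sketch (the lab-scale values are illustrative, not taken from the paper):

```python
def stress_drop(m0, radius):
    """Eshelby circular-crack static stress drop:
    d_sigma = (7/16) * M0 / r**3, with M0 in N*m and r in m."""
    return (7.0 / 16.0) * m0 / radius ** 3

# A lab event with M0 ~ 1 N*m on a ~4 mm radius patch lands in the
# few-MPa range, consistent with the 1-10 MPa reported:
print(stress_drop(1.0, 4e-3) / 1e6)
```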
In-situ investigation of relations between slow slip events, repeaters and earthquake nucleation
NASA Astrophysics Data System (ADS)
Marty, S. B.; Schubnel, A.; Gardonio, B.; Bhat, H. S.; Fukuyama, E.
2017-12-01
Recent observations have shown that, in subduction zones, imperceptible slip, known as "slow slip events", can trigger powerful earthquakes and may be linked to the onset of swarms of repeaters. With the aim of investigating the relation between repeaters, slow slip events, and earthquake nucleation, we have conducted stick-slip experiments on saw-cut Indian gabbro under upper-crustal stress conditions (up to 180 MPa confining pressure). During the past decades, the reproduction of micro-earthquakes in the laboratory has enabled a better understanding of, and better constraints on, the physical parameters that govern the seismic source. Using a new set of calibrated piezoelectric acoustic emission sensors and high-frequency dynamic strain gauges, we are now able to measure a large number of physical parameters during stick-slip motion, such as the rupture velocity, the slip velocity, the dynamic stress drop, and the absolute magnitudes and sizes of foreshock acoustic emissions. Preliminary observations systematically show quasi-static slip accelerations, the onset of repeaters, and an increase in the acoustic emission rate before failure. In the near future, we will further investigate the links between slow slip events, repeaters, stress build-up, and earthquakes, using our high-frequency acoustic and strain recordings and applying template matching analysis.
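The template-matching analysis mentioned at the end is, at its core, a sliding normalized cross-correlation of a known event waveform against the continuous record; a minimal sketch (not the authors' processing chain):

```python
import numpy as np

def matched_filter(template, trace):
    """Sliding normalized cross-correlation of a short template
    against a longer continuous trace; peaks near 1 flag repeating
    events with near-identical waveforms."""
    t = (template - template.mean()) / (template.std() * len(template))
    out = np.empty(len(trace) - len(template) + 1)
    for i in range(len(out)):
        w = trace[i:i + len(template)]
        s = w.std()
        out[i] = np.sum(t * (w - w.mean())) / s if s > 0 else 0.0
    return out

# Bury a synthetic waveform in a quiet trace and recover its position:
template = np.sin(np.linspace(0.0, 4.0 * np.pi, 50))
trace = np.concatenate([np.zeros(30), template, np.zeros(30)])
cc = matched_filter(template, trace)
print(int(np.argmax(cc)), cc.max())
```

FFT-based implementations are preferred for long records; the loop above is kept for clarity.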
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferdowsi, Behrooz; Griffa, Michele; Guyer, Robert A.; ...
2015-11-19
A fundamental mystery in earthquake physics is “how can an earthquake be triggered by distant seismic sources?” We use discrete element method simulations of a granular layer, during stick slip, that is subject to transient vibrational excitation to gain further insight into the physics of dynamic earthquake triggering. Using a Coulomb friction law for grain interactions, we also observe delayed triggering of slip in the granular gouge. We find that at a critical vibrational amplitude (strain) there is an abrupt transition from negligible time-advanced slip (clock advance) to full clock advance; i.e., transient vibration and triggered slip are simultaneous. Moreover, the critical strain is of order 10^-6, similar to observations in the laboratory and in Earth. The transition is related to frictional weakening of the granular layer due to a dramatic decrease in coordination number and the weakening of the contact force network. Associated with this frictional weakening is a pronounced decrease in the elastic modulus of the layer. The study has important implications for mechanisms of triggered earthquakes and induced seismic events and points out the underlying processes in the response of the fault gouge to dynamic transient stresses.
Pollitz, F.; Bakun, W.H.; Nyst, M.
2004-01-01
Understanding of the behavior of plate boundary zones has progressed to the point where reasonably comprehensive physical models can predict their evolution. The San Andreas fault system in the San Francisco Bay region (SFBR) is dominated by a few major faults whose behavior over about one earthquake cycle is fairly well understood. By combining the past history of large ruptures on SFBR faults with a recently proposed physical model of strain accumulation in the SFBR, we derive the evolution of regional stress from 1838 until the present. This effort depends on (1) an existing compilation of the source properties of historic and contemporary SFBR earthquakes based on documented shaking, geodetic data, and seismic data (Bakun, 1999) and (2) a few key parameters of a simple regional viscoelastic coupling model constrained by recent GPS data (Pollitz and Nyst, 2004). Although uncertainties abound in the location, magnitude, and fault geometries of historic ruptures and the physical model relies on gross simplifications, the resulting stress evolution model is sufficiently detailed to provide a useful window into the past stress history. In the framework of Coulomb failure stress, we find that virtually all M ≥ 5.8 earthquakes prior to 1906 and M ≥ 5.5 earthquakes after 1906 are consistent with stress triggering from previous earthquakes. These events systematically lie in zones of predicted stress concentration elevated 5-10 bars above the regional average. The SFBR is predicted to have emerged from the 1906 "shadow" in about 1980, consistent with the acceleration in regional seismicity at that time. The stress evolution model may be a reliable indicator of the most likely areas to experience M ≥ 5.5 shocks in the future.
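The Coulomb failure stress framework invoked above reduces, for a single receiver fault, to a one-line combination of shear and normal stress changes. The sketch below is a minimal illustration with assumed round values (the 0.4 effective friction and the stress changes are not taken from the study); it only shows how a site ends up in the 5-10 bar stress-concentration band the abstract describes.

```python
def coulomb_failure_stress(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (MPa).

    d_shear  : change in shear stress in the fault's slip direction
               (positive promotes slip)
    d_normal : change in normal stress (positive = unclamping here)
    mu_eff   : effective friction coefficient (absorbs pore pressure)
    """
    return d_shear + mu_eff * d_normal

# A site loaded by 0.7 MPa of shear and unclamped by 0.25 MPa:
dcfs = coulomb_failure_stress(0.7, 0.25)   # MPa
print(dcfs * 10)                           # → 8.0 bars, inside the 5-10 bar band
```

A positive ΔCFS moves a fault toward failure; the 1906 "shadow" in the abstract corresponds to a region of strongly negative ΔCFS that relaxes over decades.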
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchings, L J; Foxall, W; Rambo, J
2005-02-14
Yucca Mountain licensing will require estimation of ground motions from probabilistic seismic hazard analyses (PSHA) with annual probabilities of exceedance on the order of 10^-6 to 10^-7 per year or smaller, which correspond to much longer earthquake return periods than most previous PSHA studies. These long return periods for the Yucca Mountain PSHA result in estimates of ground motion that are extremely high (~10 g) and that are believed to be physically unrealizable. However, there is at present no generally accepted method to bound ground motions either by showing that the physical properties of materials cannot maintain such extreme motions, or that the energy release by the source for such large motions is physically impossible. The purpose of this feasibility study is to examine recorded ground motion and rock property data from nuclear explosions to determine their usefulness for studying the ground motion from extreme earthquakes. The premise is that nuclear explosions are an extreme energy density source, and that the recorded ground motion will provide useful information about the limits of ground motion from extreme earthquakes. The data were categorized by source and rock properties and evaluated as to what extent non-linearity in the material has affected the recordings. The authors also compiled existing results of non-linear dynamic modeling of the explosions carried out by LLNL and other institutions, conducted an extensive literature review to outline the current understanding of extreme ground motion, and analyzed the data in terms of estimating maximum ground motions at Yucca Mountain.
Broadband Analysis of the Energetics of Earthquakes and Tsunamis in the Sunda Forearc from 1987-2012
NASA Astrophysics Data System (ADS)
Choy, G. L.; Kirby, S. H.; Hayes, G. P.
2013-12-01
In the eighteen years before the 2004 Sumatra Mw 9.1 earthquake, the forearc off Sumatra experienced only one large (Mw > 7.0) thrust event and no earthquakes that generated measurable tsunami wave heights. In the subsequent eight years, twelve large thrust earthquakes occurred, of which half generated measurable tsunamis. The number of broadband earthquakes (those events with Mw > 5.5 for which broadband teleseismic waveforms have sufficient signal to compute depths, focal mechanisms, moments and radiated energies) jumped sixfold after 2004. The progression of tsunami earthquakes, as well as the profuse increase in broadband activity, strongly suggests regional stress adjustments following the Sumatra 2004 megathrust earthquake. Broadband source parameters, published routinely in the Source Parameters (SOPAR) database of the USGS's NEIC (National Earthquake Information Center), have provided the most accurate depths and locations of big earthquakes since the implementation of modern digital seismographic networks. Moreover, radiated energy and seismic moment (also found in SOPAR) are related to apparent stress, which is a measure of fault maturity. In mapping apparent stress as a function of depth and focal mechanism, we find that about 12% of broadband thrust earthquakes in the subduction zone are unequivocally above or below the slab interface. Apparent stresses of upper-plate events are associated with failure on mature splay faults, some of which generated measurable tsunamis. One unconventional source for local wave heights was a large intraslab earthquake. High-energy upper-plate events, which are dominant in the Aceh Basin, are associated with immature faults, which may explain why the region was bypassed by significant rupture during the 2004 Sumatra earthquake. The majority of broadband earthquakes are non-randomly concentrated under the outer-arc high. They appear to delineate the periphery of the contiguous rupture zones of large earthquakes.
A not uncommon occurrence at the outer-arc high is a large (Mw > 7.0) earthquake followed by another event, also of large magnitude, in very close spatial (<50 km) proximity within a short time (days to months). The physical separation between these events provides constraints on the nature of barriers to rupture propagation. Some of the glaring disparities in seismic damage and tsunami excitation for earthquakes with the same magnitude can be attributed to differences between rupture properties landward and seaward of the outer-arc high. Although most of the studied broadband earthquakes occurred in the wake of the Sumatra 2004 megathrust event, they illuminate tectonic features that exert a strong influence on rupture growth and extent. The application of broadband analysis to other island arcs will complement current criteria for evaluating seismic and tsunami potential.
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative, the filtered shot-noise process, eliminates these errors.
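The filtered shot-noise construction can be sketched in a few lines: a Poisson train of random impulses ("shot noise") is convolved with the impulse response of a single-degree-of-freedom ground filter. All parameter values below (arrival rate, filter frequency, damping) are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def filtered_shot_noise(duration, dt, rate, f0, zeta, seed=0):
    """Sketch of a filtered shot-noise accelerogram.

    Impulses arrive as a Poisson process with mean `rate` per second,
    carry random amplitudes, and are passed through an SDOF ground
    filter with natural frequency f0 (Hz) and damping ratio zeta.
    """
    rng = np.random.default_rng(seed)
    n = int(round(duration / dt))
    t = np.arange(n) * dt
    # Poisson impulse train with random signs and amplitudes
    shots = rng.poisson(rate * dt, n) * rng.standard_normal(n)
    # impulse response of the damped SDOF ground filter
    wn = 2 * np.pi * f0
    wd = wn * np.sqrt(1 - zeta**2)
    h = np.exp(-zeta * wn * t) * np.sin(wd * t)
    accel = np.convolve(shots, h)[:n] * dt
    return t, accel

t, a = filtered_shot_noise(duration=20.0, dt=0.01, rate=50.0, f0=2.5, zeta=0.6)
```

Because the impulses arrive throughout the duration rather than being shaped by an a posteriori envelope, the long-period content stays consistent with the faulting duration, which is the advantage the abstract attributes to this process.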
Energy Partition and Variability of Earthquakes
NASA Astrophysics Data System (ADS)
Kanamori, H.
2003-12-01
During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for rupture initiation, evolution, and the excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, ΔW0; estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW > 7 and MW > 4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, an estimate of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated.
EG: Seismologically, EG is the energy mechanically dissipated during faulting. In the context of the slip-weakening model, EG can be estimated from ΔW0 and ER. Alternatively, EG can be estimated from laboratory data on the surface energy, the grain size, and the total volume of newly formed fault gouge. This method suggests that, for crustal earthquakes with MW > 7, EG/ER is very small, less than 0.2 even for extreme cases. This is consistent with the EG estimated with seismological methods and with the fast rupture speeds during most large earthquakes. For shallow subduction-zone earthquakes, EG/ER varies substantially depending on the tectonic environment. EH: Direct estimation of EH is difficult. However, even with modest friction, EH can be large enough to melt or even dissociate a significant amount of material near the slip zone for large events with large slip, and the associated thermal effects may have significant consequences for fault dynamics. The energy partition varies significantly for different types of earthquakes (e.g., large earthquakes on mature faults, large earthquakes on faults with low slip rates, subduction-zone earthquakes, deep-focus earthquakes); this variability manifests itself in differences in the evolution of the seismic slip pattern. The different behaviors will be illustrated using examples from large earthquakes, including the 2001 Kunlun, the 1998 Balleny Islands, the 1994 Bolivia, the 2001 India, the 1999 Chi-Chi, and the 2002 Denali earthquakes.
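The seismological bookkeeping above reduces to simple arithmetic once ER and the seismic moment M0 are measured. The sketch below uses assumed round values for rigidity, moment, radiated energy, and stress drop (not estimates from the review) to show how the apparent stress and the lower bound ΔW0 follow.

```python
MU = 3.0e10  # assumed crustal rigidity (Pa)

def apparent_stress(e_r, m0):
    """Apparent stress: sigma_a = mu * ER / M0 (Pa)."""
    return MU * e_r / m0

def min_strain_energy(stress_drop, m0):
    """Lower bound of released energy: DW0 = stress_drop * M0 / (2 mu) (J)."""
    return stress_drop * m0 / (2.0 * MU)

m0 = 1.0e20                             # seismic moment, roughly Mw 7.3 (N m)
e_r = 5.0e15                            # radiated energy (J), assumed
print(apparent_stress(e_r, m0))         # Pa
print(min_strain_energy(3.0e6, m0))     # J, for an assumed 3 MPa stress drop
```

In this illustration ER ≈ ΔW0, i.e. EG/ER is small and the event is on the "brittle" side of the partition discussed above; a slow event would have ER far below ΔW0.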
Understanding dynamic friction through spontaneously evolving laboratory earthquakes
Rubino, V.; Rosakis, A. J.; Lapusta, N.
2017-01-01
Friction plays a key role in how ruptures unzip faults in the Earth's crust and release waves that cause destructive shaking. Yet dynamic friction evolution is one of the biggest uncertainties in earthquake science. Here we report on novel measurements of evolving local friction during spontaneously developing mini-earthquakes in the laboratory, enabled by our ultrahigh-speed full-field imaging technique. The technique captures the evolution of displacements, velocities and stresses of dynamic ruptures, whose rupture speeds range from sub-Rayleigh to supershear. The observed friction has a complex evolution, featuring initial velocity strengthening followed by substantial velocity weakening. Our measurements are consistent with rate-and-state friction formulations supplemented with flash heating but not with widely used slip-weakening friction laws. This study develops a new approach for measuring the local evolution of dynamic friction and has important implications for understanding earthquake hazard, since laws governing the frictional resistance of faults are vital ingredients in physically based predictive models of the earthquake source. PMID:28660876
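A common way to write the rate-and-state-plus-flash-heating combination the abstract favors is the Rice-style steady-state law, in which the low-velocity rate-and-state branch weakens toward a low residual friction above a weakening velocity. The parameter values below are generic illustrative defaults, not the paper's fits.

```python
import math

def steady_state_friction(v, f0=0.6, a=0.01, b=0.014, v0=1e-6,
                          fw=0.2, vw=0.1):
    """Steady-state friction with flash heating (illustrative sketch).

    Low-velocity branch f_lv = f0 + (a - b) * ln(v / v0) is blended
    toward the weakened level fw above the weakening velocity vw:
        f_ss = fw + (f_lv - fw) / (1 + v / vw)
    """
    f_lv = f0 + (a - b) * math.log(v / v0)
    return fw + (f_lv - fw) / (1.0 + v / vw)

slow = steady_state_friction(1e-6)   # near plate-rate sliding
fast = steady_state_friction(1.0)    # ~1 m/s coseismic slip rate
print(slow, fast)                    # friction drops strongly at seismic rates
```

The transient strengthening reported in the experiments corresponds to the rate-and-state direct effect (the `a` term) before state evolution and flash heating take over; a pure slip-weakening law has no mechanism for that initial strengthening.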
Scaling relation between earthquake magnitude and the departure time from P wave similar growth
Noda, Shunta; Ellsworth, William L.
2016-01-01
We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
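The departure-time measurement described above can be sketched as a simple picker: compare the band-passed absolute P displacement against the common early-growth curve and report the first time the deviation exceeds a tolerance. The growth shapes, tolerance, and synthetic waveform below are illustrative stand-ins, not the K-NET processing of the paper.

```python
import numpy as np

def departure_time(t, disp, ref_growth, tol=0.3):
    """Return the first time the absolute displacement departs from the
    reference similar-growth curve by more than `tol` (fractional),
    or None if it never departs within the window."""
    rel = np.abs(np.abs(disp) - ref_growth) / np.maximum(ref_growth, 1e-12)
    idx = int(np.argmax(rel > tol))
    if rel[idx] <= tol:
        return None
    return t[idx]

t = np.linspace(0.01, 3.0, 300)
ref = t**2                                             # common early-growth shape
disp = np.where(t < 1.0, t**2, t**2 * (1 + 2*(t - 1)))  # departs after ~1 s
tdp = departure_time(t, disp, ref)
print(tdp)
```

In the scaling relation of the abstract, a larger Tdp maps to a larger final Mw, which is what makes the quantity usable for early warning before rupture completes.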
Seismic hazard analysis with PSHA method in four cities in Java.
NASA Astrophysics Data System (ADS)
Elistyawati, Y.; Palupi, I. R.; Suharsono
2016-11-01
In this study, tectonic earthquake hazard was assessed in terms of peak ground acceleration using the PSHA method, dividing the region into earthquake source zones. The study applied earthquake data from 1965-2015, first analyzed for catalog completeness; the study area was the whole of Java, with emphasis on four large, earthquake-prone cities. The results are hazard maps for 500-year and 2500-year return periods, together with hazard curves for the four major cities (Jakarta, Bandung, Yogyakarta, and Banyuwangi). The 500-year PGA hazard map for Java shows peak ground accelerations ranging from 0 g to ≥ 0.5 g, while the 2500-year return period yields values from 0 to ≥ 0.8 g. In the city hazard curves, the most influential source for Jakarta is the Cimandiri fault background source; for Bandung, a background fault source; for Yogyakarta, the background source of the Opak fault; and for Banyuwangi, the Java and Sumba megathrust sources.
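The core PSHA computation behind such hazard maps and curves can be sketched very compactly: for each source, the annual rate of exceeding a ground-motion level is the source's activity rate times the probability (lognormal about a GMPE-style median) that its motion exceeds that level. The source rates, medians, and sigma below are illustrative stand-ins, not the Java source model.

```python
import numpy as np
from math import erfc, sqrt

def hazard_curve(pga_levels, sources):
    """Minimal PSHA sketch: annual rate of exceeding each PGA level.

    Each source is (annual_rate, median_pga_g, ln_sigma); ground motion
    is lognormally distributed about the median.
    """
    sf = np.vectorize(lambda z: 0.5 * erfc(z / sqrt(2)))  # P(Z > z)
    lam = np.zeros_like(pga_levels, dtype=float)
    for rate, median, sigma in sources:
        z = (np.log(pga_levels) - np.log(median)) / sigma
        lam += rate * sf(z)
    return lam

levels = np.array([0.1, 0.3, 0.5, 0.8])           # PGA in g
srcs = [(0.02, 0.25, 0.6), (0.005, 0.45, 0.6)]    # (rate/yr, median g, sigma)
lam = hazard_curve(levels, srcs)
# a 475-year design motion sits where lam is about 1/475 per year
```

The 500-year and 2500-year maps in the abstract correspond to reading such curves at fixed annual exceedance rates across a grid of sites.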
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC on Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods.
We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.
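One of the simplest consistency checks in CSEP-style evaluation is a Poisson number (N-) test: compare the observed event count against the forecast rate. The sketch below, with illustrative numbers, computes the two one-sided quantile scores; a forecast is typically rejected at level alpha when either falls below alpha/2. The actual CSEP test suite is richer (spatial, likelihood, and ratio tests).

```python
from math import exp, factorial

def n_test(forecast_rate, observed_n):
    """Poisson N-test sketch: returns (delta1, delta2) where
    delta1 = P(N >= n_obs) and delta2 = P(N <= n_obs) under the
    forecast rate."""
    def cdf(n, lam):
        return sum(exp(-lam) * lam**k / factorial(k) for k in range(n + 1))
    delta2 = cdf(observed_n, forecast_rate)
    delta1 = 1.0 - cdf(observed_n - 1, forecast_rate) if observed_n > 0 else 1.0
    return delta1, delta2

d1, d2 = n_test(forecast_rate=10.0, observed_n=14)
print(d1, d2)   # neither extreme: 14 observed events are consistent with rate 10
```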
NASA Astrophysics Data System (ADS)
Liu, Bo-Yan; Shi, Bao-Ping; Zhang, Jian
2007-05-01
In this study, a composite source model has been used to calculate realistic strong ground motions in the Beijing area caused by the 1679 MS 8.0 Sanhe-Pinggu earthquake. The results provide useful physical parameters for future seismic hazard analysis in this area. Considering the regional geological/geophysical background, we simulated the scenario earthquake and its associated ground motions in the area ranging from 39.3°N to 41.1°N in latitude and from 115.35°E to 117.55°E in longitude. Some of the key factors that could influence the characteristics of strong ground motion are discussed, and maps of the resulting peak ground acceleration (PGA) and peak ground velocity (PGV) distributions around the Beijing area have been produced. A comparison of the simulated results with results derived from an attenuation relation has been made, along with a discussion of the advantages and disadvantages of the composite source model. The numerical results, such as the PGA, PGV, peak ground displacement (PGD), and the three-component time histories developed for the Beijing area, have potential applications in earthquake engineering and building code design, especially for the evaluation of critical constructions, government decision making, and seismic hazard assessment by financial/insurance companies.
Earthquake early warning for Romania - most recent improvements
NASA Astrophysics Data System (ADS)
Marmureanu, Alexandru; Elia, Luca; Martino, Claudio; Colombelli, Simona; Zollo, Aldo; Cioflan, Carmen; Toader, Victorin; Marmureanu, Gheorghe; Marius Craiu, George; Ionescu, Constantin
2014-05-01
The EWS for Vrancea earthquakes uses the time interval (28-32 s) between the moment when an earthquake is detected by the local seismic network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area (Bucharest) to send earthquake warnings to users. In recent years, the National Institute for Earth Physics (NIEP) has upgraded its seismic network to better cover the seismic zones of Romania. NIEP currently operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Ranger, GS21, Mark L22) and acceleration sensors (Episensor). Recent improvements in the seismic network and real-time communication technologies allow implementation of a nation-wide EEWS for Vrancea and other seismic sources in Romania. We present a regional approach to earthquake early warning for Romanian earthquakes. The regional approach is based on the PRESTo (Probabilistic and Evolutionary early warning SysTem) software platform: PRESTo processes three-channel acceleration data streams in real time; once the P-wave arrivals have been detected, it provides earthquake location and magnitude estimates, and peak ground motion predictions at target sites. PRESTo has been running in real time at the National Institute for Earth Physics, Bucharest, for several months, in parallel with a secondary EEWS. The alert notification is issued only when both systems validate each other.
Here we present results obtained by replaying offline earthquakes originating in the Vrancea area, together with several real-time detections of significant earthquakes in the Vrancea and Transylvania areas that occurred in recent months. Currently the warning notification is sent to several users, including emergency response units in 12 counties, a major bridge in Bucharest, a nuclear sterilization facility in Măgurele, and the Cernavoda nuclear power plant.
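The warning window such a front-detection system can offer is simple to estimate: the S-wave travel time to the target minus the time spent detecting the event near the source and delivering the alert. The speeds and latencies below are assumed round values, not NIEP system constants; the example is only meant to show how a Vrancea-to-Bucharest distance yields a lead time of a few tens of seconds, in the spirit of the 28-32 s window quoted above.

```python
def warning_time(epi_distance_km, s_speed_km_s=3.5,
                 detect_latency_s=4.0, comms_latency_s=1.0):
    """Illustrative early-warning lead time at a distant target.

    Lead time = S-wave travel time to the target minus detection and
    alert-delivery latencies (all values are assumptions).
    """
    s_arrival = epi_distance_km / s_speed_km_s
    return s_arrival - detect_latency_s - comms_latency_s

# Vrancea to Bucharest is roughly 130-150 km epicentral distance
print(warning_time(140.0))   # → 35.0 seconds of lead time
```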
Methodology to determine the parameters of historical earthquakes in China
NASA Astrophysics Data System (ADS)
Wang, Jian; Lin, Guoliang; Zhang, Zhe
2017-12-01
China is one of the countries with the longest cultural traditions, and it has also suffered very heavy earthquake disasters; as a result, abundant earthquake recordings exist. In this paper, we sketch the historical earthquake sources and research achievements in China. We introduce basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of the historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to amend the parameters of historical earthquakes, or even to identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. The abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable cultural heritage for the world.
Modeling Explosion Induced Aftershocks
NASA Astrophysics Data System (ADS)
Kroll, K.; Ford, S. R.; Pitarka, A.; Walter, W. R.; Richards-Dinger, K. B.
2017-12-01
Many traditional earthquake-explosion discrimination tools are based on properties of the seismic waveform or their spectral components. Common discrimination methods include estimates of body wave amplitude ratios, surface wave magnitude scaling, moment tensor characteristics, and depth. Such methods are limited by station coverage and noise. Ford and Walter (2010) proposed an alternate discrimination method based on using properties of aftershock sequences as a means of earthquake-explosion differentiation. Previous studies have shown that explosion sources produce fewer aftershocks that are generally smaller in magnitude compared to aftershocks of similarly sized earthquake sources (Jarpe et al., 1994, Ford and Walter, 2010). It has also been suggested that the explosion-induced aftershocks have smaller Gutenberg-Richter b-values (Ryall and Savage, 1969) and that their rates decay faster than a typical Omori-like sequence (Gross, 1996). To discern whether these observations are generally true of explosions or are related to specific site conditions (e.g. explosion proximity to active faults, tectonic setting, crustal stress magnitudes) would require a thorough global analysis. Such a study, however, is hindered both by lack of evenly distributed explosion-sources and the availability of global seismicity data. Here, we employ two methods to test the efficacy of explosions at triggering aftershocks under a variety of physical conditions. First, we use the earthquake rate equations from Dieterich (1994) to compute the rate of aftershocks related to an explosion source assuming a simple spring-slider model. We compare seismicity rates computed with these analytical solutions to those produced by the 3D, multi-cycle earthquake simulator, RSQSim. We explore the relationship between geological conditions and the characteristics of the resulting explosion-induced aftershock sequence.
We also test the hypothesis that aftershock generation depends upon the frequency content of the passing dynamic seismic waves, as suggested by Parsons and Velasco (2009). Lastly, we compare all results for explosion-induced aftershocks with aftershocks generated by similarly sized earthquake sources. Prepared by LLNL under Contract DE-AC52-07NA27344.
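The Dieterich (1994) rate response to a sudden stress step, the analytical backbone of the spring-slider comparison above, can be sketched directly. The stress step, a*sigma, and aftershock duration below are illustrative values, not those of the study.

```python
import math

def dieterich_rate(t, stress_step, a_sigma, t_a, r_background=1.0):
    """Seismicity rate after a sudden stress step (Dieterich, 1994):

        R(t) = r / (1 + (exp(-dtau / (a*sigma)) - 1) * exp(-t / ta))

    The rate jumps by exp(dtau / a_sigma) at t = 0 and relaxes back to
    the background rate r over the aftershock duration ta.
    """
    gamma = (math.exp(-stress_step / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r_background / (1.0 + gamma)

# immediate jump and eventual recovery for a 0.5 MPa step, a*sigma = 0.1 MPa
r0 = dieterich_rate(0.0, 0.5, 0.1, t_a=100.0)      # ~exp(5) times background
r_late = dieterich_rate(1000.0, 0.5, 0.1, t_a=100.0)  # back near background
```

For an explosion source, the transient (rather than sustained) nature of the stress change and site conditions control how far this sequence deviates from a tectonic Omori decay, which is the discrimination signal the abstract targets.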
Construction of Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Kubo, H.
2013-12-01
Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large slip areas of heterogeneous slip models and the total SMGA sizes against seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA size to large slip area with seismic moment. They concluded that this tendency would be caused by differences in the period range of the source modeling analyses. In this paper, we develop a methodology for constructing source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it through strong ground motion modeling.
Earthquake scaling laws for rupture geometry and slip heterogeneity
NASA Astrophysics Data System (ADS)
Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro
2016-04-01
We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. 
Applying Box-Cox transformation to slip distributions (to create quasi-normal distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down dip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
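A stochastic slip scenario with the von Karman correlation structure described above can be generated by spectral synthesis: shape white spectral noise by the von Karman power spectrum and invert the 2-D FFT. The grid dimensions and correlation lengths below are illustrative, and the random-phase construction is a simplified sketch of the full characterization (it omits the truncated-exponential marginal distribution).

```python
import numpy as np

def von_karman_slip(nx, nz, dx, ax, az, hurst, seed=0):
    """Random slip field with an approximate 2-D von Karman correlation.

    Power spectrum ~ (1 + k'^2)^-(hurst + 1), with wavenumbers scaled
    by the correlation lengths ax (along strike) and az (down dip).
    Returns a non-negative field normalized to unit mean slip.
    """
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, dx) * 2 * np.pi
    kz = np.fft.fftfreq(nz, dx) * 2 * np.pi
    kxg, kzg = np.meshgrid(kx, kz)
    k2 = (kxg * ax) ** 2 + (kzg * az) ** 2
    psd = (1.0 + k2) ** -(hurst + 1.0)
    phase = np.exp(2j * np.pi * rng.random((nz, nx)))
    field = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    field -= field.min()                 # slip must be non-negative
    return field / field.mean()          # normalize to unit mean slip

slip = von_karman_slip(nx=128, nz=64, dx=1.0, ax=20.0, az=10.0, hurst=0.83)
```

Scaling the mean slip and the correlation lengths with seismic moment, as the derived relationships prescribe, turns this sketch into a scenario generator for ground-motion and tsunami simulation inputs.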
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
NASA Astrophysics Data System (ADS)
CyberShake: A Physics-Based Seismic Hazard Model for Southern California
Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.
2011-01-01
CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with the Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first-generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). 
Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and magnitude uncertainty estimates used in the definition of the ruptures than is found in the traditional GMPE approach. This reinforces the need for continued development of a better understanding of earthquake source characterization and the constitutive relations that govern the earthquake rupture process. © 2010 Springer Basel AG.
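The final step the abstract describes — combining peak intensity measures with rupture probabilities into a probabilistic hazard curve — can be sketched in a few lines. This is a minimal illustration with made-up rates and PGA values, not CyberShake code or data:

```python
import numpy as np

def hazard_curve(annual_rates, peak_ims, im_levels):
    """Annual exceedance rate at each intensity-measure (IM) level:
    for every level, sum the annual rates of all rupture variations
    whose simulated peak IM exceeds that level."""
    annual_rates = np.asarray(annual_rates, dtype=float)
    peak_ims = np.asarray(peak_ims, dtype=float)
    return np.array([annual_rates[peak_ims > lev].sum() for lev in im_levels])

# Toy inputs: three rupture variations with annual rates and the peak
# ground acceleration (g) each produced in a simulation.
rates = [0.01, 0.002, 0.0005]
pgas = [0.1, 0.3, 0.6]
levels = [0.05, 0.2, 0.5]
curve = hazard_curve(rates, pgas, levels)
print(curve)  # exceedance rate falls as the PGA level rises
```

In a real CyberShake-style calculation the ~415,000 rupture variations per site replace the three toy entries, but the bookkeeping is the same.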
Time-Lapse Monitoring with 4D Seismic Coda Waves in Active, Passive and Ambient Noise Data
NASA Astrophysics Data System (ADS)
Lumley, D. E.; Kamei, R.; Saygin, E.; Shragge, J. C.
2017-12-01
The Earth's subsurface is continuously changing, due to temporal variations in fluid flow, stress, temperature, geomechanics and geochemistry, for example. These physical changes occur at broad tectonic and earthquake scales, and also at very detailed near-surface and reservoir scales. Changes in the physical states of the earth cause time-varying changes in the physical properties of rocks and fluids, which can be monitored with natural or manmade seismic waves. Time-lapse (4D) seismic monitoring is important for applications related to natural and induced seismicity, hydrocarbon and groundwater reservoir depletion, CO2 sequestration, etc. An exciting new research area involves moving beyond traditional methods in order to use the full complex time-lapse scattered wavefield (4D coda waves) for both manmade active-source 3D/4D seismic data, and also to use continuous recordings of natural-source passive seismic data, especially (micro) earthquakes and ocean ambient noise. This research involves full wave-equation approaches including full waveform inversion (FWI), interferometry, large-N sensor arrays, "big data" information theory, and high-performance supercomputing (HPC). I will present high-level concepts and recent data results that are quite spectacular and highly encouraging.
Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis
2015-01-01
The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, for the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs spanning 100,000 years and containing more than 500,000 events of magnitude ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
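The departure from Gutenberg-Richter behavior mentioned above can be checked on any such synthetic catalog by estimating the b-value. A minimal sketch using the standard Aki (1965) maximum-likelihood estimator on a purely illustrative exponential catalog (not the CGFS simulator output):

```python
import numpy as np

def b_value_mle(mags, m_min, dm=0.0):
    """Aki (1965) maximum-likelihood Gutenberg-Richter b-value for a
    catalog complete above m_min (dm is the magnitude bin width;
    0 treats magnitudes as continuous)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

# Illustrative catalog: exponentially distributed magnitudes above
# M 4.0 with a true b-value of 1.0.
rng = np.random.default_rng(42)
beta = 1.0 * np.log(10.0)  # b * ln(10)
mags = 4.0 + rng.exponential(1.0 / beta, size=50_000)
b = b_value_mle(mags, m_min=4.0)
print(round(b, 2))  # close to the true value of 1.0
```

Applied to a simulated catalog, a systematic deficit of the largest magnitudes relative to this fitted exponential is exactly the high-magnitude departure the abstract describes.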
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Xu, H.; Pasyanos, M. E.; Pyle, M. L.; Matzel, E.; Mellors, R. J.; Hauk, T. F.
2012-12-01
It is well established empirically that regional distance (200-1600 km) amplitude ratios of seismic P-to-S waves at sufficiently high frequencies (~>2 Hz) can identify explosions among a background of natural earthquakes. However, the physical basis for the generation of explosion S-waves, and therefore the predictability of this P/S technique as a function of event properties such as size, depth, geology and path, remains incompletely understood. A goal of the Source Physics Experiments (SPE) at the Nevada National Security Site (NNSS, formerly the Nevada Test Site (NTS)) is to improve our physical understanding of the mechanisms of explosion S-wave generation and advance our ability to numerically model and predict them. Current models of explosion P/S values suggest they are frequency dependent, with poor performance below the source corner frequencies and good performance above. This leads to expectations that small-magnitude explosions might require much higher frequencies (>10 Hz) to identify them. Interestingly, the 1-ton chemical source physics explosions SPE2 and SPE3 appear to discriminate well from background earthquakes in the frequency band 6-8 Hz, where P and S signals are visible at the NVAR array located near Mina, NV, about 200 km away. NVAR is a primary seismic station in the International Monitoring System (IMS), part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The NVAR broadband element NV31 is co-located with the LLNL station MNV that recorded many NTS nuclear tests, allowing the comparison. We find the small SPE explosions in granite have similar Pn/Lg values at 6-8 Hz as the past nuclear tests, mainly in softer rocks. We are currently examining a number of other stations in addition to NVAR, including the dedicated SPE stations that recorded the SPE explosions at much closer distances with very high sample rates, in order to better understand the observed frequency dependence as compared with the model predictions. 
We plan to use these observations to improve our explosion models and our ability to understand and predict where P/S methods of identifying explosions work and any circumstances where they may not. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
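A band-limited P/S amplitude ratio of the kind used for discrimination can be sketched as below; the 6-8 Hz band follows the abstract, but the waveforms, windows, and sampling are toy assumptions, not SPE or NVAR data:

```python
import numpy as np

def band_rms(x, fs, f1, f2):
    """RMS amplitude of x restricted to the [f1, f2] Hz band."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f1) & (freqs <= f2)
    return np.sqrt(2.0 * np.sum(np.abs(spec[band]) ** 2)) / len(x)

def p_over_s(p_win, s_win, fs, f1=6.0, f2=8.0):
    """High-frequency P/S discriminant: band-limited amplitude ratio of
    the P window to the S window (larger for explosion-like sources)."""
    return band_rms(p_win, fs, f1, f2) / band_rms(s_win, fs, f1, f2)

# Toy windows at 100 Hz: an explosion-like source (strong 7 Hz P, weak S)
# versus an earthquake-like source (weak P, strong S).
fs = 100.0
t = np.arange(0.0, 5.0, 1.0 / fs)
strong = 2.0 * np.sin(2 * np.pi * 7.0 * t)
weak = 0.5 * np.sin(2 * np.pi * 7.0 * t)
ratio_explosion = p_over_s(strong, weak, fs)
ratio_earthquake = p_over_s(weak, strong, fs)
print(ratio_explosion, ratio_earthquake)  # ~4.0 and ~0.25
```

In practice the P and S windows would be cut from the same record around the picked arrivals, and the ratio compared against an empirically calibrated threshold.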
Geophysical Anomalies and Earthquake Prediction
NASA Astrophysics Data System (ADS)
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with long recurrence intervals in any one location. Third, the anomalies are generally complex as well. 
Electromagnetic anomalies in particular require some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.
NASA Astrophysics Data System (ADS)
Milner, K. R.; Shaw, B. E.; Gilchrist, J. J.; Jordan, T. H.
2017-12-01
Probabilistic seismic hazard analysis (PSHA) is typically performed by combining an earthquake rupture forecast (ERF) with a set of empirical ground motion prediction equations (GMPEs). ERFs have typically relied on observed fault slip rates and scaling relationships to estimate the rate of large earthquakes on pre-defined fault segments, either ignoring or relying on expert opinion to set the rates of multi-fault or multi-segment ruptures. Version 3 of the Uniform California Earthquake Rupture Forecast (UCERF3) is a significant step forward, replacing expert opinion and fault segmentation with an inversion approach that matches observations better than prior models while incorporating multi-fault ruptures. UCERF3 is a statistical model, however, and does not incorporate the physics of earthquake nucleation, rupture propagation, and stress transfer. We examine the feasibility of replacing UCERF3, or components therein, with physics-based rupture simulators such as the Rate-State Earthquake Simulator (RSQSim), developed by Dieterich & Richards-Dinger (2010). RSQSim simulations on the UCERF3 fault system produce catalogs of seismicity that match long-term rates on major faults, and produce remarkable agreement with UCERF3 when carried through to PSHA calculations. Averaged over a representative set of sites, the RSQSim-UCERF3 hazard-curve differences are comparable to the small differences between UCERF3 and its predecessor, UCERF2. The hazard-curve agreement between the empirical and physics-based models provides substantial support for the PSHA methodology. RSQSim catalogs include many complex multi-fault ruptures, which we compare with the UCERF3 rupture-plausibility metrics as well as recent observations. Complications in generating physically plausible kinematic descriptions of multi-fault ruptures have thus far prevented us from using UCERF3 in the CyberShake physics-based PSHA platform, which replaces GMPEs with deterministic ground motion simulations. 
RSQSim produces full slip/time histories that can be directly implemented as sources in CyberShake, without relying on the conditional hypocenter and slip distributions needed for the UCERF models. We also compare RSQSim with time-dependent PSHA calculations based on multi-fault renewal models.
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. 
The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km from some faults.
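The moment-balance reasoning behind such rate discrepancies can be illustrated with a short calculation: given a fault's slip rate and a characteristic magnitude, the implied event rate follows from the moment released per event. The fault dimensions, slip rate, and shear modulus below are illustrative assumptions, not values from the CDMG-USGS model:

```python
MU = 3.0e10  # shear modulus, Pa (a common crustal assumption)

def moment_from_mw(mw):
    """Seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def characteristic_rate(mw, fault_area_m2, slip_rate_m_yr):
    """Annual event rate if all seismogenic slip on the fault is
    released in characteristic earthquakes of magnitude mw."""
    slip_per_event = moment_from_mw(mw) / (MU * fault_area_m2)
    return slip_rate_m_yr / slip_per_event

# An M 7.0 characteristic event on a 60 km x 15 km fault slipping 5 mm/yr:
rate = characteristic_rate(7.0, 60e3 * 15e3, 0.005)
print(1.0 / rate)  # mean recurrence interval in years (a few centuries)
```

Raising the maximum magnitude or lowering the seismogenic moment rate lowers the event rate, which is exactly the direction of the adjustments the abstract explores.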
NASA Astrophysics Data System (ADS)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
On the relation of earthquake stress drop and ground motion variability
NASA Astrophysics Data System (ADS)
Oth, Adrien; Miyake, Hiroe; Bindi, Dino
2017-07-01
One of the key parameters for earthquake source physics is stress drop since it can be directly linked to the spectral level of ground motion. Stress drop estimates from moment corner frequency analysis have been shown to be extremely variable, to a much larger degree than expected from the between-event ground motion variability. This discrepancy raises the question whether classically determined stress drop variability is too large, which would have significant consequences for seismic hazard analysis. We use a large high-quality data set from Japan with well-studied stress drop data to address this issue. Nonparametric and parametric reference ground motion models are derived, and the relation of between-event residuals for Japan Meteorological Agency equivalent seismic intensity and peak ground acceleration with stress drop is analyzed for crustal earthquakes. We find a clear correlation of the between-event residuals with stress drop estimates; however, while the island of Kyushu is characterized by substantially larger stress drops than Honshu, the between-event residuals do not reflect this observation, leading to the appearance of two event families with different stress drop levels yet a similar range of between-event residuals. Both the within-family and between-family stress drop variations are larger than expected from the ground motion between-event variability. A systematic common analysis of these parameters holds the potential to provide important constraints on the relative robustness of different groups of data in the different parameter spaces and to improve our understanding of how much of the observed source parameter variability is likely to be true source physics variability.
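Stress drop estimates from moment-corner frequency analysis of the kind discussed above are commonly based on the Brune (1970) circular-source model; a minimal sketch with illustrative numbers, not the Japanese data set:

```python
import math

def brune_stress_drop(m0_nm, fc_hz, beta_m_s=3500.0):
    """Stress drop (Pa) from seismic moment and corner frequency for a
    Brune (1970) circular source: source radius r = 2.34*beta/(2*pi*fc),
    stress drop = 7*M0 / (16*r**3)."""
    r = 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r ** 3)

# Example: roughly an M 5 event (M0 ~ 3.5e16 N*m) with a 1 Hz corner
# frequency in crust with shear-wave speed 3.5 km/s.
dsigma = brune_stress_drop(3.5e16, 1.0)
print(dsigma / 1e6, "MPa")
```

Because the radius enters cubed, modest errors or trade-offs in the picked corner frequency inflate the stress drop scatter dramatically, which is one reason the classically determined variability can exceed the between-event ground motion variability.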
The Global Earthquake Model - Past, Present, Future
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Stein, Ross
2014-05-01
The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange. Sharing of data and risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through consortium-driven global projects, open-source IT development and collaborations with more than 10 regions, leading experts are developing unique global datasets, best practices, open tools and models for seismic hazard and risk assessment. The year 2013 saw the completion of ten global data sets or components addressing various aspects of earthquake hazard and risk, as well as two GEM-related but independently managed regional projects, SHARE and EMME. Notably, the International Seismological Centre (ISC) led the development of a new ISC-GEM global instrumental earthquake catalogue, which was made publicly available in early 2013. It has set a new standard for global earthquake catalogues and has found widespread acceptance and application in the global earthquake community. By the end of 2014, GEM's OpenQuake computational platform will provide the OpenQuake hazard/risk assessment software and integrate all GEM data and information products. 
The public release of OpenQuake is planned for the end of 2014, and will comprise the following datasets and models: • ISC-GEM Instrumental Earthquake Catalogue (released January 2013) • Global Earthquake History Catalogue [1000-1903] • Global Geodetic Strain Rate Database and Model • Global Active Fault Database • Tectonic Regionalisation Model • Global Exposure Database • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerabilities Database • Socio-Economic Vulnerability and Resilience Indicators • Seismic Source Models • Ground Motion (Attenuation) Models • Physical Exposure Models • Physical Vulnerability Models • Composite Index Models (social vulnerability, resilience, indirect loss) • Repository of national hazard models • Uniform global hazard model. Armed with these tools and databases, stakeholders worldwide will then be able to calculate, visualise and investigate earthquake risk, capture new data and share their findings for joint learning. Earthquake hazard information can then be combined with data on exposure (buildings, population) and data on their vulnerability, for risk assessment around the globe. Furthermore, for a truly integrated view of seismic risk, users will be able to add social vulnerability and resilience indices and estimate the costs and benefits of different risk management measures. Having finished its first five-year Work Program at the end of 2013, GEM has entered its second five-year Work Program 2014-2018. Beyond maintaining and enhancing the products developed in Work Program 1, the second phase will have a stronger focus on regional hazard and risk activities, and on seeing GEM products used for risk assessment and risk management practice at regional, national and local scales. 
Furthermore, GEM intends to partner with similar initiatives underway for other natural perils, which together are needed to provide the advanced risk assessment methods, tools and data that underpin global disaster risk reduction efforts under the Hyogo Framework for Action #2, to be launched in Sendai, Japan, in spring 2015.
NASA Astrophysics Data System (ADS)
Ide, Satoshi; Maury, Julie
2018-04-01
Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at seismological frequencies, the seismic moment rates are significantly different in geodetic observations. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which are controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explains recent findings of a near-constant source duration for low-frequency earthquake families.
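The contrast between slow- and regular-earthquake scaling that underlies such models can be illustrated numerically: slow earthquakes follow roughly M0 ∝ T, while regular earthquakes follow M0 ∝ T³ (Ide et al., 2007). The proportionality constants below are order-of-magnitude assumptions for illustration, not fitted values:

```python
# Moment-duration scaling sketch: slow earthquakes M0 = C_SLOW * T,
# regular earthquakes M0 = C_FAST * T**3 (constants are rough
# order-of-magnitude guesses, for illustration only).
C_SLOW = 1.0e13  # N*m per s
C_FAST = 1.0e16  # N*m per s**3

def duration_slow(m0):
    return m0 / C_SLOW

def duration_fast(m0):
    return (m0 / C_FAST) ** (1.0 / 3.0)

m0 = 1.0e18  # roughly an Mw 6 moment
t_slow, t_fast = duration_slow(m0), duration_fast(m0)
print(t_slow, t_fast)  # the slow event lasts orders of magnitude longer
```

The linear moment-duration branch is what lets a single stochastic (Brownian) source model tie together tremor, low-frequency events, and geodetically observed slow slip across such a wide band of timescales.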
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
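The AIC-based choice between a single- and double-source model can be sketched on a toy fitting problem; the Gaussian-pulse "sources", parameter counts, and noise level are assumptions for illustration, not the W-phase implementation:

```python
import numpy as np

def aic_least_squares(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors: n data points,
    k free parameters, residual sum of squares rss."""
    return n * np.log(rss / n) + 2 * k

# Toy analogue of single- vs double-source selection: a signal that
# truly contains two pulses, fit by one- and two-pulse models.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 0.5) ** 2)
data = pulse(3.0) + 0.8 * pulse(6.0) + 0.05 * rng.standard_normal(t.size)

rss_one = np.sum((data - pulse(3.0)) ** 2)                      # 1-source fit
rss_two = np.sum((data - pulse(3.0) - 0.8 * pulse(6.0)) ** 2)   # 2-source fit
aic_one = aic_least_squares(rss_one, t.size, k=3)  # amplitude, time, width
aic_two = aic_least_squares(rss_two, t.size, k=6)
print(aic_two < aic_one)  # AIC prefers the double-source model here
```

The 2k penalty is what keeps the richer model from winning by default: for a genuinely simple event, the extra parameters of the double source buy too little misfit reduction and the single-source model is retained.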
Do weak global stresses synchronize earthquakes?
NASA Astrophysics Data System (ADS)
Bendick, R.; Bilham, R.
2017-08-01
Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage that the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
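The integrate-and-fire synchronization invoked above can be demonstrated with a pulse-coupled oscillator simulation in the style of Mirollo and Strogatz (1990); the oscillator count, coupling strength, and concave dynamics are illustrative choices, not a model of the lithosphere:

```python
import numpy as np

def simulate(n=10, eps=0.05, b=3.0, cycles=100, seed=1):
    """Pulse-coupled integrate-and-fire oscillators (Mirollo & Strogatz,
    1990): each state rises concavely toward a threshold of 1, a firing
    kicks every other state up by eps, and any state pushed past the
    threshold fires (and resets) with it. Returns the number of distinct
    phase clusters remaining."""
    f = lambda phi: np.log1p((np.e ** b - 1.0) * phi) / b     # phase -> state
    finv = lambda x: (np.exp(b * x) - 1.0) / (np.e ** b - 1.0)  # state -> phase
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    for _ in range(cycles * n):           # one firing event per iteration
        phi = finv(x)
        x = f(phi + (1.0 - phi.max()))    # advance until the leader fires
        fired = x >= 1.0 - 1e-12
        x[~fired] = np.minimum(x[~fired] + eps, 1.0)  # weak pulse coupling
        x[x >= 1.0 - 1e-12] = 0.0         # absorb and reset together
    return len(np.unique(np.round(x, 6)))

clusters = simulate()
print(clusters)  # clusters merge over time; typically only a few remain
```

Even with weak coupling, absorption events accumulate and the initially scattered phases collapse toward common firing times, which is the qualitative behavior the abstract appeals to.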
Testing earthquake source inversion methodologies
Page, M.; Mai, P.M.; Schorlemmer, D.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays, earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
NASA Astrophysics Data System (ADS)
Cocco, M.; Feuillet, N.; Nostro, C.; Musumeci, C.
2003-04-01
We investigate the mechanical interactions between tectonic faults and volcanic sources through elastic stress transfer and discuss the results of several applications to Italian active volcanoes. We first present stress modeling results that point out a two-way coupling between Vesuvius eruptions and historical earthquakes in the Southern Apennines, which allow us to provide a physical interpretation of their statistical correlation. We then explore the elastic stress interaction between historical eruptions at the Etna volcano and the largest earthquakes in Eastern Sicily and Calabria. We show that the large 1693 seismic event caused an increase of compressive stress along the rift zone, which can be associated with the lack of flank eruptions of the Etna volcano for about 70 years after the earthquake. Moreover, the largest Etna eruptions preceded the large 1693 seismic event by a few decades. Our modeling results clearly suggest that all these catastrophic events are tectonically coupled. We also investigate the effect of elastic stress perturbations on the instrumental seismicity caused by magma inflation at depth, both at the Etna and at the Alban Hills volcanoes. In particular, we model the seismicity pattern at the Alban Hills volcano (central Italy) during a seismic swarm that occurred in 1989-1990 and interpret it in terms of Coulomb stress changes caused by magmatic processes in an extensional tectonic stress field. We verify that the earthquakes occur in areas of Coulomb stress increase and that their faulting mechanisms are consistent with the stress perturbation induced by the volcanic source. Our results suggest a link between faults and volcanic sources, which we interpret as a tectonic coupling explaining the seismicity in a large area surrounding the volcanoes.
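The Coulomb stress transfer calculation at the core of such analyses reduces, for a given receiver fault, to a one-line failure criterion; the stress values and effective friction below are illustrative, not those modeled for Vesuvius, Etna, or the Alban Hills:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change, dCFS = d_tau + mu' * d_sigma_n,
    with shear stress positive in the slip direction of the receiver
    fault and normal stress positive in extension (unclamping).
    Positive dCFS brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal

# Illustrative numbers: a receiver fault loses 0.05 MPa of shear stress
# but is unclamped by 0.2 MPa; the net change still promotes failure.
dcfs = coulomb_stress_change(-0.05e6, 0.2e6)
print(dcfs)  # 0.03 MPa, positive
```

In a full calculation the shear and normal stress changes are first resolved from the source's 3D stress tensor onto each receiver fault's strike, dip, and rake; mapping the sign of dCFS over a region yields the stress increase/decrease lobes compared against observed seismicity.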
NASA Astrophysics Data System (ADS)
Rogozea, Maria; Radulian, Mircea; Placinta, Anica; Toma-Danila, Dragos
2017-04-01
A pair of moderate earthquakes of similar magnitude (Mw = 5.6) occurred in the Vrancea seismic source, a well-defined seismicity nest located in the mantle beneath the South-Eastern Carpathians Arc in Romania. The two events are separated in time by two months (September 23, 2016 at 23:11:20 GMT and December 27, 2016 at 23:20:55 GMT). They are located close to each other (45.714°N, 26.618°E, h = 92 km, and 45.709°N, 26.603°E, h = 99 km, respectively) and could be considered as belonging to an earthquake doublet. Similar doublets generated in the same depth range were recorded on 1 August 1985 (Mw = 5.2 and 5.8) and on 30-31 May 1990 (Mw = 6.9 and 6.4). The main purpose of this paper is to investigate comparatively the macroseismic effects associated with the earthquake doublet of 2016 and to analyze possible correlations with source characteristics, acceleration distribution and focal mechanism. Macroseismic information is collected using the on-line questionnaires from the websites of the National Institute for Earth Physics (NIEP) and of the European Mediterranean Seismological Center (EMSC). The two earthquakes were felt over an extended area covering most of Romania, the north of Bulgaria, the Republic of Moldova and the south of Ukraine. We estimate the maximum observed intensity at V (MSK-64 scale). Although the two events have similar locations, times of occurrence and focal mechanisms, significant differences were reported in the way they were felt: in September 2016 the effects were stronger toward the NE (Moldova) and SE (Dobrogea), while in December 2016 they were stronger toward the NW (Transylvania) and SW (Romanian Plain). Possible source effects (complexity, rupture size) are investigated in this respect.
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos
2015-04-01
Despite the extreme complexity that characterizes the earthquake generation process, simple phenomenology seems to apply to the collective properties of seismicity. The best known example is the Gutenberg-Richter relation. Short- and long-term clustering, power-law scaling and scale-invariance are exhibited in the spatio-temporal evolution of seismicity, providing evidence for earthquakes as a nonlinear dynamic process. Regarding the physics of "many" earthquakes and how it can be derived from first principles, one may wonder: how can the collective properties of the set formed by all earthquakes in a given region be derived, and how does the structure of seismicity depend on its elementary constituents, the earthquakes? What are these properties? The physics of many earthquakes has to be studied with a different approach than the physics of one earthquake, making the use of statistical physics necessary to understand the collective properties of earthquakes. A natural question then arises: what type of statistical physics is appropriate to describe effects ranging from the microscale and crack-opening level to the level of large earthquakes? An answer could be non-extensive statistical physics, introduced by Tsallis (1988), as the appropriate methodological tool to describe entities with (multi)fractal distributions of their elements and where long-range interactions or intermittency are important, as in fracturing phenomena and earthquakes. In the present work, we review some fundamental properties of earthquake physics and how these are derived by means of non-extensive statistical physics. The aim is to understand aspects of the underlying physics that lead to the evolution of the earthquake phenomenon, introducing the new topic of non-extensive statistical seismology. This research has been funded by the European Union (European Social Fund) and Greek national resources under the framework of the "THALES Program: SEISMO FEAR HELLARC" project.
References
F. Vallianatos, "A non-extensive approach to risk assessment", Nat. Hazards Earth Syst. Sci., 9, 211-216, 2009.
F. Vallianatos and P. Sammonds, "Is plate tectonics a case of non-extensive thermodynamics?", Physica A: Statistical Mechanics and its Applications, 389(21), 4989-4993, 2010.
F. Vallianatos, G. Michas, G. Papadakis and P. Sammonds, "A non-extensive statistical physics view to the spatiotemporal properties of the June 1995, Aigion earthquake (M6.2) aftershock sequence (West Corinth rift, Greece)", Acta Geophysica, 60(3), 758-768, 2012.
F. Vallianatos and L. Telesca, "Statistical mechanics in earth physics and natural hazards" (editorial), Acta Geophysica, 60(3), 499-501, 2012.
F. Vallianatos, G. Michas, G. Papadakis and A. Tzanis, "Evidence of non-extensivity in the seismicity observed during the 2011-2012 unrest at the Santorini volcanic complex, Greece", Nat. Hazards Earth Syst. Sci., 13, 177-185, 2013.
F. Vallianatos and P. Sammonds, "Evidence of non-extensive statistical physics of the lithospheric instability approaching the 2004 Sumatran-Andaman and 2011 Honshu mega-earthquakes", Tectonophysics, 590, 52-58, 2013.
G. Papadakis, F. Vallianatos and P. Sammonds, "Evidence of Nonextensive Statistical Physics behavior of the Hellenic Subduction Zone seismicity", Tectonophysics, 608, 1037-1048, 2013.
G. Michas, F. Vallianatos and P. Sammonds, "Non-extensivity and long-range correlations in the earthquake activity at the West Corinth rift (Greece)", Nonlin. Processes Geophys., 20, 713-724, 2013.
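The building block of the Tsallis (1988) formalism invoked above is the q-exponential, which replaces the Boltzmann-Gibbs exponential and produces the power-law tails seen in seismicity statistics. A minimal sketch of the function itself (parameter values are illustrative, not fits from the cited studies):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)).

    Reduces to exp(x) as q -> 1; for q > 1 and x < 0 it decays as a
    power law, i.e. much more slowly than the ordinary exponential.
    Defined as 0 where the bracket becomes non-positive.
    """
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# q -> 1 recovers Boltzmann-Gibbs statistics; q = 1.5 gives a heavy tail.
print(q_exp(-1.0, 1.0))    # equals e^-1
print(q_exp(-10.0, 1.5))   # far larger than e^-10: a power-law tail
```

Cumulative frequency-magnitude distributions in non-extensive seismology are built from expressions of this q-exponential form, with q estimated from regional catalogues.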
NASA Astrophysics Data System (ADS)
Rolland, Lucie M.; Vergnolle, Mathilde; Nocquet, Jean-Mathieu; Sladen, Anthony; Dessa, Jean-Xavier; Tavakoli, Farokh; Nankali, Hamid Reza; Cappa, FréDéRic
2013-06-01
It has previously been suggested that ionospheric perturbations triggered by large dip-slip earthquakes might offer source parameter information additional to that gathered from land observations. Based on 3D modeling of GPS- and GLONASS-derived total electron content signals recorded during the 2011 Van earthquake (thrust, intra-plate event, Mw = 7.1, Turkey), we confirm that coseismic ionospheric signals do contain important information about the earthquake source, namely its slip mode. Moreover, we show that part of the ionospheric signal (initial polarity and amplitude distribution) is not related to the earthquake source, but is instead controlled by the geomagnetic field and the geometry of the Global Navigation Satellite System satellite constellation. Ignoring these non-tectonic effects would lead to an incorrect description of the earthquake source. Thus, our work emphasizes the caution that should be exercised when analyzing ionospheric signals for earthquake source studies.
Earthquake nucleation on faults with rate- and state-dependent strength
Dieterich, J.H.
1992-01-01
Dieterich, J.H., 1992. Earthquake nucleation on faults with rate- and state-dependent strength. In: T. Mikumo, K. Aki, M. Ohnaka, L.J. Ruff and P.K.P. Spudich (Editors), Earthquake Source Physics and Earthquake Precursors. Tectonophysics, 211: 115-134. Faults with rate- and state-dependent constitutive properties reproduce a range of observed fault slip phenomena, including spontaneous nucleation of slip instabilities at stresses above some critical stress level and recovery of strength following slip instability. Calculations with a plane-strain fault model with spatially varying properties demonstrate that accelerating slip precedes instability and becomes localized to a fault patch. The dimensions of the fault patch follow scaling relations for the minimum critical length for unstable fault slip. The critical length is a function of normal stress, loading conditions and constitutive parameters, which include Dc, the characteristic slip distance. If slip starts on a patch that exceeds the critical size, the length of the rapidly accelerating zone tends to shrink to the characteristic size as the time of instability approaches. Solutions have been obtained for a uniform, fixed-patch model that are in good agreement with results from the plane-strain model. Over a wide range of conditions above the steady-state stress, the logarithm of the time to instability decreases linearly as the initial stress increases. Because nucleation patch length and premonitory displacement are both proportional to Dc, the moment of premonitory slip scales as Dc^3. The scaling of Dc is currently an open question. Unless Dc for earthquake faults is significantly greater than that observed on laboratory faults, premonitory strain arising from the nucleation process for earthquakes may be too small to detect using current observation methods.
If we exclude the possibility that Dc in the nucleation zone controls the magnitude of the subsequent earthquake, then the source dimensions of the smallest earthquakes in a region provide an upper limit for the size of the nucleation patch. © 1992.
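The constitutive law underlying the abstract can be written, in the Dieterich form, as mu = mu0 + a ln(V/V0) + b ln(V0*theta/Dc), with steady-state state variable theta = Dc/V. A minimal sketch; the parameter values (a, b, Dc, V0) are illustrative laboratory-scale numbers, not values from the paper:

```python
import math

def rs_friction(V, theta, mu0=0.6, a=0.010, b=0.015, V0=1e-6, Dc=1e-5):
    """Rate- and state-dependent friction coefficient (Dieterich form).

    V     : slip rate (m/s)
    theta : state variable (s)
    Dc    : characteristic slip distance (m)
    Parameter values are illustrative, not taken from the paper.
    """
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

def steady_state_theta(V, Dc=1e-5):
    """State variable at steady state: theta_ss = Dc / V."""
    return Dc / V

# At steady state mu_ss = mu0 + (a - b) ln(V/V0): with b > a the fault
# weakens with increasing slip rate, a necessary condition for the
# spontaneous nucleation of slip instabilities described above.
mu_slow = rs_friction(1e-6, steady_state_theta(1e-6))
mu_fast = rs_friction(1e-4, steady_state_theta(1e-4))
print(mu_slow, mu_fast)  # mu_fast < mu_slow for this (a, b) pair
```

Velocity-strengthening parameter choices (a > b) would instead suppress instability, which is why the sign of (a - b) controls whether a patch can nucleate an earthquake.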
The 26 December 2004 tsunami source estimated from satellite radar altimetry and seismic waves
NASA Technical Reports Server (NTRS)
Song, Tony Y.; Ji, Chen; Fu, L. -L.; Zlotnicki, Victor; Shum, C. K.; Yi, Yuchan; Hjorleifsdottir, Vala
2005-01-01
The 26 December 2004 Indian Ocean tsunami was the first earthquake tsunami of its magnitude to occur since the advent of both digital seismometry and satellite radar altimetry, and both independently recorded the event from different physical aspects. The seismic data were then used to estimate the earthquake fault parameters, and a three-dimensional ocean general circulation model (OGCM) coupled with the fault information was used to simulate the satellite-observed tsunami waves. Here we show that these two datasets consistently constrain the tsunami source using independent methodologies of seismic waveform inversion and ocean modeling. Cross-examining the two independent results confirms that the slip function is the most important condition controlling the tsunami strength, while the geometry and the rupture velocity of the fault plane determine the spatial patterns of the tsunami.
Mega-thrust and Intra-slab Earthquakes Beneath Tokyo Metropolitan Area
NASA Astrophysics Data System (ADS)
Hirata, N.; Sato, H.; Koketsu, K.; Hagiwara, H.; Wu, F.; Okaya, D.; Iwasaki, T.; Kasahara, K.
2006-12-01
In central Japan the Philippine Sea plate (PSP) subducts beneath the Tokyo metropolitan area, the Kanto region, where it causes mega-thrust earthquakes such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9), which caused 105,000 fatalities. The vertical proximity of this downgoing lithospheric plate is of concern because the greater Tokyo urban region has a population of 42 million and is the center of approximately 40% of the nation's economic activity. A M7+ earthquake in this region at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions; the Earthquake Research Committee of Japan evaluates the probability of such an M7+ earthquake at 70% within 30 years. In 2002, a consortium of universities and government agencies in Japan started the Special Project for Earthquake Disaster Mitigation in Urban Areas, a project to improve information needed for seismic hazard analyses of the largest urban centers. Assessment in Kanto of the seismic hazard produced by PSP mega-thrust earthquakes requires identification of all significant faults, possible earthquake scenarios and rupture behavior, regional characterization of the PSP geometry and of the physical properties of the overlying Honshu arc (e.g., seismic wave velocities, densities, attenuation), and local near-surface seismic site effects. Our study addresses (1) improved regional characterization of the PSP geometry based on new deep seismic reflection profiles (Sato et al., 2005), reprocessed offshore profiles (Kimura et al., 2005) and a dense seismic array in the Boso Peninsula (Hagiwara et al., 2006), and (2) identification of asperities of the mega-thrust at the top of the PSP. We qualitatively examine the relationship between seismic reflections and asperities inferred from reflection physical properties.
We also discuss the relation between deformation of the PSP and intra-slab M7+ earthquakes: the PSP is subducting beneath the Honshu arc and also colliding with the Pacific plate, and both subduction and collision contribute to the active seismicity of the Kanto region. We present a high-resolution tomographic image showing a low-velocity zone that suggests possible internal failure of the slab: a source region for M7+ intra-slab earthquakes. Our study contributes a new assessment of the seismic hazard in the Tokyo metropolitan area. tokyo.ac.jp/daidai/index-J.html
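The 70%-in-30-years figure quoted above can be translated into an equivalent annual rate if one assumes time-independent (Poisson) occurrence; this is a simplification of the renewal models committees actually use, shown here only to make the number concrete:

```python
import math

def annual_rate(prob, years):
    """Poisson annual rate implied by a stated occurrence probability:
    prob = 1 - exp(-rate * years)  =>  rate = -ln(1 - prob) / years."""
    return -math.log(1.0 - prob) / years

def prob_in_window(rate, years):
    """Probability of at least one event in a window, Poisson assumption."""
    return 1.0 - math.exp(-rate * years)

lam = annual_rate(0.70, 30.0)   # the committee's 70%-in-30-years figure
print(round(lam, 4))            # ~0.04 events per year
print(round(prob_in_window(lam, 10.0), 2))  # ~0.33 in any 10-year window
```

Under this assumption the 30-year probability maps back exactly to 0.70, and shorter windows scale accordingly.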
NASA Astrophysics Data System (ADS)
Gunawan, I.; Cummins, P. R.; Ghasemi, H.; Suhardjono, S.
2012-12-01
Indonesia is very prone to natural disasters, especially earthquakes, due to its location in a tectonically active region. In September-October 2009 alone, intraslab and crustal earthquakes caused the deaths of thousands of people, severe infrastructure destruction and considerable economic loss. Thus, both intraslab and crustal earthquakes are important sources of earthquake hazard in Indonesia, and analysis of response spectra for these earthquakes is needed to yield more detail about earthquake properties. For both types of earthquakes, we have analysed available Indonesian seismic waveform data to constrain source and path parameters - i.e., low-frequency spectral level, Q and corner frequency - at reference stations that appear to be little influenced by site response. We have carried out these analyses for the main shocks as well as several aftershocks, and we obtain corner frequencies that are reasonably consistent with the constant stress drop hypothesis. Using these results, we consider extracting information about site response from other stations of the Indonesian strong motion network that appear to be strongly affected by site response. Such site response data, as well as earthquake source parameters, are important for assessing earthquake hazard in Indonesia.
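The source and path parameters named above (low-frequency spectral level, corner frequency, Q) enter through the standard omega-squared spectral model with whole-path attenuation. A minimal sketch, with hypothetical parameter values rather than values fitted to Indonesian data:

```python
import math

def brune_spectrum(f, omega0, fc, t_star=0.0):
    """Far-field displacement amplitude spectrum: an omega-squared source
    (low-frequency level omega0, corner frequency fc) multiplied by a
    path attenuation factor exp(-pi * f * t*), where t* = travel_time / Q.
    All example values are hypothetical."""
    return omega0 * math.exp(-math.pi * f * t_star) / (1.0 + (f / fc) ** 2)

# Below fc the spectrum is flat at omega0; above fc it falls off as f^-2,
# and nonzero t* (i.e. finite Q) depresses the high frequencies further.
print(brune_spectrum(0.1, omega0=1.0, fc=2.0))   # close to 1.0 (flat part)
print(brune_spectrum(20.0, omega0=1.0, fc=2.0))  # ~f^-2 decay
```

Fitting omega0, fc and t* to observed spectra at reference stations, then dividing them out at other stations, is the usual route to the site-response terms the abstract describes.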
NASA Astrophysics Data System (ADS)
Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing
2018-03-01
We treat the Earth's crust and mantle as large-scale discrete matter based on the principles of granular physics and existing experimental observations. The main outcomes are as follows. A granular model of the structure and movement of the Earth's crust and mantle is established. A formation mechanism for the tectonic forces that cause earthquakes, and a model for the propagation of precursory information, are proposed. The properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and principles for detecting effective seismic precursors are elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the Earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data basis of earthquake observations has expanded vastly now that the two powerful Sentinel-1 SAR sensors are in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced: data error propagation is now widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation.
Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models, and 1D-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure large flexibility in the parametrization of medium models (e.g. 1D to 3D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy Noether grant.
NASA Astrophysics Data System (ADS)
Zettergren, M. D.; Snively, J. B.; Inchin, P.; Komjathy, A.; Verkhoglyadova, O. P.
2017-12-01
Ocean and solid earth responses during earthquakes are a significant source of large-amplitude acoustic and gravity waves (AGWs) that perturb the overlying ionosphere-thermosphere (IT) system. IT disturbances are routinely detected following large earthquakes (M > 7.0) via GPS total electron content (TEC) observations, which often show acoustic wave (~3-4 min periods) and gravity wave (~10-15 min) signatures with amplitudes of 0.05-2 TECU. In cases of very large earthquakes (M > 8.0) the persisting acoustic waves are estimated to have 100-200 m/s compressional velocities in the conducting ionospheric E and F regions and should generate significant dynamo currents and magnetic field signatures. Indeed, some recent reports (e.g. Hao et al., 2013, JGR, 118, 6) show evidence for magnetic fluctuations, apparently related to AGWs, following recent large earthquakes. However, very little quantitative information is available on (1) the detailed spatial and temporal dependence of these magnetic fluctuations, which are usually observed at a small number of irregularly arranged stations, and (2) the relation of these signatures to TEC perturbations in terms of relative amplitudes, frequency and timing for different events. This work investigates the space- and time-dependent behavior of both TEC and magnetic fluctuations following recent large earthquakes, with the aim of improving physical understanding of these perturbations via detailed, high-resolution, two- and three-dimensional modeling case studies with a coupled neutral-atmosphere and ionosphere model, MAGIC-GEMINI (Zettergren and Snively, 2015, JGR, 120, 9). We focus on cases inspired by the large Chilean earthquakes of the past decade (viz., the M > 8.0 earthquakes of 2010 and 2015) to constrain the sources for the model, i.e. size, frequency, amplitude and timing, based on available information from ocean buoy and seismometer data.
TEC data are used to validate source amplitudes and to constrain background ionospheric conditions. Preliminary comparisons against available magnetic field and TEC data from these events provide evidence, albeit limited and localized, that supports the validity of the spatially resolved simulation results.
Monitoring Seismic Velocity Change to Explore the Earthquake Seismogenic Structures
NASA Astrophysics Data System (ADS)
Liao, C. F.; Wen, S.; Chen, C.
2017-12-01
Studying spatio-temporal variations of subsurface velocity structures is still challenging work, but it can provide important information not only on the geometry of a fault but also on the rheological changes induced by a strong earthquake. In 1999, the disastrous Chi-Chi earthquake (Mw 7.6; Chi-Chi EQ) occurred in central Taiwan and had great impacts on Taiwan's society. The major objective of this research is therefore to investigate whether rheological change on a fault can be associated with the seismogenic process before a strong earthquake. In addition, whether the subsurface velocity structure returned to its steady state after the Chi-Chi EQ is another issue addressed in this study. For these purposes, we have applied a 3D tomographic technique to obtain P- and S-wave velocity structures in central Taiwan using travel time data provided by the Central Weather Bureau (CWB). One major advantage of this method is that we can include out-of-network data to improve the resolution of velocity structures at deeper depths in our study area. The results show that the temporal variations of Vp are less significant than those of Vs (or the Vp/Vs ratio), and Vp is not prominently perturbed before and after the occurrence of the Chi-Chi EQ. However, the Vs (or Vp/Vs ratio) structure in the source area shows significant spatio-temporal differences before and after the mainshock. Before the mainshock, Vs began to decrease (and the Vp/Vs ratio to increase) at the hanging wall of the Chelungpu fault, which may be induced by an increasing density of microcracks and fluid. In the vicinity of the Chi-Chi EQ source area, however, Vs was increasing (and the Vp/Vs ratio decreasing), a phenomenon that may be owing to the closing of cracks or the migration of fluid. Due to the different physical characteristics around the source area, a strong earthquake may nucleate preferentially at the junction between these zones.
Our findings suggest that continuously monitoring the Vp and Vs (or Vp/Vs ratio) structures in high seismic potential zones is an important task that can help reduce the seismic hazard of a future large earthquake.
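A back-of-envelope way to see where Vp/Vs estimates come from is the classical Wadati construction: for a common origin time, ts - tp = (Vp/Vs - 1) * tp, so the slope of S-P delay against P travel time gives the ratio. A minimal sketch on synthetic arrival times (not Chi-Chi catalogue data); full tomography, as in the study above, resolves the ratio in 3D rather than as one bulk number:

```python
# Synthetic P and S arrival times (seconds after origin) for four events,
# constructed to be consistent with Vp/Vs = 1.73.
tp = [2.0, 4.0, 6.0, 8.0]
ts = [3.46, 6.92, 10.38, 13.84]

# ts - tp = (Vp/Vs - 1) * tp: least-squares slope of a line through the
# origin, slope = sum(tp * (ts - tp)) / sum(tp^2).
num = sum(p * (s - p) for p, s in zip(tp, ts))
den = sum(p * p for p in tp)
vp_vs = 1.0 + num / den
print(round(vp_vs, 2))  # 1.73
```

With real picks the scatter about the line carries the information: systematic deviations in one sub-volume or time window are exactly the Vp/Vs anomalies the abstract tracks.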
Source parameters derived from seismic spectrum in the Jalisco block
NASA Astrophysics Data System (ADS)
Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.
2012-12-01
Directly measuring the dimensions of an earthquake fault is a complicated task; a better approach is to use the spectrum of the seismic waves. With this method we can estimate the dimensions of the fault, the stress drop and the seismic moment. The study area comprises the complex tectonic configuration of the Jalisco block and the subduction of the Rivera plate beneath the North American plate, which causes some of the most harmful earthquakes and other related natural disasters to occur in Jalisco. It is therefore important to monitor and perform studies that help to understand the physics of the earthquake rupture mechanism in the area. The main purpose of this study is to estimate earthquake seismic source parameters. The data were recorded by the MARS (Mapping the Rivera Subduction Zone) and RESAJ networks. MARS comprised 51 stations deployed in the Jalisco block, which is delimited by the Mesoamerican trench to the west, the Colima graben to the south and the Tepic-Zacoalco rift to the north, and operated from January 1, 2006 until December 31, 2007. From this network 104 events were taken, with magnitudes ranging from 3 to 6.5 mb. RESAJ has 10 stations within the state of Jalisco and has been recording continuously since October 2011. We first remove the trend, the mean and the instrument response, then manually pick the S wave; the multitaper method is used to obtain the spectrum of this wave and thereby estimate the corner frequency and the spectral level. We substitute the obtained values into the equations of the Brune model to calculate the source parameters. Doing this we obtained the following results: the source radius was between 0.1 and 2 km, and the stress drop was between 0.1 and 2 MPa.
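The final step described above, from spectral level and corner frequency to Brune source parameters, uses the standard relations M0 = 4*pi*rho*beta^3*R*Omega0 / F, r = 2.34*beta / (2*pi*fc) and stress drop = (7/16)*M0/r^3. A minimal sketch; the medium, distance and radiation-pattern values are illustrative assumptions, not values used in the study:

```python
import math

def brune_source_params(omega0, fc, beta=3500.0, rho=2700.0,
                        R=50e3, rad=0.63):
    """Brune-model source parameters from spectral observables.

    omega0 : low-frequency displacement spectral level (m*s)
    fc     : corner frequency (Hz)
    beta   : shear-wave speed (m/s); rho : density (kg/m^3)
    R      : hypocentral distance (m); rad : mean S-wave radiation coefficient
    Default medium/distance values are illustrative assumptions.
    """
    m0 = 4.0 * math.pi * rho * beta ** 3 * R * omega0 / rad  # seismic moment, N*m
    r = 2.34 * beta / (2.0 * math.pi * fc)                   # source radius, m
    dsigma = (7.0 / 16.0) * m0 / r ** 3                      # stress drop, Pa
    return m0, r, dsigma

# A small event with fc = 5 Hz yields a source radius of a few hundred
# meters and a sub-MPa stress drop for these hypothetical inputs.
m0, r, dsigma = brune_source_params(omega0=1e-7, fc=5.0)
print(r, dsigma / 1e6)  # radius in m, stress drop in MPa
```

Sub-kilometre radii and stress drops of order 0.1-2 MPa, as reported in the abstract, fall out of exactly this kind of substitution.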
Earthquake cycles and physical modeling of the process leading up to a large earthquake
NASA Astrophysics Data System (ADS)
Ohnaka, Mitiyasu
2004-08-01
A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle of a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within a framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing the methodology for forecasting large earthquakes.
Prevention of strong earthquakes: Goal or utopia?
NASA Astrophysics Data System (ADS)
Mukhamediev, Sh. A.
2010-11-01
In the present paper, we consider ideas suggesting various kinds of industrial impact on the close-to-failure block of the Earth’s crust in order to break a pending strong earthquake (PSE) into a number of smaller quakes or aseismic slips. Among the published proposals on the prevention of a forthcoming strong earthquake, methods based on water injection and vibro influence merit greater attention as they are based on field observations and the results of laboratory tests. In spite of this, the cited proofs are, for various reasons, insufficient to acknowledge the proposed techniques as highly substantiated; in addition, the physical essence of these methods has still not been fully understood. First, the key concept of the methods, namely, the release of the accumulated stresses (or excessive elastic energy) in the source region of a forthcoming strong earthquake, is open to objection. If we treat an earthquake as a phenomenon of a loss in stability, then, the heterogeneities of the physicomechanical properties and stresses along the existing fault or its future trajectory, rather than the absolute values of stresses, play the most important role. In the present paper, this statement is illustrated by the classical examples of stable and unstable fractures and by the examples of the calculated stress fields, which were realized in the source regions of the tsunamigenic earthquakes of December 26, 2004 near the Sumatra Island and of September 29, 2009 near the Samoa Island. Here, just before the earthquakes, there were no excessive stresses in the source regions. Quite the opposite, the maximum shear stresses τmax were close to their minimum value, compared to τmax in the adjacent territory. In the present paper, we provide quantitative examples that falsify the theory of the prevention of PSE in its current form. 
It is shown that the measures for the prevention of PSE, even when successful for an already existing fault, can trigger or accelerate a catastrophic earthquake because of dynamic fault propagation in the intact region. Some additional aspects of prevention of PSE are discussed. We conclude that in the near future, it is too early to consider the problem of prevention of a forthcoming strong earthquake as a practical task; otherwise, the results can prove to be very different from the desired ones. Nevertheless, it makes sense to continue studying this problem. The theoretical research and experimental investigation of the structure and properties of the regions where the prevention of a forthcoming strong earthquake is planned in the future are of primary importance.
NASA Astrophysics Data System (ADS)
Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.
2010-12-01
A three-dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to hypothetical subduction earthquakes. The model is constructed from numerous physical explorations conducted on land and offshore and from observational studies of natural earthquakes; all available information is incorporated to describe the crustal and sedimentary structure. Figure 1 shows an example cross section with P-wave velocities. The model has been revised through numerous simulations of small to moderate earthquakes so as to achieve good agreement with observed arrival times, amplitudes and waveforms, including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added above the seismological basement to reproduce the observed records, and the thickness of these layers has been adjusted through iterative analysis. The final result agrees well with results from other physical explorations, e.g. the gravity anomaly. We are planning long-period (about 2 to 10 s or longer) simulations of ground motion due to the hypothetical Nankai earthquake with the 3-D velocity structure model. As a first step, we will simulate the observed ground motions of the latest event, which occurred in 1946, to check the source model and the newly developed velocity structure model. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). Figure 1: An example cross section with P-wave velocities. Figure 2: Observed (dashed line) and simulated (solid line) waveforms due to a small earthquake.
Pseudo-dynamic source characterization accounting for rough-fault effects
NASA Astrophysics Data System (ADS)
Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin
2016-04-01
Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications, so simplified synthetic kinematic models are often used. These are usually based on rigorous statistical analysis of rupture models inferred by inversions of seismic and/or geodetic data; due to the limited resolution of the inversions, however, such models are valid only in the low-frequency range. In addition to slip, parameters such as rupture-onset time, rise time and source time function are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures, capturing the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.
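Rough-fault geometries of the kind simulated above are commonly generated as self-affine profiles: spectral amplitudes decaying as a power of wavenumber, with random phases. A minimal 1-D sketch using random-phase Fourier synthesis; the Hurst exponent and mode counts are illustrative, not the parameterization used in the study:

```python
import math
import random

def rough_profile(n=512, length=1.0, hurst=0.8, n_modes=64, seed=0):
    """1-D self-affine fault roughness by random-phase spectral synthesis.

    Mode amplitudes decay as k^-(hurst + 0.5), so the profile's power
    spectrum falls off as k^-(1 + 2*hurst), i.e. a self-affine surface
    with Hurst exponent `hurst`. Parameter choices are illustrative.
    """
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_modes)]
    xs = [length * i / n for i in range(n)]
    prof = []
    for x in xs:
        z = sum(k ** -(hurst + 0.5)
                * math.cos(2.0 * math.pi * k * x / length + phases[k - 1])
                for k in range(1, n_modes + 1))
        prof.append(z)
    return xs, prof

xs, prof = rough_profile()  # long wavelengths dominate for hurst > 0
```

Perturbing a planar fault by such a profile (suitably scaled) is what injects the geometric heterogeneity that the dynamic-rupture simulations convert into high-frequency radiation.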
NASA Astrophysics Data System (ADS)
Del Gaudio, S.; Lancieri, M.; Hok, S.; Satriano, C.; Chartier, T.; Scotti, O.; Bernard, P.
2016-12-01
Predicting realistic ground motion for potential future earthquakes is a long-standing task for seismologists and the main objective of seismic hazard assessment. On one hand, numerical simulations have become more and more accurate and several different techniques have been developed; on the other hand, ground motion prediction equations (GMPEs) have become a powerful instrument, owing to great improvements in strong-motion networks that provide a large amount of data. Nevertheless, GMPEs do not represent the whole variety of source processes, and this can lead to incorrect estimates, especially in near-fault conditions, because of the lack of records of large earthquakes at short distances. In such cases, physics-based ground motion simulations can be a valid tool to complement prediction equations for scenario studies, provided that both source and propagation are accurately described. We present here a comparison between numerical simulations performed in near-fault conditions using two different kinematic source models, based on different assumptions and parameterizations: the "k-2 model" and the "fractal model". Wave propagation is taken into account using hybrid Green's functions (HGF), which couple numerical Green's functions with an empirical Green's function (EGF) approach. The advantage of this technique is that it does not require very detailed knowledge of the propagation medium, but it does require high-quality records of small earthquakes in the target area. The first application we show is to the 2009 M 6.3 L'Aquila earthquake, where the mainshock records provide a benchmark for the synthetic waveforms. Here we can clearly observe the limitations of these techniques and investigate which physical parameters effectively control the ground-motion level.
The second application is a blind test on the Upper Rhine Graben (URG), where active faults producing microseismic activity lie very close to sites of interest, requiring careful investigation of the seismic hazard. Finally, we will perform a probabilistic seismic hazard analysis (PSHA) for the URG using numerical simulations to define input ground motion for different scenarios, and compare the results with a classical probabilistic study based on GMPEs.
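The "k-2 model" mentioned above refers to kinematic sources whose slip amplitude spectrum decays as k^-2 at high wavenumbers. A minimal sketch of that idea, not the authors' implementation; the corner-wavenumber choice and normalization are illustrative assumptions:

```python
import numpy as np

def k2_slip(nx=128, nz=64, dx=0.5, mean_slip=1.0, seed=1):
    """Random slip distribution whose amplitude spectrum is flat below a
    corner wavenumber set by the fault dimension and falls off as k^-2
    above it (a sketch of the 'k-squared' kinematic source idea)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx)
    kz = np.fft.fftfreq(nz, d=dx)
    KX, KZ = np.meshgrid(kx, kz)
    k = np.hypot(KX, KZ)                       # radial wavenumber
    kc = 1.0 / (nx * dx)                       # corner wavenumber ~ 1/fault length
    amp = 1.0 / (1.0 + (k / kc) ** 2)          # flat below kc, k^-2 above
    phase = rng.uniform(0.0, 2.0 * np.pi, k.shape)  # random phases
    slip = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    slip -= slip.min()                         # enforce non-negative slip
    slip *= mean_slip / slip.mean()            # normalize to target mean slip
    return slip
```

The random phases make each realization a different scenario with the same spectral falloff, which is how such models span source variability.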
Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty
Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon
2006-01-01
Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WG02). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, and data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined in the WG02 model. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: What would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty.
Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.
Shallow seismicity in volcanic system: what role does the edifice play?
NASA Astrophysics Data System (ADS)
Bean, Chris; Lokmer, Ivan
2017-04-01
Seismicity in the upper two kilometres of volcanic systems is complex and very diverse in nature. Its origins lie in the multi-physics nature of the source processes and in the often extreme heterogeneity of the near-surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. A further complicating factor is that we are often in the seismic near-field, so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional explanation for the diverse nature of shallow seismic signals calls on the direct action of fluids in the system; fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique, and that models based on a diverse range of quasi-brittle failure of low-stiffness near-surface structures are equally valid from a data-fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.
Source Analysis of Bucaramanga Nest Intermediate-Depth Earthquakes
NASA Astrophysics Data System (ADS)
Prieto, G. A.; Pedraza, P.; Dionicio, V.; Levander, A.
2016-12-01
Intermediate-depth earthquakes are those that occur at depths of 50 to 300 km in subducting lithosphere and can occasionally be destructive. Despite their ubiquity in earthquake catalogs, their physical mechanism remains unclear because ambient temperatures and pressures at such depths are expected to lead to ductile flow, rather than brittle failure, as a response to stress. Intermediate-depth seismicity rates vary substantially worldwide, even within a single subduction zone, with highly clustered seismicity in some cases (Vrancea, Hindu Kush, etc.). One such place is known as the Bucaramanga Nest (BN), one of the highest concentrations of intermediate-depth earthquakes in the world. Previous work on these earthquakes has shown that: (1) focal mechanisms vary substantially within a very small volume; (2) radiation efficiency is small for M < 5 events; (3) repeating and reverse-polarity events are present; and (4) larger events show complex behavior with two distinct rupture stages. Owing to ongoing efforts by the Colombian Geological Survey (SGC) to densify the national seismic network, it is now possible to better constrain the rupture behavior of these events. We will present results from focal mechanisms based on waveform inversion as well as polarity and S/P amplitude ratios. These results will be contrasted with the detection and classification of repeating families. For the larger events we will determine source parameters and radiation efficiencies. Preliminary results show that reverse-polarity events are present and that two main focal mechanisms, with their corresponding reverse-polarity events, are dominant. Our results have significant implications for our understanding of intermediate-depth earthquakes and the stress conditions responsible for this unusual cluster of seismicity.
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they do not permit physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault-system earthquake simulators based on frictional stick-slip behavior have been used to study the effects of stress heterogeneity, rheological heterogeneity, and geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of the characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault-system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages on independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze the spatial and temporal clustering of events and the characteristics of system dynamics by means of the physical parameters of the two approaches.
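The second formulation, instantaneous frictional weakening with causal stress transfer, can be caricatured with a toy one-dimensional patch model. This sketch is illustrative only: the patch count, strength levels, loading rate, and transfer fraction are invented, and real simulators use elastic half-space stress-transfer kernels rather than nearest-neighbor transfer.

```python
import numpy as np

def simulate_catalog(n_patches=200, steps=5000, load=0.01, transfer=0.45, seed=2):
    """Toy cellular quasi-dynamic fault: patches with heterogeneous static
    strength, instantaneous weakening to a dynamic strength on failure, and
    scheduled (causal) stress transfer to nearest neighbors."""
    rng = np.random.default_rng(seed)
    strength_s = 1.0 + 0.2 * rng.random(n_patches)   # heterogeneous static strength
    strength_d = 0.5                                 # dynamic (weakened) strength
    stress = strength_d + (strength_s - strength_d) * rng.random(n_patches)
    catalog = []                                     # event sizes (failed patches)
    for _ in range(steps):
        stress += load                               # uniform tectonic loading
        failing = list(np.flatnonzero(stress >= strength_s))
        size = 0
        while failing:                               # causal failure cascade
            i = failing.pop()
            if stress[i] < strength_s[i]:
                continue                             # already relaxed meanwhile
            drop = stress[i] - strength_d
            stress[i] = strength_d                   # instantaneous weakening
            for j in (i - 1, i + 1):                 # transfer part of the drop
                if 0 <= j < n_patches:
                    stress[j] += transfer * drop
                    if stress[j] >= strength_s[j]:
                        failing.append(j)
            size += 1
        if size:
            catalog.append(size)
    return catalog
```

Because each failure transfers only 0.9 of its stress drop in total, cascades always terminate; the resulting size distribution is the kind of synthetic catalog such simulators are compared on.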
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake source inversions and to understand the strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises, and the corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
NASA Astrophysics Data System (ADS)
Chakraborty, Suman; Chakrabarti, Sandip Kumar; Sasmal, Sudipta
2016-07-01
An important channel of lithosphere-atmosphere-ionosphere coupling (LAIC) is the acoustic and gravity wave channel, in which atmospheric gravity waves (AGW) play the most important part. Atmospheric waves are excited by seismo-gravitational vibrations before earthquakes, and their effects on the atmosphere are sources of seismo-ionospheric coupling, manifested as perturbations in Very Low Frequency (VLF)/Low Frequency (LF) signal amplitude and phase. For our study, we chose the recent major earthquakes that took place in Nepal and Imphal. The Nepal earthquake occurred on 12th May 2015 at 12:50 pm local time (07:05 UTC) with magnitude M = 7.3 and depth 10 km (6.21 miles), southeast of Kodari. The Imphal earthquake occurred on 4th January 2016 at 4:35 am local time (23:05 UTC, 3rd January) with magnitude M = 6.7 and depth 55 km (34.2 miles). The data were collected at the Ionospheric and Earthquake Research Centre (IERC) of the Indian Centre for Space Physics (ICSP) from signals transmitted by the JJI station in Japan. We performed both Fast Fourier Transform (FFT) and wavelet analyses on the VLF data for a few days before and after the major earthquakes. For both earthquakes, we observed wave-like structures with periods of almost an hour before and after the earthquake day. The wave-like oscillations after the earthquakes may be due to aftershock effects. We also observed that the amplitude of the wave-like structures depends on the location of the epicenter relative to the transmitting and receiving points, and also on the depth of the earthquake.
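An FFT-based search for hour-scale periodicity of the kind described can be sketched as follows; the data here are synthetic, and the one-minute sampling interval and noise level are illustrative assumptions, not IERC parameters.

```python
import numpy as np

def dominant_period(signal, dt):
    """Return the dominant oscillation period (in the same units as dt)
    of a detrended time series via the FFT amplitude spectrum."""
    x = signal - np.mean(signal)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=dt)
    peak = 1 + np.argmax(spec[1:])        # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# synthetic VLF-amplitude-like series: a 60-minute wave plus noise
dt = 1.0                                  # one sample per minute (assumed)
t = np.arange(12 * 60)                    # 12 hours of data
x = np.sin(2 * np.pi * t / 60.0) + 0.3 * np.random.default_rng(3).normal(size=t.size)
print(round(dominant_period(x, dt)))      # -> 60 (minutes)
```

In practice a wavelet transform, as the authors also use, would localize when such an oscillation appears, which a single FFT cannot do.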
Recurrent slow slip event likely hastened by the 2011 Tohoku earthquake
Hirose, Hitoshi; Kimura, Hisanori; Enescu, Bogdan; Aoi, Shin
2012-01-01
Slow slip events (SSEs) are a mode of fault deformation distinct from the fast faulting of regular earthquakes. Such transient episodes have been observed at plate boundaries in a number of subduction zones around the globe. The SSEs near the Boso Peninsula, central Japan, are among the best documented, with the longest repeating history (almost 30 y) and a recurrence interval of 5 to 7 y. A remarkable characteristic of these slow slip episodes is the accompanying earthquake swarm activity. Our stable, long-term seismic observations enable us to detect SSEs using the recorded earthquake catalog, by treating an earthquake swarm as a proxy for a slow slip episode. Six recurrent episodes have been identified in this way since 1982. The average SSE interoccurrence interval is 68 mo; however, there are significant fluctuations about this mean. While a regular cycle can be explained using a simple physical model, the mechanisms responsible for the observed fluctuations are poorly known. Here we show that the latest SSE in the Boso Peninsula was likely hastened by stress transfer from the March 11, 2011 great Tohoku earthquake. Moreover, a similar mechanism accounts for the delay of an SSE in 1990 by a nearby earthquake. The low stress buildups and drops during the SSE cycle can explain the strong sensitivity of these SSEs to stress transfer from external sources. PMID:22949688
Transparent Global Seismic Hazard and Risk Assessment
NASA Astrophysics Data System (ADS)
Smolka, Anselm; Schneider, John; Pinho, Rui; Crowley, Helen
2013-04-01
Vulnerability to earthquakes is increasing, yet advanced, reliable risk assessment tools and data are inaccessible to most, despite being a critical basis for managing risk. Also, there are few, if any, global standards that allow us to compare risk between locations. The Global Earthquake Model (GEM) is a unique collaborative effort that aims to provide organizations and individuals with tools and resources for transparent assessment of earthquake risk anywhere in the world. By pooling data, knowledge and people, GEM acts as an international forum for collaboration and exchange, and leverages the knowledge of leading experts for the benefit of society. Sharing of data, risk information, best practices, and approaches across the globe is key to assessing risk more effectively. Through global projects, open-source IT development and collaborations with more than 10 regions, leading experts are collaboratively developing unique global datasets, best practice, open tools and models for seismic hazard and risk assessment. Guided by the needs and experiences of governments, companies and citizens at large, they work in continuous interaction with the wider community. A continuously expanding public-private partnership constitutes the GEM Foundation, which drives the collaborative GEM effort. An integrated and holistic approach to risk is key to GEM's risk assessment platform, OpenQuake, which integrates all the above-mentioned contributions and will become available towards the end of 2014. Stakeholders worldwide will be able to calculate, visualise and investigate earthquake risk, capture new data and share their findings for joint learning. Homogenized information on hazard can be combined with data on exposure (buildings, population) and on their vulnerability, for loss assessment around the globe.
Furthermore, for a truly integrated view of seismic risk, users can add social vulnerability and resilience indices to maps and estimate the costs and benefits of different risk management measures. The following global data, models and methodologies will be available in the platform; some will be released to the public earlier, such as the ISC-GEM global instrumental catalogue (released January 2013). Datasets: • Global Earthquake History Catalogue [1000-1903] • Global Instrumental Catalogue [1900-2009] • Global Geodetic Strain Rate Model • Global Active Fault Database • Tectonic Regionalisation • Buildings and Population Database • Earthquake Consequences Database • Physical Vulnerability Database • Socio-Economic Vulnerability and Resilience Indicators. Models: • Seismic Source Models • Ground Motion (Attenuation) Models • Physical Exposure Models • Physical Vulnerability Models • Composite Index Models (social vulnerability, resilience, indirect loss). The models developed under the GEM framework will be combined to produce estimates of hazard and risk at a global scale. Furthermore, building on many ongoing efforts and the knowledge of scientists worldwide, GEM will integrate state-of-the-art data, models, results and open-source tools into a single platform that is to serve as a "clearinghouse" on seismic risk. The platform will continue to increase in value, in particular for use in local contexts, through contributions and collaborations with scientists and organisations worldwide.
Non-double-couple earthquakes. 1. Theory
Julian, B.R.; Miller, A.D.; Foulger, G.R.
1998-01-01
Historically, most quantitative seismological analyses have been based on the assumption that earthquakes are caused by shear faulting, for which the equivalent force system in an isotropic medium is a pair of force couples with no net torque (a 'double couple,' or DC). Observations of increasing quality and coverage, however, now resolve departures from the DC model for many earthquakes and find some earthquakes, especially in volcanic and geothermal areas, that have strongly non-DC mechanisms. Understanding non-DC earthquakes is important both for studying the process of faulting in detail and for identifying nonshear-faulting processes that apparently occur in some earthquakes. This paper summarizes the theory of 'moment tensor' expansions of equivalent-force systems and analyzes many possible physical non-DC earthquake processes. Contrary to long-standing assumption, sources within the Earth can sometimes have net force and torque components, described by first-rank and asymmetric second-rank moment tensors, which must be included in analyses of landslides and some volcanic phenomena. Non-DC processes that lead to conventional (symmetric second-rank) moment tensors include geometrically complex shear faulting, tensile faulting, shear faulting in an anisotropic medium, shear faulting in a heterogeneous region (e.g., near an interface), and polymorphic phase transformations. Undoubtedly, many non-DC earthquake processes remain to be discovered. Progress will be facilitated by experimental studies that use wave amplitudes, amplitude ratios, and complete waveforms in addition to wave polarities and thus avoid arbitrary assumptions such as the absence of volume changes or the temporal similarity of different moment tensor components.
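The standard (symmetric second-rank) moment tensor can be decomposed into isotropic, double-couple, and CLVD parts to quantify the departure from the DC model. A minimal sketch using one common convention; the percentage definitions vary in the literature, and the total-size normalization here is a crude illustrative choice:

```python
import numpy as np

def decompose(M):
    """Split a symmetric moment tensor into isotropic (ISO), double-couple
    (DC) and CLVD percentages (one standard convention among several)."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)               # deviatoric part
    d = np.linalg.eigvalsh(dev)
    d = d[np.argsort(np.abs(d))]            # sort eigenvalues by absolute size
    eps = d[0] / abs(d[2]) if d[2] != 0 else 0.0
    clvd = 200.0 * abs(eps)                 # percent CLVD of the deviatoric part
    dc = 100.0 - clvd                       # percent DC
    m0 = abs(iso) + abs(d[2])               # crude total size (ISO + deviatoric)
    iso_pct = 100.0 * abs(iso) / m0 if m0 else 0.0
    return iso_pct, dc, clvd

# pure shear faulting: deviatoric eigenvalues (1, 0, -1) -> 100% DC
M_dc = np.diag([1.0, 0.0, -1.0])
print(decompose(M_dc))                      # -> (0.0, 100.0, 0.0)
```

A nonzero isotropic percentage flags the volume changes, and a large CLVD percentage the non-shear deviatoric sources, that the abstract discusses.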
Near-Source Shaking and Dynamic Rupture in Plastic Media
NASA Astrophysics Data System (ADS)
Gabriel, A.; Mai, P. M.; Dalguer, L. A.; Ampuero, J. P.
2012-12-01
Recent well-recorded earthquakes show a high degree of complexity at the source level that severely affects the resulting ground motion in near- and far-field seismic data. In our study, we focus on investigating source-dominated near-field ground motion features from numerical dynamic rupture simulations in an elasto-visco-plastic bulk. Our aim is to contribute to a more direct connection between theoretical and computational results and field and seismological observations. Previous work showed that a diversity of rupture styles emerges from simulations on faults governed by velocity-and-state-dependent friction with rapid velocity weakening at high slip rate. For instance, growing pulses lead to re-activation of slip due to gradual stress build-up near the hypocenter, as inferred in some source studies of the 2011 Tohoku-Oki earthquake. Moreover, off-fault energy dissipation imposes physical limits on extreme ground motion by limiting peak slip rate and rupture velocity. We investigate characteristic features of near-field strong ground motion generated by dynamic in-plane rupture simulations, and present the effects of plasticity on source-process signatures, off-fault damage patterns, and ground shaking. Independent of rupture style, asymmetric damage patterns across the fault are produced that contribute to the total seismic moment, dominantly so at high angles between the fault and the maximum principal background stress. The off-fault plastic strain fields induced by transitions between rupture styles reveal characteristic signatures of the mechanical source processes during the transition. Comparing different rupture styles in elastic and elasto-visco-plastic media to identify signatures of off-fault plasticity, we find varying degrees of alteration of the near-field radiation due to plastic energy dissipation: subshear pulses suffer a greater reduction in peak particle velocity due to plasticity than cracks, and supershear ruptures are affected even more.
The occurrence of multiple rupture fronts affects the seismic potency release rate, amplitude spectra, peak particle velocity distributions, and near-field seismograms. Our simulations enable us to trace features of the source processes in synthetic seismograms, for example a re-activation of slip. Such physical models may provide starting points for future investigations of field properties of earthquake source mechanisms and natural fault conditions. In the long term, our findings may be helpful for seismic hazard analysis and the improvement of seismic source models.
Slow Earthquakes in the Alaska-Aleutian Subduction Zone Detected by Multiple Mini Seismic Arrays
NASA Astrophysics Data System (ADS)
LI, B.; Ghosh, A.; Thurber, C. H.; Lanza, F.
2017-12-01
The Alaska-Aleutian subduction zone is one of the most seismically and volcanically active plate boundaries on Earth. Compared to other subduction zones, its slow earthquakes, such as tectonic tremors (TTs) and low-frequency earthquakes (LFEs), are relatively poorly studied due to limited data availability and difficult logistics. Analysis of two months of continuous data from a mini array deployed in 2012 shows abundant tremor and LFE activity under Unalaska Island that is heterogeneously distributed [Li & Ghosh, 2017]. To better study slow earthquakes and understand their physical characteristics in the study region, we deployed a hybrid array of arrays, consisting of three well-designed mini seismic arrays and five stand-alone stations, on Unalaska Island in 2014; these were operational for between one and two years. Using the beam back-projection method [Ghosh et al., 2009, 2012], we detect continuous tremor activity for over a year when all three arrays were running. The tremor sources are located south of the Unalaska and Akutan Islands, at the eastern, down-dip edge of the rupture zone of the 1957 Mw 8.6 earthquake, and are clustered in several patches with a gap between the two major clusters. Tremors show multiple migration patterns, with propagation in both the along-strike and dip directions and a wide range of velocities. We also identify tens of LFE families and use them as templates to search for repeating LFE events with the matched-filter method. Hundreds to thousands of LFEs are detected for each family, and their activity is spatiotemporally consistent with the tremor activity. The array techniques reveal near-continuous tremor activity in this area in remarkable spatiotemporal detail.
This helps us to better recognize the physical properties of the transition zone, provides new insights into slow earthquake activity in this area, and allows us to explore its relation to local earthquakes and potential slow slip events.
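The matched-filter method referred to above slides a template waveform along continuous data and declares a detection when the normalized cross-correlation exceeds a noise-based threshold. A single-channel sketch follows; real implementations stack correlation traces across stations and components, and the MAD-based threshold of 8 is an illustrative convention, not the study's value.

```python
import numpy as np

def matched_filter(data, template, threshold=8.0):
    """Slide a normalized cross-correlation of `template` along `data` and
    return sample indices where the correlation exceeds `threshold` times
    the median absolute deviation of the correlation trace."""
    nt = template.size
    tpl = (template - template.mean()) / template.std()
    cc = np.empty(data.size - nt + 1)
    for i in range(cc.size):
        win = data[i:i + nt]
        s = win.std()
        cc[i] = np.dot(tpl, win - win.mean()) / (s * nt) if s > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(cc > threshold * mad), cc

# plant a synthetic "LFE" template in noise and recover it
rng = np.random.default_rng(4)
template = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.hanning(100)
data = 0.2 * rng.normal(size=2000)
data[700:800] += template
hits, cc = matched_filter(data, template)
print(700 in hits)                      # -> True (the planted event is found)
```

Each LFE family in the study plays the role of `template` here; repeating detections of one family trace out its spatiotemporal activity.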
Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations
NASA Astrophysics Data System (ADS)
Weng, H.; Yang, H.
2017-12-01
Dynamic rupture models can provide detailed insights into rupture physics that are valuable for assessing future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimation from near-field ground motion. However, large uncertainties in the value of the slip-weakening distance remain, mostly because of the intrinsic trade-off between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw 7.8 Nepal earthquake, combining multiple seismic observations such as high-rate cGPS data, strong-motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust estimate of the average slip-weakening distance, 0.6 m, without substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first robust determination of the slip-weakening distance on a seismogenic fault from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating peak ground acceleration (PGA). A similar approach could be applied to other great earthquakes, enabling broad estimation of dynamic parameters from a global perspective that can better reveal the intrinsic physics of earthquakes.
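The slip-weakening distance constrained here enters through the linear slip-weakening friction law, in which fault strength decays from a static to a dynamic level over a characteristic slip Dc. A sketch using the 0.6 m value from the abstract; the static and dynamic stress levels are illustrative assumptions, not the study's values.

```python
def slip_weakening_stress(slip, tau_s=10.0e6, tau_d=4.0e6, dc=0.6):
    """Linear slip-weakening friction law: strength falls linearly from the
    static level tau_s to the dynamic level tau_d (both in Pa, illustrative)
    over the slip-weakening distance dc (0.6 m, as inferred in the abstract)."""
    if slip >= dc:
        return tau_d                     # fully weakened beyond dc
    return tau_s - (tau_s - tau_d) * slip / dc
```

The trade-off the authors describe arises because a larger dc combined with a higher tau_s can produce a similar stress-versus-slip history; their multi-observation approach is what breaks that ambiguity.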
Yellowstone volcano-tectonic microseismic cycles constrain models of migrating volcanic fluids
NASA Astrophysics Data System (ADS)
Massin, F.; Farrell, J.; Smith, R. B.
2011-12-01
The objective of our research is to evaluate the source properties of extensive earthquake swarms in and around the 0.64-Myr Yellowstone caldera, Yellowstone National Park, which is also the locus of widespread hydrothermal activity and ground deformation. We use earthquake waveform data to investigate seismic-wave multiplets that occur within discrete earthquake sequences. Waveform cross-correlation coefficients are computed from data acquired at six high-quality stations, and nearly identical earthquakes are merged into multiplets. Multiplets provide important indicators of the rupture process on distinct seismogenic structures. Our multiplet database allowed evaluation of the seismic-source chronology from 1992 to 2010. We assess the evolution of micro-earthquake triggering by evaluating the evolution of earthquake rates and magnitudes. Striking differences appear between two kinds of seismic swarms: (1) swarms with a high rate of repeating earthquakes (more than 200 events per day), and (2) swarms with a low rate of repeating earthquakes (fewer than 20 events per day). The 2010 Madison Plateau swarm (western caldera) and the 2008-2009 Yellowstone Lake swarm (eastern caldera) are examples representing, respectively, cascading relaxation of a uniform stress field and a highly concentrated stress perturbation induced by migrating material. The repeating-earthquake methodology was then used to characterize the composition of the migrating material by modelling the migration time-space pattern with experimental thermo-physical simulations of the solidification of a fluid-filled propagating dike. Comparison of our results with independent GPS deformation data suggests a most likely model of rhyolitic-granitic magma intrusion along a vertical dike outlined by the pattern of earthquakes. The magma-hydrothermal mix was modeled with a temperature of 800°C-900°C and an average volumetric injection flux between 1.5 and 5 m3/s.
Our interpretation is that the Yellowstone Lake swarm was caused by magma and hydrothermal fluids migrating laterally at 1000 m per day, from ~12 km to 2 km depth, with earthquake nucleation propagating from south to north. The causative magmatic fluid came within a few km of the Earth's surface but did not reach it because of its low density contrast with the host rock. We also used multiplets for precise earthquake relocation using the P- and S-wave three-dimensional velocity models established previously for Yellowstone. Most of the repeating earthquakes are located in the northwestern part of the caldera and in the Hebgen Lake fault system, west of the caldera, which appears to be the most active multiplet generator in Yellowstone. We are also evaluating multiplets for earthquake focal-mechanism determinations and magmatic source-property studies. The anomalous multiplet-triggering zone around the Hebgen Lake fault system, for example, is also a focus for multiplet stress simulation, and we will present results on how multiplets can be used to investigate volcano-tectonic stress interactions between the pre-existing ~15-Myr Basin and Range normal faults and the superimposed effects of the 2-Myr Yellowstone volcanism on these pre-existing structures.
Atmospheric Signals Associated with Major Earthquakes. A Multi-Sensor Approach. Chapter 9
NASA Technical Reports Server (NTRS)
Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Kafatos, Menas; Taylor, Patrick
2011-01-01
We are studying the possibility of a connection between atmospheric observations recorded by several ground- and satellite-based sensors and earthquake precursors. Our main goal is to search for the existence and cause of physical phenomena related to impending earthquake activity, and to gain a better understanding of the physics of earthquakes and earthquake cycles. The catastrophic earthquake in Japan in March 2011 provided renewed interest in the important question of the existence of precursory signals preceding strong earthquakes. We demonstrate our approach based on the integration and analysis of several atmospheric and environmental parameters found to be associated with earthquakes. These observations include: thermal infrared radiation; radon/ion activities; air temperature and humidity; and the concentration of electrons in the ionosphere. We describe a possible physical link between the atmospheric observations and earthquake precursors using the latest Lithosphere-Atmosphere-Ionosphere Coupling model, one of several paradigms used to explain our observations. Initial results for the period 2003-2009 are presented from our systematic hind-cast validation studies. We present our findings of multi-sensor atmospheric precursory signals for two major earthquakes in Japan: the M6.7 Niigata-ken Chuetsu-oki earthquake of July 16, 2007 and the M9.0 great Tohoku earthquake of March 11, 2011.
Rapid determination of the energy magnitude Me
NASA Astrophysics Data System (ADS)
di Giacomo, D.; Parolai, S.; Bormann, P.; Saul, J.; Grosser, H.; Wang, R.; Zschau, J.
2009-04-01
The magnitude of an earthquake is one of the most commonly used parameters to evaluate the earthquake's damage potential. However, many magnitude scales developed over the past years have different meanings. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, that is, the radiated seismic energy ES (e.g. Bormann et al., 2002): Me = 2/3(log10 ES - 4.4). Me is more suitable than the moment magnitude Mw in describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. The calculation of ES requires the integration over frequency of the squared P-wave velocity spectrum corrected for the energy loss experienced by the seismic waves along the path from the source to the receivers. To account for the frequency-dependent energy loss, we computed spectral amplitude decay functions for different frequencies by using synthetic Green's functions (Wang, 1999) based on the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions the correction for the various propagation effects of the recorded P-wave velocity spectra is performed in a rapid and robust way, and the calculation of ES, and hence of Me, can be performed at a single station. We analyse teleseismic broadband P-wave signals in the distance range 20°-98°. We show that our procedure is suitable for implementation in rapid response systems since it could provide stable Me determinations within 10-15 minutes after the earthquake's origin time.
Indeed, we use time-variable cumulative energy windows starting 4 s after the first P-wave arrival in order to include the earthquake rupture duration, which is calculated according to Bormann and Saul (2008). We tested our procedure on a large dataset composed of about 750 globally distributed earthquakes in the Mw range 5.5-9.3 recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks. Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Bormann, P., and Saul, J. (2008). The new IASPEI standard broadband magnitude mB. Seism. Res. Lett., 79(5), 699-705. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
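The Me relation quoted above is simple enough to sketch directly. A minimal Python version follows, assuming ES is given in joules as in the Bormann et al. (2002) convention; this is an illustration of the quoted formula, not the authors' processing code:

```python
import math

def energy_magnitude(es_joules):
    """Energy magnitude Me from radiated seismic energy ES (joules),
    per the relation quoted above: Me = 2/3 * (log10(ES) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(es_joules) - 4.4)

# An event radiating ES = 1e15 J maps to Me = 2/3 * (15 - 4.4) ≈ 7.07
print(round(energy_magnitude(1e15), 2))
```

Because the relation is logarithmic, a factor-of-ten change in ES shifts Me by 2/3 of a magnitude unit.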
Tsunami Generation Modelling for Early Warning Systems
NASA Astrophysics Data System (ADS)
Annunziato, A.; Matias, L.; Ulutas, E.; Baptista, M. A.; Carrilho, F.
2009-04-01
In the frame of a collaboration between the European Commission Joint Research Centre and the Institute of Meteorology in Portugal, a complete analytical tool to support Early Warning Systems is being developed. The tool will be part of the Portuguese National Early Warning System and will also be used in the frame of the UNESCO North Atlantic Section of the Tsunami Early Warning System. The system, called Tsunami Analysis Tool (TAT), includes a worldwide scenario database that has been pre-calculated using the SWAN-JRC code (Annunziato, 2007). This code uses a simplified fault generation mechanism, and the hydraulic model is based on the SWAN code (Mader, 1988). In addition to the pre-defined scenarios, a system of computers is always ready to start a new calculation whenever a new earthquake is detected by the seismic networks (such as USGS or EMSC) and is judged capable of generating a tsunami. The calculation is performed using minimal parameters (the epicentre and the magnitude of the earthquake): the programme calculates the rupture length and rupture width by using the empirical relationships proposed by Ward (2002). The database calculations, as well as the newly generated calculations with the current conditions, are therefore available to TAT, where the real online analysis is performed. The system also allows the analysis of sea level measurements available worldwide in order to compare them and decide whether a tsunami is really occurring. Although TAT, connected with the scenario database and the online calculation system, is at the moment the only software that can support tsunami analysis on a global scale, we are convinced that the fault generation mechanism is too simplified to give a correct tsunami prediction. Furthermore, short tsunami arrival times especially require data on the tectonic features of the faults, such as strike, dip, rake and slip, in order to minimize the real-time uncertainty of rupture parameters.
Indeed, the earthquake parameters available right after an earthquake are preliminary and can be inaccurate. Determining which earthquake source parameters affect the initial height and time series of tsunamis will show the sensitivity of the tsunami time series to seismic source details. Therefore a new fault generation model will be adopted, according to the seismotectonic properties of the different regions, and finally included in the calculation scheme. In order to do this, within the collaboration framework with the Portuguese authorities, a new model is being defined, starting from the seismic sources in the North Atlantic, Caribbean and Gulf of Cadiz. As earthquakes occurring in North Atlantic and Caribbean sources may affect the Portuguese mainland and the Azores and Madeira archipelagos, these sources will also be included in the analysis. First, we have started to examine the geometries of those sources that spawn tsunamis to understand the effect of fault geometry and earthquake depth. References: Annunziato, A., 2007. The Tsunami Assessment Modelling System by the Joint Research Centre, Science of Tsunami Hazards, Vol. 26, pp. 70-92. Mader, C.L., 1988. Numerical modelling of water waves, University of California Press, Berkeley, California. Ward, S.N., 2002. Tsunamis, Encyclopedia of Physical Science and Technology, Vol. 17, pp. 175-191, ed. Meyers, R.A., Academic Press.
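The step of turning magnitude into rupture dimensions can be sketched with empirical scaling laws. Ward's (2002) coefficients are not quoted in the abstract, so the sketch below substitutes the widely used Wells & Coppersmith (1994) all-slip-type regressions; treat it as an illustration of the approach, not as TAT's actual implementation:

```python
def rupture_dimensions(mw):
    """Approximate rupture length and width (km) from moment magnitude,
    using Wells & Coppersmith (1994) all-slip-type regressions as a
    stand-in for the Ward (2002) relations used by TAT."""
    length_km = 10 ** (-2.44 + 0.59 * mw)  # subsurface rupture length
    width_km = 10 ** (-1.01 + 0.32 * mw)   # downdip rupture width
    return length_km, width_km

# A Mw 8.0 event gives roughly a 190 km x 35 km rupture plane
length, width = rupture_dimensions(8.0)
print(round(length), round(width))
```

With the rupture plane fixed, slip can then be derived from the seismic moment and an assumed rigidity, completing the minimal source description needed to initialize a tsunami model.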
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. 
We then show examples of the application of the simulation procedure to the estimation of design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira
2012-01-01
We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~ 4.9 repeating earthquakes. The M~ 4.9 earthquake sequence is composed of nine events that occurred since 1957 which have a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~ 4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~ 4.9 main shock is co-located with those of the 1995 and 2001 M~ 4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edge of the slip areas and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edge. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~ 4.9 main shock. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~ 4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. 
These results show that the stress drop is probably determined by the fault properties and does not change much for earthquakes rupturing in the same area. The occurrence of microearthquakes in the interseismic period suggests the intrusion of aseismic slip, loading these patches. We also found that some earthquakes near the centre of the mainshock source area occurred just after the earthquakes at the deeper edge of the mainshock source area. These seismic activities probably indicate episodic aseismic slip migrating from the deeper regions in the mainshock asperity to its centre during interseismic periods. Comparison of the source parameters for the 2001 and 2008 main shocks shows that the seismic moments (1.04 × 10^16 N m and 1.12 × 10^16 N m for the 2008 and 2001 earthquakes, respectively) and source sizes (radius = 570 m and 540 m for the 2008 and 2001 earthquakes, respectively) are comparable. Based on careful phase identification and hypocentre relocation, constraining the hypocentres of other small earthquakes to their precisely located centroids, we found that the hypocentres of the 2001 and 2008 M~4.9 events are located in the southeastern part of the mainshock source area. This location corresponds to neither the episodic slip areas nor the hypocentres of small earthquakes that occurred during the earthquake cycle.
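The stress drops above come from corner frequencies and seismic moments. A minimal sketch of the standard route (a Brune-type source radius plus the circular-crack stress-drop formula) is shown below; the shear-wave speed and the constant k are assumed textbook values, not parameters taken from this study:

```python
def brune_radius(fc_hz, beta_m_s=3500.0, k=0.37):
    """Source radius (m) from S-wave corner frequency, r = k * beta / fc
    (Brune-type circular source; k = 0.37 is the classic Brune constant)."""
    return k * beta_m_s / fc_hz

def stress_drop_mpa(m0_nm, radius_m):
    """Static stress drop (MPa) for a circular crack:
    delta_sigma = 7 * M0 / (16 * r^3)."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3) / 1e6

# Plugging in the 2008 main shock values quoted above (M0 = 1.04e16 N m,
# r = 570 m) recovers a stress drop in the tens of MPa, consistent in
# order of magnitude with the reported 27 MPa.
print(round(stress_drop_mpa(1.04e16, 570.0), 1))
```

Because the radius enters cubed, stress-drop estimates are very sensitive to the corner-frequency measurement, which is why the spectral-ratio stacking described above matters.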
Intraplate earthquakes and the state of stress in oceanic lithosphere
NASA Technical Reports Server (NTRS)
Bergman, Eric A.
1986-01-01
The dominant sources of stress relieved in oceanic intraplate earthquakes are investigated to examine the usefulness of earthquakes as indicators of stress orientation. The primary data for this investigation are detailed source studies of 58 of the largest of these events, performed with the body-waveform inversion technique of Nabelek (1984). The relationship between the earthquakes and the intraplate stress field was investigated by studying the rate of seismic moment release as a function of age, the source mechanisms and tectonic associations of the larger events, and the depth dependence of various source parameters. The results indicate that the earthquake focal mechanisms are empirically reliable indicators of stress, probably reflecting the fact that an earthquake will occur most readily on a fault plane oriented in such a way that the resolved shear stress is maximized while the normal stress across the fault is minimized.
Toward standardization of slow earthquake catalog -Development of database website-
NASA Astrophysics Data System (ADS)
Kano, M.; Aso, N.; Annoura, S.; Arai, R.; Ito, Y.; Kamaya, N.; Maury, J.; Nakamura, M.; Nishimura, T.; Obana, K.; Sugioka, H.; Takagi, R.; Takahashi, T.; Takeo, A.; Yamashita, Y.; Matsuzawa, T.; Ide, S.; Obara, K.
2017-12-01
Slow earthquakes have now been widely discovered around the world thanks to the recent development of geodetic and seismic observations. Many researchers detect a wide frequency range of slow earthquakes, including low-frequency tremors, low-frequency earthquakes, very low frequency earthquakes and slow slip events, using various methods. Catalogs of the detected slow earthquakes are available in different formats from each referring paper or through a website (e.g., Wech 2010; Idehara et al. 2014). However, we need to download catalogs from different sources, deal with unformatted catalogs and understand the characteristics of the different catalogs, which may be somewhat complex, especially for those who are not familiar with slow earthquakes. In order to standardize slow earthquake catalogs and to make such complicated work easier, the Scientific Research on Innovative Areas program "Science of Slow Earthquakes" has been developing a slow earthquake catalog website. On the website, we can plot locations of various slow earthquakes via Google Maps by compiling a variety of slow earthquake catalogs, including slow slip events. This enables us to clearly visualize spatial relations among slow earthquakes at a glance and to compare the regional activities of slow earthquakes or the locations in different catalogs. In addition, we can download catalogs in a unified format and refer to the information on each catalog on a single website. Such standardization will make it more convenient for users to build on previous achievements and will promote research on slow earthquakes, eventually leading to collaborations with researchers in various fields and further understanding of the mechanisms, environmental conditions, and underlying physics of slow earthquakes. Furthermore, we expect the website to play a leading role in the international standardization of slow earthquake catalogs. We report the overview of the website and the progress of its construction.
Acknowledgment: This work is supported by JSPS KAKENHI Grant Numbers JP16H06472, JP16H06473, JP16H06474, JP16H06477 in Scientific Research on Innovative Areas "Science of Slow Earthquakes", and JP15K17743 in Grant-in-Aid for Young Scientists (B).
An Earthquake Information Service with Free and Open Source Tools
NASA Astrophysics Data System (ADS)
Schroeder, M.; Stender, V.; Jüngling, S.
2015-12-01
At the GFZ German Research Centre for Geosciences in Potsdam, the working group Earthquakes and Volcano Physics examines the spatiotemporal behavior of earthquakes. In this context the hazards of volcanic eruptions and tsunamis are also explored. The aim is to collect related information after the occurrence of such extreme events and make it available for science, and partly to the public, as quickly as possible. The overall objective of this research is to reduce the geological risks that emanate from such natural hazards. In order to meet the stated objectives, to get a quick overview of the seismicity of a particular region, and to compare the situation to historical events, a comprehensive visualization was desired. Based on the web-accessible data from the well-known GFZ GEOFON network, a user-friendly web mapping application was realized. Further, this web service integrates historical and current earthquake information from the USGS earthquake database, and more historical events from various other catalogues such as Pacheco and the International Seismological Centre (ISC). This compilation of sources is unique in the Earth sciences. Additionally, information about historical and current occurrences of volcanic eruptions and tsunamis is also retrievable. Another special feature of the application is the constraining of time ranges via a time-shifting tool: users can interactively vary the visualization by moving the time slider. Furthermore, the application was realized using the newest JavaScript libraries, which enables it to run on displays and devices of all sizes. Our contribution will present the making of, the architecture behind, and a few examples of the look and feel of this application.
NASA Astrophysics Data System (ADS)
Williams, J. R.; Hawthorne, J.; Rost, S.; Wright, T. J.
2017-12-01
Earthquakes on oceanic transform faults often show unusual behaviour. They tend to occur in swarms, have large numbers of foreshocks, and have high stress drops. We estimate stress drops for approximately 60 M > 4 earthquakes along the Blanco oceanic transform fault, a right-lateral fault separating the Juan de Fuca and Pacific plates offshore of Oregon. We find stress drops with a median of 4.4 ± 19.3 MPa and examine how they vary with earthquake moment. We calculate stress drops using a recently developed method based on inter-station phase coherence. We compare seismic records of co-located earthquakes at a range of stations. At each station, we apply an empirical Green's function (eGf) approach to remove phase path effects and isolate the relative apparent source time functions. The apparent source time functions at each earthquake should vary among stations at periods shorter than a P wave's travel time across the earthquake rupture area. Therefore we compute the rupture length of the larger earthquake by identifying the frequency at which the relative apparent source time functions start to vary among stations, leading to low inter-station phase coherence. We determine a stress drop from the rupture length and moment of the larger earthquake. Our initial stress drop estimates increase with increasing moment, suggesting that earthquakes on the Blanco fault are not self-similar. However, these stress drops may be biased by several factors, including depth phases, trace alignment, and source co-location. We find that the inclusion of depth phases (such as pP) in the analysis time window has a negligible effect on the phase coherence of our relative apparent source time functions. We find that trace alignment must be accurate to within 0.05 s to allow us to identify variations in the apparent source time functions at periods relevant for M > 4 earthquakes.
We check that the alignments are accurate enough by comparing P wave arrival times across groups of earthquakes. Finally, we note that the eGf path effect removal will be unsuccessful if earthquakes are too far apart. We therefore calculate relative earthquake locations from our estimated differential P wave arrival times, then we examine how our stress drop estimates vary with inter-earthquake distance.
NASA Astrophysics Data System (ADS)
Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe
2017-01-01
A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip.
The synthetic ground motions obtained using the EGF method agree well with the observed motions in terms of acceleration, velocity, and displacement within the frequency range of 0.3-10 Hz. These findings indicate that the 2016 Kumamoto earthquake is a standard event that follows the scaling relationship of crustal earthquakes in Japan.
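The EGF method referenced above synthesizes the ground motion of a large event by delaying and summing records of a small co-located event. A minimal sketch of the standard self-similar scaling behind that summation (equal stress drop assumed; this is the textbook relation, not the specific parameterization used in this study):

```python
def egf_scaling_factor(m0_large, m0_small):
    """Under self-similarity with equal stress drop, a large event is
    synthesized from N x N x N delayed-and-summed copies of a small
    event, where N is the cube root of the seismic-moment ratio."""
    n = round((m0_large / m0_small) ** (1.0 / 3.0))
    return n, n ** 3  # scaling factor per dimension, total subevent count

# A moment ratio of 1000 gives N = 10 per dimension, i.e. 1000 subevents
print(egf_scaling_factor(1e18, 1e15))
```

In practice N is applied along fault length, fault width, and rise time, which is why the subevent count grows as N cubed.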
NASA Astrophysics Data System (ADS)
Meng, L.; Zhou, L.; Liu, J.
2013-12-01
Abstract: The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province, China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused a great loss of property and 196 deaths. The maximum intensity is up to VIII to IX at Boxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated the source spectral parameters, and first estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion, with the associated fault rupture properties, at Boxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, which differs from the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we described the intensity distribution of the Lushan earthquake field. The simulated maximum intensity is IX, and the region of intensity VII and above covers almost 16,000 km2, consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods based on empirical relationships and the numerical modeling developed in this study have wide application in strong ground motion prediction and intensity estimation, both for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan, Ms 7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity
A non extensive statistical physics analysis of the Hellenic subduction zone seismicity
NASA Astrophysics Data System (ADS)
Vallianatos, F.; Papadakis, G.; Michas, G.; Sammonds, P.
2012-04-01
The Hellenic subduction zone is the most seismically active region in Europe [Becker & Meier, 2010]. The spatial and temporal distribution of seismicity, as well as the magnitude distribution of earthquakes in the Hellenic subduction zone, has been studied using the concept of Non-Extensive Statistical Physics (NESP) [Tsallis, 1988; Tsallis, 2009]. Non-Extensive Statistical Physics, which is a generalization of Boltzmann-Gibbs statistical physics, seems to be a suitable framework for studying complex systems (Vallianatos, 2011). Using this concept, Abe & Suzuki (2003; 2005) investigated the spatial and temporal properties of the seismicity in California and Japan, and recently Darooneh & Dadashinia (2008) did the same for Iran. Furthermore, Telesca (2011) calculated the thermodynamic parameter q of the magnitude distribution of earthquakes in the southern California earthquake catalogue. Using the external seismic zones of 36 seismic sources of shallow earthquakes in the Aegean and the surrounding area [Papazachos, 1990], we formed a dataset of the shallow seismicity (focal depth ≤ 60 km) of the subduction zone, based on the instrumental data of the Geodynamic Institute of the National Observatory of Athens (http://www.gein.noa.gr/, period 1990-2011). The catalogue consists of 12800 seismic events which correspond to 15 polygons of the aforementioned external seismic zones. These polygons define the subduction zone, as they are associated with the compressional stress field which characterizes a subducting regime. For each event, moment magnitude was calculated from ML according to the suggestions of Papazachos et al. (1997). The cumulative distribution functions of the inter-event times and the inter-event distances, as well as the magnitude distribution for each seismic zone, have been estimated, revealing a variation in the q-triplet along the Hellenic subduction zone.
The models used fit the observed distributions rather well, implying the complexity of the spatiotemporal properties of seismicity and the usefulness of NESP in investigating such phenomena, which exhibit a scale-free nature and long-range memory effects. Acknowledgments. This work was supported in part by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project entitled "Integrated understanding of Seismicity, using innovative Methodologies of Fracture mechanics along with Earthquake and non extensive statistical physics - Application to the geodynamic system of the Hellenic Arc. SEISMO FEAR HELLARC". GM and GP wish to acknowledge the partial support of the Greek State Scholarships Foundation (ΙΚΥ).
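The NESP distributions referred to above are built on the Tsallis q-exponential, the function typically fitted to cumulative inter-event time and distance distributions. A small sketch (the parameter values in the example are arbitrary, chosen only to illustrate the q > 1 power-law tail):

```python
import numpy as np

def q_exponential(x, q, scale):
    """Tsallis q-exponential exp_q(-x/scale): for q != 1 this equals
    [1 - (1 - q) * x / scale]^(1 / (1 - q)), giving a power-law tail
    for q > 1; it reduces to the ordinary exponential as q -> 1."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-9:
        return np.exp(-x / scale)
    base = np.clip(1.0 - (1.0 - q) * x / scale, 0.0, None)
    return base ** (1.0 / (1.0 - q))

# For q > 1 the tail decays far more slowly than an exponential
print(q_exponential(10.0, 1.5, 1.0) > np.exp(-10.0))
```

Fitting q and the scale parameter to observed cumulative distributions is what yields the q-values whose variation along the arc is reported above.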
A rapid estimation of near field tsunami run-up
Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime
2015-01-01
Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua earthquake, the 2001 Mw 8.4 Perú earthquake, the 2003 Mw 8.3 Hokkaido earthquake, the 2007 Mw 8.1 Perú earthquake, the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake and the recent 2014 Mw 8.2 Iquique earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Such calculations will thus provide faster run-up information than is available from existing uniform-slip seismic source databases or pre-modeled past events.
NASA Astrophysics Data System (ADS)
Suzuki, K.; Kamiya, S.; Takahashi, N.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) installed DONET (Dense Oceanfloor Network System for Earthquakes and Tsunamis) off the Kii Peninsula, southwest of Japan, to monitor earthquakes and tsunamis. The stations of DONET1, distributed in Kumano-nada, and of DONET2, distributed off Muroto, were installed by August 2011 and April 2016, respectively. After the installation of all 51 stations, DONET was transferred to the National Research Institute for Earth Science and Disaster Resilience (NIED), and NIED and JAMSTEC have collaborated in the operation of DONET since April 2016. To investigate the seismicity around the source areas of the 1946 Nankai and the 1944 Tonankai earthquakes, we detected earthquakes from the records of the broadband seismometers of DONET. Because the DONET stations are far from land stations, we can detect smaller earthquakes than is possible using land stations alone. Monitoring spatiotemporal changes in seismicity is important for understanding the stress state and the seismogenic mechanism. In this study we aim to evaluate the seismicity around the source areas of the Nankai and Tonankai earthquakes using our earthquake catalogue. The frequency-magnitude relationships of earthquakes in the DONET1 and DONET2 areas have an almost constant slope of about -1 for earthquakes of ML larger than 1.5 and 2.5, respectively, satisfying the Gutenberg-Richter law, while the slope for smaller earthquakes approaches 0, reflecting the detection limits. While most of the earthquakes occurred in the aftershock area of the 2004 off-Kii Peninsula earthquakes, very limited activity was detected in the source region of the Nankai and Tonankai earthquakes, except for the large earthquake (MJMA = 6.5) on 1 April 2016 and its aftershocks. We will evaluate the detection limit of earthquakes in more detail and investigate spatiotemporal seismicity changes as more data accumulate.
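The slope of about -1 mentioned above is the Gutenberg-Richter b-value. A common way to estimate it above the completeness magnitude is Aki's (1965) maximum-likelihood formula, sketched here; the binning correction assumes 0.1-unit magnitudes, and both the formula and that assumption are standard practice rather than details of this study:

```python
import math

def b_value_aki(magnitudes, m_c, bin_width=0.1):
    """Maximum-likelihood b-value (Aki, 1965, with Utsu's binning
    correction) for events at or above completeness magnitude m_c:
    b = log10(e) / (mean(M) - m_c + bin_width / 2)."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c + bin_width / 2.0)

# A catalogue whose mean excess over m_c is ~0.43 magnitude units
# yields b ≈ 1, i.e. the slope of about -1 seen in the DONET data.
print(round(b_value_aki([2.3843] * 100, 2.0), 2))
```

Applying this only above the completeness magnitude matters: including the under-recorded small events (where the slope bends toward 0) would bias b low.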
Tilt precursors before earthquakes on the San Andreas fault, California
Johnston, M.J.S.; Mortensen, C.E.
1974-01-01
An array of 14 biaxial shallow-borehole tiltmeters (at 10^-7 radian sensitivity) has been installed along 85 kilometers of the San Andreas fault during the past year. Earthquake-related changes in tilt have been simultaneously observed on up to four independent instruments. At earthquake distances greater than 10 earthquake source dimensions, there are few clear indications of tilt change. For the four instruments with the longest records (>10 months), 26 earthquakes have occurred since July 1973 with at least one instrument closer than 10 source dimensions and 8 earthquakes with more than one instrument within that distance. Precursors in tilt direction have been observed before more than 10 earthquakes or groups of earthquakes, and no similar effect has yet been seen without the occurrence of an earthquake.
Uncertainty, variability, and earthquake physics in ground‐motion prediction equations
Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.
2017-01-01
Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
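The residual partitioning described above can be illustrated with a toy simulation: synthetic residuals are built from repeatable event and site terms plus aleatory noise, and the standard deviation is recomputed after the repeatable terms are removed. All distributions and group sizes below are invented for illustration; this is not the ANZA dataset or the authors' regression:

```python
import random
import statistics

random.seed(0)
n_events, n_sites = 50, 20

# Repeatable (epistemic) terms and aleatory noise, in ln units.
event_term = {e: random.gauss(0, 0.5) for e in range(n_events)}
site_term = {s: random.gauss(0, 0.5) for s in range(n_sites)}
records = [(e, s, event_term[e] + site_term[s] + random.gauss(0, 0.3))
           for e in range(n_events) for s in range(n_sites)]

sigma_total = statistics.pstdev(r for _, _, r in records)

def demean(recs, key):
    """Subtract the mean residual within each group defined by key."""
    sums, counts = {}, {}
    for rec in recs:
        k = key(rec)
        sums[k] = sums.get(k, 0.0) + rec[2]
        counts[k] = counts.get(k, 0) + 1
    means = {k: sums[k] / counts[k] for k in sums}
    return [(e, s, r - means[key((e, s, r))]) for e, s, r in recs]

records = demean(records, lambda rec: rec[0])  # remove repeatable event terms
records = demean(records, lambda rec: rec[1])  # remove repeatable site terms
sigma_nonergodic = statistics.pstdev(r for _, _, r in records)
```

The remaining standard deviation approaches the aleatory noise level, mirroring the paper's reduction from the ergodic to the nonergodic sigma.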
NASA Astrophysics Data System (ADS)
Kun, C.
2015-12-01
Studies have shown that ground motion parameter estimates from attenuation relationships are often greater than the observed values, mainly because the multiple ruptures of a large earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to impose constraints from the source to improve the accuracy of ShakeMaps. The causative fault of the Yushu Ms 7.1 earthquake is nearly vertical (dip 83°), and its source process was distinctly dispersed in time and space. Based on its source process, the Yushu Ms 7.1 mainshock can be divided into several sub-events. The magnitude of each sub-event depends on the area under its pulse in the source time function, and its location is derived from the source process. We use the ShakeMap method, taking site effects into account, to generate a ShakeMap for each sub-event. Finally, the ShakeMap of the mainshock can be acquired by superposing the ShakeMaps of all the sub-events in space. For comparison, a ShakeMap for the mainshock with a single magnitude can also be derived from the surface rupture of the causative fault mapped in the field survey. We compare the ShakeMaps from both methods with the investigated intensities. The comparison shows that the sub-event decomposition method more accurately reflects near-field shaking; in the far field, however, where shaking is controlled by the weakening influence of the source, the estimated intensity VI area is smaller than the actually investigated intensity, perhaps because far-field seismic intensity is related to the increased shaking duration of the two events. In general, decomposing the mainshock based on its source process, with a ShakeMap for each sub-event, is feasible for disaster emergency response, decision-making, and rapid disaster assessment after an earthquake.
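The superposition step can be sketched as follows. The combination rule (keeping the largest sub-event PGA in each grid cell) and the toy grids are assumptions for illustration, not the paper's actual procedure:

```python
# Hypothetical PGA grids (in g) for three sub-events, on the same 3x4 grid.
subevent_pga = [
    [[0.10, 0.20, 0.15, 0.05],
     [0.25, 0.40, 0.20, 0.10],
     [0.12, 0.18, 0.10, 0.04]],
    [[0.05, 0.10, 0.30, 0.22],
     [0.08, 0.15, 0.45, 0.30],
     [0.04, 0.09, 0.20, 0.12]],
    [[0.02, 0.05, 0.08, 0.12],
     [0.03, 0.07, 0.12, 0.28],
     [0.02, 0.04, 0.09, 0.15]],
]

def superpose(grids):
    """Combine sub-event ShakeMap grids cell by cell, keeping the largest PGA."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[max(g[i][j] for g in grids) for j in range(cols)]
            for i in range(rows)]

mainshock = superpose(subevent_pga)
```

Each mainshock cell thus reflects whichever sub-event dominates the shaking at that location.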
Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.
2012-12-01
Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of plural strong motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that SMGA size follows an empirical scaling relationship with seismic moment. Therefore, the SMGA size for an anticipated earthquake can be assumed from that empirical relation, given the seismic moment. Concerning the placement of SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained from SMGA modeling, with the Nojima, Suma, and Suwayama segments each containing one SMGA (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which demonstrates the applicability of the empirical scaling relationship for SMGAs. Two of the SMGAs lie in the Miyagi-Oki segment, and the other two lie in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all of the SMGAs correspond to historical source areas of the 1930s. Those SMGAs do not overlap the huge-slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (0.1-10 s).
Information on fault segmentation in the subduction zone, or on historical earthquake source areas, is also applicable to the construction of SMGA settings for strong ground motion prediction of future earthquakes.
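An empirical moment-area scaling of the kind invoked above can be sketched as a self-similar power law. The coefficient below is a placeholder for illustration, not the published regression; only the M0^(2/3) exponent is the point:

```python
def moment_from_mw(mw):
    """Seismic moment M0 in N·m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def smga_area(m0, c=2.0e-15):
    """Illustrative self-similar scaling: combined SMGA area (km^2) ~ c * M0^(2/3).
    The coefficient c is a placeholder, not a published value."""
    return c * m0 ** (2.0 / 3.0)

a7 = smga_area(moment_from_mw(7.0))
a9 = smga_area(moment_from_mw(9.0))
ratio = a9 / a7  # two magnitude units => moment factor 10^3 => area factor 10^2
```

Under self-similarity, each unit of magnitude multiplies the moment by 10^1.5 and the predicted SMGA area by 10.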
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr; Türker, Tuğba, E-mail: tturker@ktu.edu.tr
The aim of this study is to determine the earthquake hazard for Ağrı and its vicinity using the exponential distribution method for different seismic sources. A homogeneous earthquake catalog of 456 earthquakes covering the instrumental period (1900-2015) has been compiled for Ağrı and vicinity. The catalog was created from several sources, including the Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and its vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. The average magnitude values are calculated for the specified magnitude ranges in each of the 7 seismic source regions. For each region, the largest differences between the observed and expected cumulative probabilities are determined for the specified magnitude classes. The recurrence periods and annual numbers of earthquake occurrences are estimated for Ağrı and vicinity. As a result, occurrence probabilities are determined for each of the 7 seismic source regions: for Region 1, earthquakes greater than magnitude 6.7; Region 2, greater than 4.7; Region 3, greater than 5.2; Region 4, greater than 6.2; Region 5, greater than 5.7; Region 6, greater than 7.2; and Region 7, greater than 6.2. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is magnitude 7, in Region 6.
For Region 6, the estimated future occurrence times for the determined magnitudes are, respectively: magnitude 7.2 in 158 years, 6.7 in 70 years, 6.2 in 31 years, 5.7 in 13 years, and 5.2 in 6 years.
NASA Astrophysics Data System (ADS)
Bayrak, Yusuf; Türker, Tuǧba
2016-04-01
The aim of this study is to determine the earthquake hazard for Ağrı and its vicinity using the exponential distribution method for different seismic sources. A homogeneous earthquake catalog of 456 earthquakes covering the instrumental period (1900-2015) has been compiled for Ağrı and vicinity. The catalog was created from several sources, including the Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and its vicinity are divided into 7 seismic source regions on the basis of the epicenter distribution of instrumental-period earthquakes, focal mechanism solutions, and existing tectonic structures. The average magnitude values are calculated for the specified magnitude ranges in each of the 7 seismic source regions. For each region, the largest differences between the observed and expected cumulative probabilities are determined for the specified magnitude classes. The recurrence periods and annual numbers of earthquake occurrences are estimated for Ağrı and vicinity. As a result, occurrence probabilities are determined for each of the 7 seismic source regions: for Region 1, earthquakes greater than magnitude 6.7; Region 2, greater than 4.7; Region 3, greater than 5.2; Region 4, greater than 6.2; Region 5, greater than 5.7; Region 6, greater than 7.2; and Region 7, greater than 6.2. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is magnitude 7, in Region 6.
For Region 6, the estimated future occurrence times for the determined magnitudes are, respectively: magnitude 7.2 in 158 years, 6.7 in 70 years, 6.2 in 31 years, 5.7 in 13 years, and 5.2 in 6 years.
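The recurrence-period arithmetic underlying such estimates can be sketched under a Gutenberg-Richter rate model with a Poisson assumption; the a- and b-values below are illustrative, not the study's fitted parameters:

```python
import math

def annual_rate(m, a, b):
    """Cumulative Gutenberg-Richter rate: N(>=m) = 10^(a - b*m) events per year."""
    return 10 ** (a - b * m)

def return_period(m, a, b):
    """Mean recurrence interval in years for earthquakes of magnitude >= m."""
    return 1.0 / annual_rate(m, a, b)

def prob_in_t_years(m, a, b, t):
    """Poisson probability of at least one M >= m event within t years."""
    return 1.0 - math.exp(-annual_rate(m, a, b) * t)

# Illustrative parameters only (not the study's values for any region).
a_val, b_val = 3.0, 0.9
t72 = return_period(7.2, a_val, b_val)
p50 = prob_in_t_years(7.2, a_val, b_val, 50)
```

Larger target magnitudes give longer return periods, matching the ordering of the Region 6 estimates above.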
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.
2011-12-01
Over the past several years, USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault, in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and the friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data.
Data from these same stations will also serve the very practical purpose of enabling earthquake early warning and greatly improving rapid finite-fault source modeling. Multiple uses of the effectively very broad-band data obtained by these stations, for operational and research purposes, are bound to occur especially because all data will be freely, openly and instantly available.
NASA Astrophysics Data System (ADS)
Allstadt, Kate
The following work is focused on the use of both traditional and novel seismological tools, combined with concepts from other disciplines, to investigate shallow seismic sources and hazards. The study area is the dynamic landscape of the Pacific Northwest and its wide-ranging earthquake, landslide, glacier, and volcano-related hazards. The first chapter focuses on landsliding triggered by earthquakes, with a shallow crustal earthquake in Seattle as a case study. The study demonstrates that utilizing broadband synthetic seismograms and rigorously incorporating 3D basin amplification, 1D site effects, and fault directivity allows for a more complete assessment of regional seismically induced landslide hazard. The study shows that the hazard is severe for Seattle, and provides a framework for future probabilistic maps and near real-time hazard assessment. The second chapter focuses on landslides that generate seismic waves and how these signals can be harnessed to better understand landslide dynamics. This is demonstrated using two contrasting Pacific Northwest landslides. The 2010 Mount Meager, BC, landslide generated strong long period waves. New full waveform inversion methods reveal the time history of the forces the landslide exerted on the earth, which is used to quantify event dynamics. Despite having a similar volume (~10^7 m^3), the 2009 Nile Valley, WA, landslide did not generate observable long period motions because of its smaller accelerations, but pulses of higher frequency waves were valuable in piecing together the complex sequence of events. The final chapter details the difficulties of monitoring glacier-clad volcanoes. The focus is on small, repeating, low-frequency earthquakes at Mount Rainier that resemble volcanic earthquakes. However, based on this investigation, they are actually glacial in origin: most likely stick-slip sliding of glaciers triggered by snow loading.
Identification of the source offers a view of basal glacier processes, discriminates against alarming volcanic noises, and has implications for repeating earthquakes in tectonic environments. This body of work demonstrates that by combining methods and concepts from seismology and other disciplines in new ways, we can obtain a better understanding and a fresh perspective of the physics behind the shallow seismic sources and hazards that threaten the Pacific Northwest.
NASA Astrophysics Data System (ADS)
Gümüş, Ayla; Yalım, Hüseyin Ali
2018-02-01
Radon emanation occurs in all rocks and soils containing uranium. Anomalies in radon concentrations before earthquakes are observed along fault lines and in geothermal sources, uranium deposits, and areas of volcanic activity. The aim of this study is to investigate the relationship between radon anomalies in water resources and the radial distances of the sources to the earthquake center. For this purpose, the radon concentrations of 9 different deep water sources near the Akşehir fault line were determined by taking samples at monthly intervals for two years. The relationship between the radon anomalies and the radial distances of the sources to the earthquake center was obtained for these sources.
Discrimination between pre-seismic electromagnetic anomalies and solar activity effects
NASA Astrophysics Data System (ADS)
Koulouras, G.; Balasis, G.; Kiourktsidis, I.; Nannos, E.; Kontakos, K.; Stonham, J.; Ruzhin, Y.; Eftaxias, K.; Cavouras, D.; Nomicos, C.
2009-04-01
Laboratory studies suggest that electromagnetic emissions in a wide frequency spectrum ranging from kilohertz (kHz) to very high megahertz (MHz) frequencies are produced by the opening of microcracks, with the MHz radiation appearing earlier than the kHz radiation. Earthquakes are large-scale fracture phenomena in the Earth's heterogeneous crust. Thus, the radiated kHz-MHz electromagnetic emissions are detectable not only in the laboratory but also at a geological scale. Clear MHz-to-kHz electromagnetic anomalies have been systematically detected over periods ranging from a few days to a few hours prior to recent destructive earthquakes in Greece. We should bear in mind that whether electromagnetic precursors to earthquakes exist is an important question not only for earthquake prediction but mainly for understanding the physical processes of earthquake generation. An open question in this field of research is the classification of a detected electromagnetic anomaly as a pre-seismic signal associated with earthquake occurrence. Indeed, electromagnetic fluctuations in the frequency range of MHz are known to be related to a few sources, including atmospheric noise (due to lightning), man-made composite noise, solar-terrestrial noise (resulting from the Sun-solar wind-magnetosphere-ionosphere-Earth's surface chain) or cosmic noise, and finally, the lithospheric effect, namely pre-seismic activity. We focus on this point in this paper. We suggest that if a combination of detected kHz and MHz electromagnetic anomalies satisfies the set of criteria presented herein, these anomalies could be considered as candidate precursory phenomena of an impending earthquake.
Discrimination between preseismic electromagnetic anomalies and solar activity effects
NASA Astrophysics Data System (ADS)
Koulouras, Gr; Balasis, G.; Kontakos, K.; Ruzhin, Y.; Avgoustis, G.; Kavouras, D.; Nomicos, C.
2009-04-01
Laboratory studies suggest that electromagnetic emissions in a wide frequency spectrum ranging from kHz to very high MHz frequencies are produced by the opening of microcracks, with the MHz radiation appearing earlier than the kHz radiation. Earthquakes are large-scale fracture phenomena in the Earth's heterogeneous crust. Thus, the radiated kHz-MHz electromagnetic emissions are detectable not only at the laboratory scale but also at the geological scale. Clear MHz-to-kHz electromagnetic anomalies have been systematically detected over periods ranging from a few days to a few hours prior to recent destructive earthquakes in Greece. We bear in mind that whether electromagnetic precursors to earthquakes exist is an important question not only for earthquake prediction but mainly for understanding the physical processes of earthquake generation. An open question in this field of research is the classification of a detected electromagnetic anomaly as a pre-seismic signal associated with earthquake occurrence. Indeed, electromagnetic fluctuations in the frequency range of MHz are known to be related to a few sources: they might be atmospheric noise (due to lightning), man-made composite noise, solar-terrestrial noise (resulting from the Sun-solar wind-magnetosphere-ionosphere-Earth's surface chain) or cosmic noise, and finally, a lithospheric effect, namely pre-seismic activity. We focus on this point. We suggest that if a combination of detected kHz and MHz electromagnetic anomalies satisfies the set of criteria presented herein, these anomalies could be considered as candidate precursory phenomena of an impending earthquake.
NASA Astrophysics Data System (ADS)
Arapostathis, Stathis; Parcharidis, Isaak; Kalogeras, Ioannis; Drakatos, George
2015-04-01
In this paper we present an innovative approach for developing seismic intensity maps within a minimal time frame. As a case study, we use a recent earthquake that occurred in Western Greece (Kefallinia Island, on February 26, 2014). The magnitude of the earthquake was M=5.9 (Institute of Geodynamics - National Observatory of Athens). The earthquake's effects comprised damage to property and changes to the physical environment in the area. The innovative part of this research is the use of crowdsourcing, in the form of Twitter content, as a source for assessing macroseismic intensity. Twitter, as a social media service with micro-blogging characteristics, a semantic structure that allows the storage of spatial content, and a high volume of user-generated content, is a suitable source from which to obtain and extract knowledge related to macroseismic intensity in different geographic areas and over short time periods. Moreover, the speed at which Twitter content is generated allows accurate results to be obtained only a few hours after the occurrence of the earthquake. The method used to extract, evaluate, and map the intensity-related information is described briefly in this paper. First, we pick out all tweets posted within the first 48 hours that contain information related to intensity and refer to a geographic location. We then georeference these tweets and associate each with an intensity grade on the European Macroseismic Scale (EMS98), based on the information contained in its text. Finally, we apply various spatial statistics and GIS methods and interpolate the values to cover all the appropriate geographic areas. The final output contains macroseismic intensity maps for the Lixouri area (Kefallinia Island), produced from Twitter data posted in the first six, twelve, twenty-four, and forty-eight hours after the earthquake occurrence.
The results are compared with other intensity maps published for the same earthquake by institutions around the world, as well as with isoseismal maps of previous earthquakes in the same area.
NASA Astrophysics Data System (ADS)
Marc, O.; Hovius, N.; Meunier, P.; Rault, C.
2017-12-01
In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale, analytical predictions of bulk coseismic landsliding, that is, of total landslide area and volume (Marc et al., 2016a) as well as of the regional area within which most landslides must be distributed (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and could therefore be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model, we compiled and normalized estimates of total landslide volume, total landslide area, and regional area affected by landslides for 40, 17, and 83 earthquakes, respectively. We found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model predicts the landslide areas and associated volumes within a factor of 2 for about 70% of the cases in our databases. The prediction of regional area affected does not require calibration for landscape steepness and is within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories, we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock-mass strength in the epicentral area, or to shaking duration and other seismic source complexities ignored by the model.
Applications include predicting the mass balance of earthquakes: the model predicts that only earthquakes generated on a narrow range of fault sizes may cause more erosion than uplift (Marc et al., 2016b), while very large earthquakes are expected to always build topography. The model could also be used to physically calibrate hillslope erosion, or perturbations to the river network, within landscape evolution models.
Regional Earthquake Shaking and Loss Estimation
NASA Astrophysics Data System (ADS)
Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH-Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimating the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporating strong ground motion and other empirical macroseismic data to improve the Shake Map. 5. Estimating the losses (damage, casualty, and economic) at different levels of sophistication (0, 1, and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1).
For given basic source parameters, the intensity distributions can be computed using: a) regional intensity attenuation relationships; b) intensity correlations with attenuation-relationship-based PGV, PGA, and spectral amplitudes; and c) intensity correlations with synthetic Fourier amplitude spectra. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented, and comparisons with the observed losses are made. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to the relevant agencies for the planning and coordination of post-earthquake emergency response. The same software can, however, also be used for scenario earthquake loss estimation and related Monte-Carlo-type simulations.
Who cares about Mid-Ocean Ridge Earthquakes? And Why?
NASA Astrophysics Data System (ADS)
Tolstoy, M.
2004-12-01
Every day the surface of our planet is being slowly ripped apart by the forces of plate tectonics. Much of this activity occurs underwater and goes unnoticed, except by a few marine seismologists who avidly follow the creaks and groans of the ocean floor in an attempt to understand the spreading and formation of oceanic crust. Are marine seismologists really the only ones that care? As it turns out, deep beneath the ocean surface, earthquakes play a fundamental role in a myriad of activity centered on mid-ocean ridges, where new crust forms and breaks on a regular basis. This activity takes the form of exotic geological structures hosting roasting hot fluids and bizarre chemosynthetic life forms. One of the fundamental drivers for this other world on the seafloor is earthquakes. Earthquakes provide cracks that allow seawater to penetrate the rocks, heat up, and resurface as hydrothermal vent fluids, thus providing chemicals to feed a thriving biological community. Earthquakes can cause pressure changes along cracks that can fundamentally alter fluid flow rates and paths. Thus earthquakes can both cut off existing communities from their nutrient source and provide new oases on the seafloor around which life can thrive. This poster will present some of the fundamental physical principles of how earthquakes can impact fluid flow, and hence life on the seafloor. Using these other-worldly landscapes and alien-like life forms to woo the unsuspecting passerby, we will sneak geophysics into the picture and tell the story of why earthquakes are so fundamental to life on the seafloor, and perhaps life elsewhere in the universe.
NASA Astrophysics Data System (ADS)
Yoshida, Keisuke; Saito, Tatsuhiko; Urata, Yumi; Asano, Youichi; Hasegawa, Akira
2017-12-01
In this study, we investigated temporal variations in stress drop and b-value in the earthquake swarm that occurred at the Yamagata-Fukushima border, NE Japan, after the 2011 Tohoku-Oki earthquake. In this swarm, frictional strengths were estimated to have changed with time due to fluid diffusion. We first estimated the source spectra for 1,800 earthquakes with 2.0 ≤ MJMA < 3.0, by correcting the site-amplification and attenuation effects determined using both S waves and coda waves. We then determined corner frequency assuming the omega-square model and estimated stress drop for 1,693 earthquakes. We found that the estimated stress drops tended to have values of 1-4 MPa and that stress drops significantly changed with time. In particular, the estimated stress drops were very small at the beginning, and increased with time for 50 days. Similar temporal changes were obtained for b-value; the b-value was very high (b ≈ 2) at the beginning, and decreased with time, becoming approximately constant (b ≈ 1) after 50 days. Patterns of temporal changes in stress drop and b-value were similar to the patterns for frictional strength and earthquake occurrence rate, suggesting that the change in frictional strength due to migrating fluid not only triggered the swarm activity but also affected earthquake and seismicity characteristics. The estimated high Q^-1 value, as well as the hypocenter migration, supports the presence of fluid, and its role in the generation and physical characteristics of the swarm.
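Stress drop estimation from corner frequency under the omega-square model is commonly done with Brune's (1970) relations; a minimal sketch, where the S-wave speed and the example magnitude and corner frequency are assumed values, not those of the study:

```python
def brune_stress_drop(m0, fc, beta=3500.0):
    """Stress drop (Pa) under the omega-square (Brune, 1970) source model.
    m0: seismic moment (N·m); fc: corner frequency (Hz);
    beta: S-wave speed (m/s), an assumed crustal value here."""
    r = 0.37 * beta / fc          # Brune source radius (m)
    return 7.0 * m0 / (16.0 * r ** 3)

# An M ~2.5 earthquake: M0 = 10^(1.5*2.5 + 9.1) N·m, with fc assumed ~10 Hz.
m0 = 10 ** (1.5 * 2.5 + 9.1)
dsigma_mpa = brune_stress_drop(m0, fc=10.0) / 1e6
```

With these assumed inputs the result lands in the 1-4 MPa range typical of the swarm events described above.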
Real-Time Earthquake Monitoring with Spatio-Temporal Fields
NASA Astrophysics Data System (ADS)
Whittier, J. C.; Nittel, S.; Subasinghe, I.
2017-10-01
With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism for delivering observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope need to write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open-source data stream engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream, and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlated the detected events with a USGS earthquake event feed. The query results are visualized in real time.
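The core stream query described above — the maximum displacement over the latest window of each stream — can be sketched as a minimal, single-process stand-in for the field-stream queries the paper runs on Apache Spark. The window length, sample times, and displacement values below are illustrative.

```python
from collections import deque

class WindowedMax:
    """Max displacement over the latest time window of one GPS stream.

    A simplified, single-process stand-in for the field-stream queries
    the paper implements on a data stream engine; window length and
    threshold choices are illustrative assumptions.
    """
    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, displacement) pairs

    def push(self, t, disp):
        """Ingest one sample and return the windowed maximum."""
        self.samples.append((t, disp))
        # drop samples older than the window
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        return max(d for _, d in self.samples)

w = WindowedMax(window_s=10.0)
print(w.push(0.0, 0.002))   # 0.002
print(w.push(5.0, 0.010))   # 0.010
print(w.push(16.0, 0.004))  # 0.004 -- the 0.010 sample aged out
```

In the paper's setting one such windowed query runs per GPS stream, and a second stage relates the per-stream maxima of spatially neighboring stations to delineate an event's extent.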
Updating the USGS seismic hazard maps for Alaska
Mueller, Charles; Briggs, Richard; Wesson, Robert L.; Petersen, Mark D.
2015-01-01
The U.S. Geological Survey makes probabilistic seismic hazard maps and engineering design maps for building codes, emergency planning, risk management, and many other applications. The methodology considers all known earthquake sources with their associated magnitude and rate distributions. Specific faults can be modeled if slip-rate or recurrence information is available. Otherwise, areal sources are developed from earthquake catalogs or GPS data. Sources are combined with ground-motion estimates to compute the hazard. The current maps for Alaska were developed in 2007, and included modeled sources for the Alaska-Aleutian megathrust, a few crustal faults, and areal seismicity sources. The megathrust was modeled as a segmented dipping plane with segmentation largely derived from the slip patches of past earthquakes. Some megathrust deformation is aseismic, so recurrence was estimated from seismic history rather than plate rates. Crustal faults included the Fairweather-Queen Charlotte system, the Denali–Totschunda system, the Castle Mountain fault, two faults on Kodiak Island, and the Transition fault, with recurrence estimated from geologic data. Areal seismicity sources were developed for Benioff-zone earthquakes and for crustal earthquakes not associated with modeled faults. We review the current state of knowledge in Alaska from a seismic-hazard perspective, in anticipation of future updates of the maps. Updated source models will consider revised seismicity catalogs, new information on crustal faults, new GPS data, and new thinking on megathrust recurrence, segmentation, and geometry. Revised ground-motion models will provide up-to-date shaking estimates for crustal earthquakes and subduction earthquakes in Alaska.
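The hazard computation sketched in this abstract — combining source rates and magnitudes with ground-motion estimates — reduces, in its simplest form, to summing exceedance probabilities over sources. The toy median-attenuation coefficients below are illustrative assumptions, not a published ground-motion model.

```python
import math

def hazard_rate(level_g, sources, sigma_ln=0.6):
    """Annual rate of exceeding a ground-motion level at one site.

    Minimal PSHA sketch: sum over sources of rate * P(exceed | median GM),
    with lognormal ground-motion variability. The toy median attenuation
    (PGA growing with magnitude, decaying with distance) is an
    illustrative assumption, not a published GMPE.
    """
    total = 0.0
    for rate_per_yr, mag, dist_km in sources:
        ln_med = 0.5 + 1.0 * (mag - 6.0) - 1.0 * math.log(dist_km)
        z = (math.log(level_g) - ln_med) / sigma_ln
        total += rate_per_yr * 0.5 * math.erfc(z / math.sqrt(2.0))
    return total

srcs = [(0.01, 8.0, 50.0),   # megathrust-like: rare, large, distant
        (0.1, 6.5, 20.0)]    # crustal fault: frequent, moderate, close
for a in (0.1, 0.3, 0.5):
    print(a, hazard_rate(a, srcs))
```

The real maps integrate over full magnitude-rate distributions, areal seismicity sources, and multiple ground-motion models, but the structure — rates times conditional exceedance probabilities, summed — is the same.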
High Attenuation Rate for Shallow, Small Earthquakes in Japan
NASA Astrophysics Data System (ADS)
Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe
2017-09-01
We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and Mw 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, for moderate or large events the dominance of surface waves keeps the apparent attenuation rate essentially the same as that of body waves propagating in a homogeneous medium.
Kroll, Kayla A.; Cochran, Elizabeth S.; Murray, Kyle E.
2017-01-01
The Arbuckle Group (Arbuckle) is a basal sedimentary unit that is the primary target for saltwater disposal in Oklahoma. Thus, the reservoir characteristics of the Arbuckle, including how the poroelastic properties change laterally and over time are of significant interest. We report observations of fluid level changes in two monitoring wells in response to the 3 September 2016 Mw 5.8 Pawnee and the 7 November 2016 Mw 5.0 Cushing earthquakes. We investigate the relationship between static strain resulting from these events and the fluid level changes observed in the wells. We model the fluid level response by estimating static strains from a set of earthquake source parameters and spatiotemporal poroelastic properties of the Arbuckle in the neighborhood of the monitoring wells. Results suggest that both the direction of the observed fluid level step and the amplitude can be predicted from the computed volumetric strain change and a reasonable set of poroelastic parameters. Modeling results indicate that poroelastic parameters differ at the time of the Pawnee and Cushing earthquakes, with a moderately higher Skempton’s coefficient required to fit the response to the Cushing earthquake. This may indicate that dynamic shaking resulted in physical alteration of the Arbuckle at distances up to ∼50 km from the Pawnee earthquake.
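The modeling step described above — predicting the sign and amplitude of a co-seismic fluid-level step from the static volumetric strain — can be sketched with the standard undrained poroelastic relation. The Skempton's coefficient and undrained bulk modulus below are illustrative assumptions, not the paper's fitted values.

```python
def water_level_step(d_eps_vol, B=0.6, Ku_pa=2.0e10, rho=1000.0, g=9.81):
    """Co-seismic well water-level step (m) from static volumetric strain.

    Undrained poroelastic response: dp = -B * Ku * d_eps_vol (extension
    positive, so dilatation lowers pore pressure), converted to a head
    change via dh = dp / (rho * g). B and Ku are illustrative assumptions,
    not the values fitted for the Arbuckle in the study.
    """
    dp = -B * Ku_pa * d_eps_vol
    return dp / (rho * g)

# e.g. a contraction of 1e-8 (d_eps_vol = -1e-8), of the order expected
# tens of km from a moderate earthquake; values are illustrative
print(water_level_step(-1e-8), "m")  # positive: level rises
```

Both the sign (rise for contraction, drop for dilatation) and the amplitude of the step then follow from the computed strain field, which is how the study discriminates between candidate poroelastic parameter sets.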
Earthquake Emergency Education in Dushanbe, Tajikistan
ERIC Educational Resources Information Center
Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.
2010-01-01
We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Faults are among the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak and Semarang faults are the three closest earthquake sources surrounding Semarang. The ground motion from those three earthquake sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, of minimum 40 meters height, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis, with emphasis on the prediction of deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw; they were used because recorded time histories from those three fault sources are inadequate. The sensitivity of a building to earthquakes can be assessed by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the surface response spectra calculated using the seismic code are greater than those calculated from the acceleration time histories, the structure will be stable enough to resist the earthquake force.
On near-source earthquake triggering
Parsons, T.; Velasco, A.A.
2009-01-01
When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (≈12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.
Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction
NASA Astrophysics Data System (ADS)
Allen, A. A.; Chen, X.
2017-12-01
The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observations of rupture properties are difficult to achieve due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides opportunities to look at these earthquakes in detail. Here we look at the transform plate boundary fault located in the MTJ, and measure source parameters of Mw ≥ 4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity for the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as their magnitude difference from the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, an Mw 5.72 event that occurred in January of 2015, and select two EGFs, an Mw 1.75 and an Mw 1.73 event located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 s and a rupture length of about 2.78 km. The earthquake ruptured towards the west along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event deviates from a pure Brune source model following the definition of Uchide and Imanishi [2016], likely due to near-field recordings with rupture complexity.
We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and will extend the analysis to three other Mw ≥ 4 earthquakes within the array.
Overview of seismic potential in the central and eastern United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweig, E.S.
1995-12-31
The seismic potential of any region can be framed in terms of the locations of source zones, the frequency of earthquake occurrence for each source, and the maximum size of earthquake that can be expected from each source. As delineated by modern and historical seismicity, the most important seismic source zones affecting the eastern United States include the New Madrid and Wabash Valley seismic zones of the central U.S., the southern Appalachians and Charleston, South Carolina, areas in the southeast, and the northern Appalachians and Adirondacks in the northeast. The most prominent of these in terms of current seismicity and historical seismic moment release is the New Madrid seismic zone, which produced three earthquakes of moment magnitude ≥ 8 in 1811 and 1812. The frequency of earthquake recurrence can be examined using the instrumental record, the historical record, and the geological record. Each record covers a unique time period and has a different scale of temporal resolution and completeness. The Wabash Valley is an example where the long-term geological record indicates a greater potential than the instrumental and historical records, which points to the need to examine all of the evidence in a region in order to obtain a credible estimate of earthquake hazards. Although earthquake hazards may be dominated by mid-magnitude-6 earthquakes within the mapped seismic source zones, the 1994 Northridge, California, earthquake is only the most recent reminder of how dangerous it is to assume that future events will occur only on faults known to have hosted past events, and of how destructive such an earthquake can be.
NASA Astrophysics Data System (ADS)
Asano, K.
2017-12-01
An MJMA 6.5 earthquake occurred offshore the Kii peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. The significance of this event for seismic observation is that it occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region over the last half century, it is a good opportunity to investigate the source characteristics related to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected for the empirical Green's functions. The source parameters of the SMGA were optimized by waveform modeling in the frequency range 0.4-10 Hz.
The best estimate of SMGA size is 19.4 km², and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan Trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai trough differ from those in the Japan Trench, which could be important for accounting for regional variation in ground motion prediction.
Generalized statistical mechanics approaches to earthquakes and tectonics.
Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios
2016-12-01
Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity at local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
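The NESM distributions mentioned in this review are built on the Tsallis q-exponential, which generalizes the ordinary exponential and produces the power-law tails observed in earthquake statistics. A minimal sketch; the characteristic time and q value in the usage lines are illustrative assumptions.

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential, the building block of NESM distributions;
    reduces to the ordinary exponential as q -> 1, and develops a
    power-law tail for q > 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# e.g. a cumulative distribution of inter-event times modeled as
# P(>t) = q_exp(-t / t0, q); t0 = 100 s and q = 1.3 are illustrative.
t0, q = 100.0, 1.3
print([round(q_exp(-t / t0, q), 3) for t in (0.0, 100.0, 1000.0)])
```

Fitting q (and the scale parameter) to observed inter-event time or magnitude distributions is the basic operation behind the NESM analyses of seismicity the review surveys.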
Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zöller, G.
2012-04-01
As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic description of all relevant physical processes in earthquake fault systems is likely not practical, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).
Rapid estimate of earthquake source duration: application to tsunami warning.
NASA Astrophysics Data System (ADS)
Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier
2016-04-01
We present a method for estimating the source duration of fault rupture, based on the high-frequency envelope of teleseismic P waves, inspired by the original work of Ni et al. (2005). The main interest of knowing this seismic parameter is to detect abnormally low rupture velocities, the characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). The source durations estimated by this method are validated against two independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process, which determines the source time function (Vallée et al., 2011). The estimated source duration is also compared with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (named PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the point of view of operational tsunami warning, numerical simulations of tsunami depend deeply on the source estimate: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical tsunami simulations, because the kinematics of the source is presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake occurring in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions will decrease while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake with poor tsunami excitation will be characterized by a higher rigidity modulus and will produce weaker displacement and smaller source dimensions than a 'normal' earthquake.
References: Clément, J. and Reymond, D. (2014). New tsunami forecast tools for the French Polynesia tsunami warning system. Pure Appl. Geophys., 171. Duputel, Z., Rivera, L., Kanamori, H. and Hayes, G. (2012). W phase source inversion for moderate to large earthquakes. Geophys. J. Int., 189, 1125-1147. Kanamori, H. (1972). Mechanism of tsunami earthquakes. Phys. Earth Planet. Inter., 6, 246-259. Kanamori, H. and Rivera, L. (2008). Source inversion of W phase: speeding up seismic tsunami warning. Geophys. J. Int., 175, 222-238. Newman, A. and Okal, E. (1998). Teleseismic estimates of radiated seismic energy: the E/M0 discriminant for tsunami earthquakes. J. Geophys. Res., 103, 26885-26898. Ni, S., Kanamori, H. and Helmberger, D. (2005). Energy radiation from the Sumatra earthquake. Nature, 434, 582. Okal, E.A. and Hébert, H. (2007). Far-field modeling of the 1946 Aleutian tsunami. Geophys. J. Int., 169, 1229-1238. Vallée, M., Charléty, J., Ferreira, A.M.G., Delouis, B. and Vergoz, J. (2011). SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body wave deconvolution. Geophys. J. Int., 184, 338-358.
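The slowness discriminant of Newman and Okal (1998) referred to above is simple to state: Θ = log10(E/M0), where E is the radiated seismic energy and M0 the seismic moment. A minimal sketch; the example values and the thresholds quoted in the comments are illustrative orders of magnitude, not the operational PDFM2 criteria.

```python
import math

def theta(energy_j, m0_nm):
    """Newman-Okal (1998) slowness discriminant Theta = log10(E/M0),
    with radiated energy E in J and seismic moment M0 in N*m."""
    return math.log10(energy_j / m0_nm)

# Ordinary earthquakes cluster near Theta ~ -4.9; slow "tsunami
# earthquakes" are deficient by roughly one unit. Values illustrative.
print(theta(1.0e15, 1.0e20))  # -5.0: ordinary
print(theta(1.0e14, 1.0e20))  # -6.0: slow, tsunami-earthquake candidate
```

A long source duration for a given moment implies a low Θ, which is why the duration estimate and the discriminant serve as mutually consistent checks in the warning process.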
Frequency-Dependent Tidal Triggering of Low Frequency Earthquakes Near Parkfield, California
NASA Astrophysics Data System (ADS)
Xue, L.; Burgmann, R.; Shelly, D. R.
2017-12-01
The effect of small periodic stress perturbations on earthquake generation is not clear; however, the rate of low-frequency earthquakes (LFEs) near Parkfield, California has been found to be strongly correlated with solid Earth tides. Laboratory experiments and theoretical analyses show that the period of the imposed forcing and the source properties affect the sensitivity to triggering and the phase relation between the peak seismicity rate and the periodic stress, but frequency-dependent triggering has not been quantitatively explored in the field. Tidal forcing acts over a wide range of frequencies, so the sensitivity of LFEs to tidal triggering provides a good probe of the physical mechanisms affecting earthquake generation. In this study, we consider the tidal triggering of LFEs near Parkfield, California since 2001. We find the LFE rate is correlated with tidal shear stress, normal stress rate and shear stress rate. The occurrence of LFEs can also be independently modulated by groups of tidal constituents at semi-diurnal, diurnal and fortnightly frequencies. The strength of the response of LFEs to the different tidal constituents varies between LFE families. Each LFE family has an optimal triggering frequency, which does not appear to be depth dependent or systematically related to other known properties. This suggests the period of the applied forcing plays an important role in the triggering process, and that the interaction of the loading history and source region properties, such as friction, effective normal stress and pore fluid pressure, produces the observed frequency-dependent tidal triggering of LFEs.
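A standard way to quantify the kind of tidal modulation reported above is the Schuster test on the tidal phases assigned to event times. A minimal sketch with synthetic phase data; the phase sets below are illustrative, not the study's catalog.

```python
import math

def schuster_p(phases_rad):
    """Schuster test p-value for periodic modulation of event times.

    Each event is assigned a tidal phase; under the null hypothesis of
    no modulation the vector sum of unit phasors performs a random walk,
    and p = exp(-D^2 / N). A small p indicates significant triggering.
    """
    n = len(phases_rad)
    c = sum(math.cos(p) for p in phases_rad)
    s = sum(math.sin(p) for p in phases_rad)
    return math.exp(-(c * c + s * s) / n)

# Illustrative: phases bunched within ~2 rad give a tiny p-value,
# while uniformly spread phases are consistent with the null.
clustered = [0.1 * i for i in range(20)]
uniform = [2 * math.pi * i / 20 for i in range(20)]
print(schuster_p(clustered) < 0.01, schuster_p(uniform) > 0.5)
```

Running such a test constituent by constituent (semi-diurnal, diurnal, fortnightly) is one way to separate the frequency-dependent responses of individual LFE families.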
Microearthquake sequences along the Irpinia normal fault system in Southern Apennines, Italy
NASA Astrophysics Data System (ADS)
Orefice, Antonella; Festa, Gaetano; Alfredo Stabile, Tony; Vassallo, Maurizio; Zollo, Aldo
2013-04-01
Microearthquakes reflect a continuous readjustment of tectonic structures, such as faults, under the action of local and regional stress fields. Low-magnitude seismicity in the vicinity of active fault zones may reveal insights into the mechanics of fault systems during the inter-seismic period and shine a light on the role of fluids and other physical parameters in promoting or disfavoring the nucleation of larger events in the same area. Here we analyzed several earthquake sequences concentrated in very limited regions along the 1980 Irpinia earthquake fault zone (Southern Italy), a complex system characterized by a normal stress regime, monitored by the dense, multi-component, high-dynamic-range seismic network ISNet (Irpinia Seismic Network). For a single sequence, the May 2008 Laviano swarm, we performed accurate absolute and relative locations and estimated source parameters and scaling laws that were compared with standard stress drops computed for the area. Additionally, from EGF deconvolution, we computed a slip model for the mainshock and investigated the space-time evolution of the events in the sequence to reveal possible interactions among earthquakes. Through massive cross-correlation analysis based on master-event scanning of the continuous recordings, we also reconstructed the catalog of repeated earthquakes and recognized several co-located sequences. For these events, we analyzed the statistical properties, locations and source parameters and their space-time evolution, with the aim of inferring the processes that control the occurrence and size of microearthquakes in a swarm.
The August 2011 Virginia and Colorado Earthquake Sequences: Does Stress Drop Depend on Strain Rate?
NASA Astrophysics Data System (ADS)
Abercrombie, R. E.; Viegas, G.
2011-12-01
Our preliminary analysis of the August 2011 Virginia earthquake sequence finds the earthquakes to have high stress drops, similar to those of recent earthquakes in the northeastern USA, while those of the August 2011 Trinidad, Colorado, earthquakes are moderate, in between values typical of interplate California and those of the east coast. These earthquakes provide an unprecedented opportunity to study such source differences in detail, and hence improve our estimates of seismic hazard. Previously, the lack of well-recorded earthquakes in the eastern USA severely limited our resolution of the source processes and hence the expected ground accelerations. Our preliminary findings are consistent with the idea that earthquake faults strengthen during longer recurrence times, and that intraplate faults fail at higher stress (and produce higher ground accelerations) than their interplate counterparts. We use the empirical Green's function (EGF) method to calculate source parameters for the Virginia mainshock and three larger aftershocks, and for the Trinidad mainshock and two larger foreshocks, using IRIS-available stations. We select time windows around the direct P and S waves at the closest stations and calculate spectral ratios and source time functions using the multi-taper spectral approach (e.g., Viegas et al., JGR 2010). Our preliminary results show that the Virginia sequence has high stress drops (~100-200 MPa, using the Madariaga (1976) model) and the Colorado sequence has moderate stress drops (~20 MPa). These numbers are consistent with previous work in the regions, for example on the 2002 Au Sable Forks earthquake and the 2010 Germantown (MD) earthquake. We also calculate the radiated seismic energy and find the energy/moment ratio to be high for the Virginia earthquakes and moderate for the Colorado sequence. We observe no evidence of a breakdown in constant stress-drop scaling in this limited number of earthquakes. We are extending our analysis to a larger number of earthquakes and stations.
We calculate uncertainties in all our measurements, and also consider carefully the effects of variation in available bandwidth in order to improve our constraints on the source parameters.
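The EGF spectral-ratio measurement described in this abstract boils down to fitting the ratio of two omega-square (Brune-type) spectra. A minimal grid-search sketch with synthetic data; the corner frequencies and moment ratio are illustrative, and the study's multi-taper workflow with uncertainty estimates is more elaborate.

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Spectral ratio of a mainshock (corner fc1) to its EGF (corner fc2),
    both modeled as omega-square sources: ratio flattens to the moment
    ratio at low frequency and to a smaller plateau above both corners."""
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

def fit_corner(f, ratio_obs, moment_ratio, fc2, fc1_grid):
    """Grid-search the mainshock corner frequency fc1 by least squares
    in log amplitude (a minimal sketch of the fitting step)."""
    misfits = [np.sum((np.log(brune_ratio(f, moment_ratio, fc1, fc2))
                       - np.log(ratio_obs)) ** 2) for fc1 in fc1_grid]
    return fc1_grid[int(np.argmin(misfits))]

# Synthetic check: recover a known corner frequency (values illustrative)
f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~31.6 Hz
obs = brune_ratio(f, 1e4, fc1=1.5, fc2=20.0)  # "observed" ratio
grid = np.linspace(0.5, 5.0, 91)
print(fit_corner(f, obs, 1e4, 20.0, grid))    # ~1.5
```

The recovered corner frequency then feeds a stress-drop estimate, which is where the high (Virginia) versus moderate (Colorado) values in the abstract come from.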
Bohnhoff, Marco; Dresen, Georg; Ellsworth, William L.; Ito, Hisao; Cloetingh, Sierd; Negendank, Jörg
2010-01-01
An important discovery in crustal mechanics has been that the Earth’s crust is commonly stressed close to failure, even in tectonically quiet areas. As a result, small natural or man-made perturbations to the local stress field may trigger earthquakes. To understand these processes, Passive Seismic Monitoring (PSM) with seismometer arrays is a widely used technique that has been successfully applied to study seismicity at different magnitude levels ranging from acoustic emissions generated in the laboratory under controlled conditions, to seismicity induced by hydraulic stimulations in geological reservoirs, and up to great earthquakes occurring along plate boundaries. In all these environments the appropriate deployment of seismic sensors, i.e., directly on the rock sample, at the earth’s surface or in boreholes close to the seismic sources allows for the detection and location of brittle failure processes at sufficiently low magnitude-detection threshold and with adequate spatial resolution for further analysis. One principal aim is to develop an improved understanding of the physical processes occurring at the seismic source and their relationship to the host geologic environment. In this paper we review selected case studies and future directions of PSM efforts across a wide range of scales and environments. These include induced failure within small rock samples, hydrocarbon reservoirs, and natural seismicity at convergent and transform plate boundaries. Each example represents a milestone with regard to bridging the gap between laboratory-scale experiments under controlled boundary conditions and large-scale field studies. The common motivation for all studies is to refine the understanding of how earthquakes nucleate, how they proceed and how they interact in space and time. This is of special relevance at the larger end of the magnitude scale, i.e., for large devastating earthquakes due to their severe socio-economic impact.
Automated Determination of Magnitude and Source Length of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.
2017-12-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, because of the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time remains a challenge. Mw is an accurate measure for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely used at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude from P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from the high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°- 85° from the array center.
This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus source duration time), depending on the epicentral distance. It may be a promising aid for disaster mitigation immediately after a damaging earthquake, especially for tsunami evacuation and emergency rescue.
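The Hara [2007]-style relation used above, magnitude from peak P-wave displacement and source duration, can be sketched in a few lines. The regression coefficients and station inputs below are illustrative placeholders of the kind obtained by calibration against a reference catalog, not the published values.

```python
import numpy as np

def hara_style_magnitude(p_disp_m, duration_s, distance_km,
                         a=0.79, b=0.83, c=0.69, d=6.47):
    """Magnitude from peak P-wave displacement, source duration, and
    epicentral distance, in the spirit of Hara [2007].

    The regression coefficients a-d are illustrative placeholders; a
    real implementation calibrates them against a reference catalog.
    """
    return (a * np.log10(p_disp_m)        # peak P displacement (m)
            + b * np.log10(duration_s)    # source duration (s)
            + c * np.log10(distance_km)   # epicentral distance (km)
            + d)

# A great-earthquake-like input: 1 mm peak P displacement, 100 s
# duration (e.g. from back-projection), observed at 6000 km.
m = hara_style_magnitude(1e-3, 100.0, 6000.0)
```

In the system described in the abstract, the duration term would come from the Hi-net back-projection rather than from waveform fitting, which is what removes the need for the most distant stations.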
Automated Determination of Magnitude and Source Extent of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, Dun
2017-04-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, because of the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes of the origin time remains a challenge. Mw is an accurate measure for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely used at institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach, originating from Hara [2007], that estimates magnitude from P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from the high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°- 85° from the array center.
This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus source duration time), depending on the epicentral distance. It may be a promising aid for disaster mitigation immediately after a damaging earthquake, especially for tsunami evacuation and emergency rescue.
Probabilistic Risk Analysis of Run-up and Inundation in Hawaii due to Distant Tsunamis
NASA Astrophysics Data System (ADS)
Gica, E.; Teng, M. H.; Liu, P. L.
2004-12-01
Risk assessment of natural hazards usually includes two aspects, namely, the probability of the natural hazard occurrence and the degree of damage caused by the natural hazard. Our current study is focused on the first aspect, i.e., the development and evaluation of a methodology that can predict the probability of coastal inundation due to distant tsunamis in the Pacific Basin. The calculation of the probability of tsunami inundation would be a simple statistical problem if a sufficiently long record of field data on inundation were available. Unfortunately, such field data are very limited in the Pacific Basin because field measurement of inundation requires the physical presence of surveyors on site; in some areas, no field measurements were ever conducted in the past. Fortunately, there are more complete and reliable historical data on earthquakes in the Pacific Basin, partly because earthquakes can be measured remotely. There are also numerical simulation models, such as the Cornell COMCOT model, that can predict tsunami generation by an earthquake, propagation in the open ocean, and inundation onto a coastal land. Our objective is to develop a methodology that can link the probability of earthquakes in the Pacific Basin with the inundation probability in a coastal area. The probabilistic methodology applied here involves the following steps: first, the Pacific Rim is divided into blocks of potential earthquake sources based on the past earthquake record and fault information. Then the COMCOT model is used to predict the inundation at a distant coastal area due to a tsunami generated by an earthquake of a particular magnitude in each source block. This simulation generates a response relationship between the coastal inundation and an earthquake of a particular magnitude and location.
Since the earthquake statistics are known for each block, by summing the probabilities of all earthquakes in the Pacific Rim, the probability of inundation in a coastal area can be determined through the response relationship. Although the idea of the statistical methodology applied here is not new, this study is the first to apply it to the probability of inundation caused by earthquake-generated distant tsunamis in the Pacific Basin. As a case study, the methodology is applied to predict the tsunami inundation risk in Hilo Bay, Hawaii. Since relatively more field data on tsunami inundation are available for Hilo Bay, this case study can help to evaluate the applicability of the methodology for predicting tsunami inundation risk in the Pacific Basin. Detailed results will be presented at the AGU meeting.
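The block-summation step described above can be sketched as follows. All rates and response heights are invented for illustration; a real application would tabulate COMCOT-simulated inundation for each source block and magnitude bin, and fit rates to the regional catalog.

```python
import numpy as np

# Hypothetical response table: for each (source block, magnitude bin),
# the simulated inundation height (m) at the coastal site of interest.
response = {   # (block, magnitude) -> inundation height (m), invented
    ("block_A", 8.0): 0.5, ("block_A", 8.5): 1.8, ("block_A", 9.0): 4.2,
    ("block_B", 8.0): 0.2, ("block_B", 8.5): 0.9, ("block_B", 9.0): 2.5,
}

# Hypothetical annual occurrence rates per (block, magnitude bin),
# e.g. from a Gutenberg-Richter fit to each block's earthquake record.
rates = {
    ("block_A", 8.0): 1/50, ("block_A", 8.5): 1/200, ("block_A", 9.0): 1/1000,
    ("block_B", 8.0): 1/80, ("block_B", 8.5): 1/400, ("block_B", 9.0): 1/2000,
}

def annual_exceedance_prob(threshold_m):
    """Annual probability that inundation exceeds threshold_m: sum the
    rates of all source scenarios whose simulated response exceeds the
    threshold, then assume Poisson occurrence."""
    total_rate = sum(r for key, r in rates.items()
                     if response[key] > threshold_m)
    return 1.0 - np.exp(-total_rate)

p = annual_exceedance_prob(1.0)  # P(inundation > 1 m in a given year)
```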
NASA Astrophysics Data System (ADS)
Obana, Koichiro; Nakamura, Yasuyuki; Fujie, Gou; Kodaira, Shuichi; Kaiho, Yuka; Yamamoto, Yojiro; Miura, Seiichi
2018-03-01
In the northern part of the Japan Trench, the 1933 Showa-Sanriku earthquake (Mw 8.4), an outer-trench, normal-faulting earthquake, occurred 37 yr after the 1896 Meiji-Sanriku tsunami earthquake (Mw 8.0), a shallow, near-trench, plate-interface rupture. Tsunamis generated by both earthquakes caused severe damage along the Sanriku coast. Precise locations of earthquakes in the source areas of the 1896 and 1933 earthquakes have not previously been obtained because they occurred at considerable distances from the coast in deep water beyond the maximum operational depth of conventional ocean bottom seismographs (OBSs). In 2015, we incorporated OBSs designed for operation in deep water (ultradeep OBSs) in an OBS array during two months of seismic observations in the source areas of the 1896 and 1933 Sanriku earthquakes to investigate the relationship of seismicity there to outer-rise normal-faulting earthquakes and near-trench tsunami earthquakes. Our analysis showed that seismicity during our observation period occurred along three roughly linear trench-parallel trends in the outer-trench region. Seismic activity along these trends likely corresponds to aftershocks of the 1933 Showa-Sanriku earthquake and the Mw 7.4 normal-faulting earthquake that occurred 40 min after the 2011 Tohoku-Oki earthquake. Furthermore, changes of the clarity of reflections from the oceanic Moho on seismic reflection profiles and low-velocity anomalies within the oceanic mantle were observed near the linear trends of the seismicity. The focal mechanisms we determined indicate that an extensional stress regime extends to about 40 km depth, below which the stress regime is compressional. These observations suggest that rupture during the 1933 Showa-Sanriku earthquake did not extend to the base of the oceanic lithosphere and that compound rupture of multiple or segmented faults is a more plausible explanation for that earthquake. 
The source area of the 1896 Meiji-Sanriku tsunami earthquake is characterized by an aseismic region landward of the trench axis. Spatial heterogeneity of seismicity and crustal structure might indicate the near-trench faults that could lead to future hazardous events such as the 1896 and 1933 Sanriku earthquakes, and should be taken into account in assessment of tsunami hazards related to large near-trench earthquakes.
The Electronic Encyclopedia of Earthquakes
NASA Astrophysics Data System (ADS)
Benthien, M.; Marquis, J.; Jordan, T.
2003-12-01
The Electronic Encyclopedia of Earthquakes is a collaborative project of the Southern California Earthquake Center (SCEC), the Consortia of Universities for Research in Earthquake Engineering (CUREE) and the Incorporated Research Institutions for Seismology (IRIS). This digital library organizes earthquake information online as a partner with the NSF-funded National Science, Technology, Engineering and Mathematics (STEM) Digital Library (NSDL) and the Digital Library for Earth System Education (DLESE). When complete, information and resources for over 500 Earth science and engineering topics will be included, with connections to curricular materials useful for teaching Earth Science, engineering, physics and mathematics. Although conceived primarily as an educational resource, the Encyclopedia is also a valuable portal to anyone seeking up-to-date earthquake information and authoritative technical sources. "E3" is a unique collaboration among earthquake scientists and engineers to articulate and document a common knowledge base with a shared terminology and conceptual framework. It is a platform for cross-training scientists and engineers in these complementary fields and will provide a basis for sustained communication and resource-building between major education and outreach activities. For example, the E3 collaborating organizations have leadership roles in the two largest earthquake engineering and earth science projects ever sponsored by NSF: the George E. Brown Network for Earthquake Engineering Simulation (CUREE) and the EarthScope Project (IRIS and SCEC). The E3 vocabulary and definitions are also being connected to a formal ontology under development by the SCEC/ITR project for knowledge management within the SCEC Collaboratory. The E3 development system is now fully operational, 165 entries are in the pipeline, and the development teams are capable of producing 20 new, fully reviewed encyclopedia entries each month. 
Over the next two years teams will complete 450 entries, which will populate the E3 collection to a level that fully spans earthquake science and engineering. Scientists, engineers, and educators who have suggestions for content to be included in the Encyclopedia can visit www.earthquake.info now to complete the "Suggest a Web Page" form.
Generalized interferometry - I: theory for interstation correlations
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian
2017-02-01
We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, and to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, that links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, that should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. 
Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
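A minimal numerical illustration of a processed interstation correlation, with one-bit normalization as the nonlinear processing step discussed above, can be written as follows; the two station records are synthetic, sharing a common source wavefield with a known interstation delay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ambient noise" at two stations: a common source signal
# arriving 0.8 s later at station 2, plus incoherent local noise.
fs = 50.0                        # sampling rate (Hz)
n = 4096
lag_true = int(0.8 * fs)         # 40 samples
source = rng.standard_normal(n)
u1 = source + 0.5 * rng.standard_normal(n)
u2 = np.roll(source, lag_true) + 0.5 * rng.standard_normal(n)

# One-bit normalization: the nonlinear processing that, in the paper's
# terms, turns actual sources into effective sources.
b1, b2 = np.sign(u1), np.sign(u2)

# Interstation correlation computed in the frequency domain.
C = np.fft.irfft(np.fft.rfft(b1).conj() * np.fft.rfft(b2))
lag_est = int(np.argmax(C))      # recovers the interstation delay
```

Even after the drastic one-bit processing, the correlation peak sits at the true delay; the paper's point is that the correct interpretation of such a correlation is through the effective forward model, not through Green function retrieval.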
Rapid tsunami models and earthquake source parameters: Far-field and local applications
Geist, E.L.
2005-01-01
Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the pseudo-dynamic source modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
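A minimal sketch of a source realization in this 1-point/2-point framework: the 2-point statistics enter through an assumed exponential autocorrelation of slip along strike, and the 1-point statistics through a lognormal marginal distribution. All numerical values (correlation length, mean slip, coefficient of variation) are assumptions for illustration, not values from Song et al. (2014).

```python
import numpy as np

rng = np.random.default_rng(1)

# Fault discretized into subfaults along strike.
nx = 200
dx = 0.5                               # subfault spacing (km)
x = np.arange(nx) * dx

# 2-point statistics: exponential autocorrelation with an assumed
# correlation length a_c.
a_c = 10.0                             # correlation length (km), assumed
C = np.exp(-np.abs(x[:, None] - x[None, :]) / a_c)
L = np.linalg.cholesky(C + 1e-10 * np.eye(nx))  # jitter for stability

# Correlated standard-normal field, then a lognormal transform that
# imposes the 1-point statistics (assumed mean slip and CoV).
z = L @ rng.standard_normal(nx)
mean_slip, cov = 2.0, 0.5              # target mean (m) and CoV, assumed
sigma = np.sqrt(np.log(1 + cov**2))
mu = np.log(mean_slip) - 0.5 * sigma**2
slip = np.exp(mu + sigma * z)          # one pseudo-dynamic slip scenario
```

Drawing many such realizations is what lets the variability of rupture scenarios propagate into the variability of simulated near-source ground motions.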
Snelson, Catherine M.; Abbott, Robert E.; Broome, Scott T.; ...
2013-07-02
A series of chemical explosions, called the Source Physics Experiments (SPE), is being conducted under the auspices of the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to develop a new, more physics-based paradigm for nuclear test monitoring. Currently, monitoring relies on semi-empirical models to discriminate explosions from earthquakes and to estimate key parameters such as yield. While these models have been highly successful in monitoring established test sites, there is concern that future tests could occur in media and at scaled depths of burial outside of our empirical experience. This is highlighted by the North Korean tests, which exhibit poor performance of a reliable discriminant, mb:Ms (Selby et al., 2012), possibly due to source emplacement and differences in seismic responses for nascent and established test sites. The goal of SPE is to replace these semi-empirical relationships with numerical techniques grounded in a physical basis and thus applicable to any geologic setting or depth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Paul A.
Nonlinear dynamics induced by seismic sources and seismic waves are common in the Earth. Observations range from seismic strong ground motion (the most damaging aspect of earthquakes) and intense near-source effects to distant nonlinear effects of the source that have important consequences. The distant effects include dynamic earthquake triggering, one of the most fascinating topics in seismology today, which may be driven by elastic nonlinearity. Dynamic earthquake triggering is the phenomenon whereby seismic waves generated by one earthquake trigger slip events on a nearby or distant fault. Dynamic triggering may take place at distances of thousands of kilometers from the triggering earthquake, and includes triggering of the entire spectrum of slip behaviors currently identified: triggered earthquakes and triggered slow, silent slip during which little seismic energy is radiated. It appears that the elasticity of the fault gouge, the granular material located between the fault blocks, is key to the triggering phenomenon.
Estimating small amplitude tremor sources
NASA Astrophysics Data System (ADS)
Katakami, S.; Ito, Y.; Ohta, K.
2017-12-01
Various types of slow earthquakes have recently been observed at both the updip and downdip edges of coseismic slip areas [Obara and Kato, 2016]. The frequent occurrence of slow earthquakes may help reveal the physics underlying megathrust events, for which they serve as useful analogs. Maeda and Obara [2009] estimated the spatiotemporal distribution of seismic energy radiation from low-frequency tremors, but applied their method only to tremors whose hypocenters had been determined with a multiple-station method. Recently, however, Katakami et al. (2016) identified many continuous tremors with amplitudes too small to be recorded at multiple stations. These small events should be important for revealing the whole slow-earthquake activity and for understanding strain conditions around the plate boundary in subduction zones. First, we apply a modified frequency scanning method (mFSM) at single stations to NIED Hi-net data in southwestern Japan to capture the whole tremor activity, including weak-signal tremors. Second, we developed a method to identify the tremor source area using the differences in apparent tremor energy between stations estimated by mFSM. We estimated the apparent tremor source energy after correcting for both site amplification and geometrical spreading. Finally, we locate the tremor source area where the difference in apparent tremor energy between each pair of sites is smallest. We checked the validity of this analysis using only tremors that had already been detected by the envelope correlation method [Idehara et al., 2014], calculating the average amplitude as the apparent tremor energy in a 5-minute window after tremor onset at each station. Our results are largely consistent with the hypocenters determined by the envelope correlation method. We successfully determined the apparent source areas of weak continuous tremors after estimating possible tremor occurrence time windows with mFSM.
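The location step, choosing the source position that minimizes the inter-station disagreement of site- and spreading-corrected amplitudes, can be sketched as a grid search. The station geometry, site factors, and the 1/r body-wave spreading assumption below are all invented for illustration.

```python
import numpy as np

# Station coordinates (km) and assumed site amplification factors.
stations = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.0]])
site = np.array([1.0, 1.3, 0.8])

# Synthesize "observed" amplitudes from a true source at (12, 9),
# assuming 1/r geometrical spreading of amplitude.
true_src = np.array([12.0, 9.0])
r_true = np.linalg.norm(stations - true_src, axis=1)
obs = site * 100.0 / r_true

# Grid search: the preferred source position is where the site- and
# spreading-corrected source amplitudes agree best across stations.
best, best_misfit = None, np.inf
for gx in np.arange(0.0, 31.0, 0.5):
    for gy in np.arange(0.0, 31.0, 0.5):
        r = np.linalg.norm(stations - np.array([gx, gy]), axis=1)
        r = np.maximum(r, 0.1)            # guard against a zero distance
        src_amp = obs / site * r          # corrected back to the source
        misfit = np.std(np.log(src_amp))  # inter-station disagreement
        if misfit < best_misfit:
            best, best_misfit = (gx, gy), misfit
```

With three or more stations and a perfect spreading model the misfit vanishes at the true source, which is why single-station detections can still be located once relative energies are compared across sites.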
NASA Astrophysics Data System (ADS)
Meng, L.; Zhang, A.; Yagi, Y.
2015-12-01
The 2015 Mw 7.8 Nepal-Gorkha earthquake, which caused over 9,000 casualties, is the most devastating disaster to strike Nepal since the 1934 Nepal-Bihar earthquake. Its rupture process is well imaged by teleseismic MUSIC back-projections (BP). Here, we perform independent back-projections of high-frequency recordings (0.5-2 Hz) from the Australian seismic network (AU), the North American network (NA), and the European seismic network (EU), located in complementary orientations. Our results from all three arrays show a unilateral, linear rupture path to the east of the hypocenter, but the propagation directions and the inferred rupture speeds differ significantly among the arrays. To understand the spatial uncertainties of the BP analysis, we image four moderate-size (M5-6) aftershocks based on timing corrections derived from the alignment of the initial P wave of the mainshock. We find that the apparent source locations inferred from BP are systematically biased along the source-array orientation, which can be explained by the deviation of the 3D velocity structure from the 1D reference model (e.g., IASP91). We introduce a slowness error term in travel time as a first-order calibration that successfully mitigates the source location discrepancies among the arrays. The calibrated BP results of the three arrays are mutually consistent and reveal a unilateral rupture propagating eastward at a speed of 2.7 km/s along the down-dip edge of the locked Himalayan thrust zone over ~150 km, in agreement with the narrow slip distribution inferred from finite source inversions.
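A toy 1-D illustration of the slowness-calibration idea: back-projecting with a slightly wrong reference slowness biases the recovered source position along the source-array direction, and correcting the slowness removes the bias. The geometry, slowness values, and waveforms below are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 20.0                                 # sampling rate (Hz)
n = int(300 * fs)                         # 300 s records

# Toy geometry: an 8-station array far from a 1-D source line; a
# constant apparent slowness maps distance to travel time.
sta_x = np.linspace(3000.0, 3200.0, 8)    # station positions (km)
slowness_true = 0.08                      # s/km, "true" medium
src_x = 150.0                             # true source position (km)

def record(arrival_s):
    """Noisy trace with a Hann pulse at the given arrival time."""
    w = 0.05 * rng.standard_normal(n)
    i = int(arrival_s * fs)
    w[i:i + 20] += np.hanning(20)
    return w

waves = [record(slowness_true * (x - src_x)) for x in sta_x]

grid = np.arange(0.0, 400.0, 5.0)         # candidate source positions

def bp_locate(s_ref):
    """Back-project: stack each trace at its predicted pulse center."""
    power = []
    for g in grid:
        p = 0.0
        for x, w in zip(sta_x, waves):
            a = int(s_ref * (x - g) * fs) + 10  # predicted pulse center
            p += w[a]
        power.append(p)
    return grid[int(np.argmax(power))]

biased = bp_locate(0.082)        # 2.5% slowness error: biased location
calibrated = bp_locate(slowness_true)   # calibrated: true source
```

The ~2.5% slowness error shifts the apparent source by tens of kilometers toward the array, mimicking the along-orientation bias the authors correct with their slowness error term.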
Detailed source process of the 2007 Tocopilla earthquake.
NASA Astrophysics Data System (ADS)
Peyrat, S.; Madariaga, R.; Campos, J.; Asch, G.; Favreau, P.; Bernard, P.; Vilotte, J.
2008-05-01
We investigated the detailed rupture process of the Tocopilla earthquake (Mw 7.7) of 14 November 2007, and of the main aftershocks that occurred in the southern part of the North Chile seismic gap, using strong-motion data. The earthquake happened in the middle of the permanent broadband and strong-motion network IPOC, newly installed by GFZ and IPGP, and of a digital strong-motion network operated by the University of Chile. The Tocopilla earthquake is the most recent large thrust subduction earthquake in this region since the major 1877 Iquique earthquake, which produced a destructive tsunami. The Arequipa (2001) and Antofagasta (1995) earthquakes had already ruptured the northern and southern parts of the gap, and the intraplate, intermediate-depth Tarapaca earthquake (2005) may have changed the tectonic loading of this part of the Peru-Chile subduction zone. For large earthquakes, the depth of the seismic rupture is bounded by the depth of the seismogenic zone. What controls the horizontal extent of the rupture for large earthquakes is less clear. Factors that influence the extent of the rupture include fault geometry, variations in material properties, and stress heterogeneities inherited from the previous rupture history. For subduction zones where structures are not well known, what may have stopped the rupture is not obvious. One crucial problem raised by the Tocopilla earthquake is to understand why it did not extend further north and, to the south, what role the Mejillones Peninsula plays in apparently acting as a barrier. The focal mechanism was determined using teleseismic waveform inversion and a geodetic analysis (cf. Campos et al.; Bejarpi et al., in the same session). We studied the detailed source process using the available strong-motion data. This earthquake ruptured the interplate seismic zone over more than 150 km and generated several large aftershocks, mainly located south of the rupture area.
The strong-motion data clearly show two S-wave arrivals, allowing localization of the two sources. The main shock started in the north of the segment, close to Tocopilla, and the rupture propagated southward. The second source was identified as starting about 20 seconds later, located 50 km south of the hypocenter. The network configuration provides good resolution of the inverted slip distribution in the north-south direction, but lower resolution of the east-west extent of the slip. Nevertheless, this study of the source process shows a complex source with at least two slip asperities of different dynamical behavior.
Amending and complicating Chile’s seismic catalog with the Santiago earthquake of 7 August 1580
NASA Astrophysics Data System (ADS)
Cisternas, Marco; Torrejón, Fernando; Gorigoitia, Nicolás
2012-02-01
Historical earthquakes of Chile's metropolitan region include a previously uncatalogued earthquake that occurred on 7 August 1580 in the Julian calendar. We found an authoritative account of this earthquake in a letter written four days later in Santiago and now archived in Spain. The letter tells of a destructive earthquake that struck Santiago and its environs. In its reported effects it surpassed the one in the same city in 1575, until now presumed to be the only earthquake in the first century of central Chile's written history. It is not yet possible to identify the source of the 1580 earthquake, but viable candidates include both the plate boundary and Andean faults at shallow depths around Santiago. By occurring just five years after another large earthquake, the 1580 earthquake casts doubt on the completeness of the region's historical earthquake catalog and on the periodicity of its large earthquakes. That catalog, based on eyewitness accounts compiled mainly by Alexander Perrey and Fernand Montessus de Ballore, tells of large earthquakes in Chile's metropolitan region in 1575, 1647, 1730, 1822, 1906, and 1985. The addition of a large earthquake in 1580 implies greater variability in recurrence intervals and may also mean greater variety in earthquake sources.
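The effect on recurrence statistics is easy to quantify from the dates given above: inserting the 1580 event into the catalogued sequence sharply increases the coefficient of variation of inter-event times.

```python
import numpy as np

catalog = [1575, 1647, 1730, 1822, 1906, 1985]   # catalogued large events
amended = sorted(catalog + [1580])               # with the 1580 earthquake

def cov(years):
    """Coefficient of variation of inter-event times (std / mean)."""
    intervals = np.diff(years)
    return intervals.std() / intervals.mean()

cov_catalog = cov(catalog)   # quasi-periodic: small CoV
cov_amended = cov(amended)   # the 5-yr 1575-1580 gap inflates the CoV
```

The catalogued sequence looks quasi-periodic (intervals of 72 to 92 years), while the amended sequence contains a 5-year interval, which is the quantitative sense in which the 1580 event "implies greater variability in recurrence intervals."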
NASA Astrophysics Data System (ADS)
Vater, Stefan; Behrens, Jörn
2017-04-01
Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.
Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes
NASA Astrophysics Data System (ADS)
Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.
2015-12-01
Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Typically, because stress drop increases with depth, ground motion prediction equations model shallower events as having smaller ground motions at the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths than natural eastern US earthquakes, and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes be understood simply by scaling known source-ground motion relations to account for the shallow depth or potentially smaller stress drops of these events, or is there an inherently different mechanism in play? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in both the NGA-West2 database (primarily from California) and NGA-East, which covers the central and eastern United States and Canada.
Preliminary analysis indicates that the induced ground motions appear similar to those from the NGA-West2 database. However, once their shallower depths are taken into account, ground motion behavior from induced events seems to fall between the NGA-West2 data and the NGA-East data, so we explore the control of stress drop and depth on ground motion in more detail.
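The interplay of stress drop and magnitude described above can be sketched with the standard Brune source model. The moment-magnitude and corner-frequency relations below are textbook formulas (Hanks and Kanamori, 1979; Brune, 1970), not results from this study, and the shear-wave speed and stress-drop values are illustrative assumptions.

```python
def moment_from_mw(mw):
    """Seismic moment in N·m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def brune_corner_frequency(mw, stress_drop_pa, beta_m_s=3500.0):
    """Brune (1970) corner frequency in SI units:
    fc = 0.49 * beta * (dsigma / M0)**(1/3)."""
    return 0.49 * beta_m_s * (stress_drop_pa / moment_from_mw(mw)) ** (1.0 / 3.0)

# Lower stress drop -> lower corner frequency -> high-frequency ground
# motion (and hence PGA) relatively depleted at the same magnitude.
fc_low = brune_corner_frequency(4.0, 1e6)    # 1 MPa stress drop
fc_high = brune_corner_frequency(4.0, 10e6)  # 10 MPa stress drop
```

Under this model a factor-of-ten reduction in stress drop lowers the corner frequency by 10^(1/3), one mechanism by which shallow, low-stress-drop induced events could radiate weaker high-frequency shaking.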
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process, and that the dynamical source properties of small and large earthquakes obey self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. 
Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.
Mueller, Charles S.
1985-01-01
Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
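A minimal numerical sketch of the spectral division described above, stabilized with a water-level floor on the EGF spectrum, which is one common example of the "simple techniques" the abstract mentions. The water-level choice and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def egf_deconvolve(main, egf, water_level=0.01):
    """Estimate the apparent source-time function of a large event by
    spectral division with a small co-located event (empirical Green's
    function), stabilized with a water level on the EGF spectrum."""
    n = len(main)
    M = np.fft.rfft(main, n)
    G = np.fft.rfft(egf, n)
    # Clip the EGF spectral amplitude from below so near-zero values
    # do not blow up the quotient (water-level regularization).
    floor = water_level * np.abs(G).max()
    G_reg = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    quotient = M / G_reg  # apparent source-time function, in the frequency domain
    return np.fft.irfft(quotient, n)  # back to the time domain

# Synthetic check: if the 'large' event is the EGF convolved with a boxcar
# source pulse, the deconvolution should recover that boxcar pulse.
rng = np.random.default_rng(0)
egf = rng.standard_normal(256)
pulse = np.zeros(256)
pulse[:5] = 1.0
main = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(pulse), 256)
astf = egf_deconvolve(main, egf, water_level=1e-6)
```

In this noise-free synthetic the quotient is well posed and the boxcar is recovered almost exactly; with real data the water level trades resolution against stability.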
NASA Astrophysics Data System (ADS)
Chen, X.; Abercrombie, R. E.; Pennington, C.
2017-12-01
Recorded seismic waveforms include contributions from both earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the problem reduces to a trade-off between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies fall within a similar frequency range as the kappa decay, so direct spectral fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.
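The role of a source-spectrum constraint can be illustrated with the standard omega-squared source model combined with the Anderson and Hough (1984) kappa decay: once the corner frequency is fixed (e.g. by stacking or EGF analysis), kappa follows directly from the slope of the source-corrected log spectrum. All numbers below are synthetic, illustrative values, not data from this study.

```python
import numpy as np

def brune_site_spectrum(f, omega0, fc, kappa):
    """Omega-squared source spectrum with Anderson & Hough (1984) site decay:
    A(f) = omega0 * exp(-pi * kappa * f) / (1 + (f/fc)**2)."""
    return omega0 * np.exp(-np.pi * kappa * f) / (1.0 + (f / fc) ** 2)

f = np.linspace(1.0, 40.0, 200)
obs = brune_site_spectrum(f, omega0=1.0, fc=5.0, kappa=0.04)

# With the source spectrum constrained (fc known), kappa follows from the
# slope of the source-corrected log spectrum, since
# ln[A(f) * (1 + (f/fc)**2)] = ln(omega0) - pi * kappa * f.
corrected = np.log(obs * (1.0 + (f / 5.0) ** 2))
slope, intercept = np.polyfit(f, corrected, 1)
kappa_est = -slope / np.pi
```

Without the fc constraint, a lower fc and a lower kappa can produce nearly the same high-frequency falloff, which is the trade-off the stacking approach is designed to break.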
Earthquake and submarine landslide tsunamis: how can we tell the difference? (Invited)
NASA Astrophysics Data System (ADS)
Tappin, D. R.; Grilli, S. T.; Harris, J.; Geller, R. J.; Masterlark, T.; Kirby, J. T.; Ma, G.; Shi, F.
2013-12-01
Several major recent events have shown the tsunami hazard from submarine mass failures (SMF), i.e., submarine landslides. In 1992 a small, earthquake-triggered landslide generated a tsunami over 25 meters high on Flores Island. In 1998 another tsunami, up to 15 meters high and generated by a small earthquake-triggered sediment slump, devastated the local coast of Papua New Guinea, killing 2,200 people. It was this event that led to the recognition of the importance of marine geophysical data in mapping the architecture of seabed sediment failures, which could then be used in modeling and validating the tsunami-generating mechanism. Seabed mapping of the 2004 Indian Ocean earthquake rupture zone demonstrated, however, that large, if not great, earthquakes do not necessarily cause major seabed failures, but that along some convergent margins frequent earthquakes result in smaller sediment failures that are not tsunamigenic. Older events, such as Messina, 1908, Makran, 1945, Alaska, 1946, and Java, 2006, all have the characteristics of SMF tsunamis, but for these an SMF source has not been proven. When the 2011 tsunami struck Japan, it was generally assumed that it was directly generated by the earthquake. The earthquake had some unusual characteristics, such as a shallow rupture that was somewhat slow, but it was not a 'tsunami earthquake.' A number of simulations of the tsunami based on an earthquake source have been published, but in general the best results are obtained by adjusting fault rupture models to tsunami wave gauge or other data; to the extent that such models reproduce the recorded tsunami data, this demonstrates self-consistency rather than validation. Here we consider some of the existing source models of the 2011 Japan event and present new tsunami simulations based on a combination of an earthquake source and an SMF mapped from offshore data. 
We show that the multi-source tsunami agrees well with available tide gauge data and field observations and the wave data from offshore buoys, and that the SMF generated the large runups in the Sanriku region (northern Tohoku). Our new results for the 2011 Tohoku event suggest that care is required in using tsunami wave and tide gauge data to both model and validate earthquake tsunami sources. They also suggest a potential pitfall in the use of tsunami waveform inversion from tide gauges and buoys to estimate the size and spatial characteristics of earthquake rupture. If the tsunami source has a significant SMF component such studies may overestimate earthquake magnitude. Our seabed mapping identifies other large SMFs off Sanriku that have the potential to generate significant tsunamis and which should be considered in future analyses of the tsunami hazard in Japan. The identification of two major SMF-generated tsunamis (PNG and Tohoku), especially one associated with a M9 earthquake, is important in guiding future efforts at forecasting and mitigating the tsunami hazard from large megathrust plus SMF events both in Japan and globally.
Opportunities for Undergraduates to Engage in Research Using Seismic Data and Data Products
NASA Astrophysics Data System (ADS)
Taber, J. J.; Hubenthal, M.; Benoit, M. H.
2014-12-01
Introductory Earth science classes can become more interactive through the use of a range of seismic data and models that are available online, which students can use to conduct simple research regarding earthquakes and earth structure. One way to introduce students to these data sets is via a new set of six intro-level classroom activities designed to introduce undergraduates to some of the grand challenges in seismology research. The activities all use real data sets and some require students to collect their own data, either using physical models or via Web sites and Web applications. While the activities are designed to step students through a learning sequence, several of the activities are open-ended and can be expanded to research topics. For example, collecting and analyzing data from a deceptively simple physical model of earthquake behavior can lead students to query a map-based seismicity catalog via the IRIS Earthquake Browser to study seismicity rates and the distribution of earthquake magnitudes, and make predictions about the earthquake hazards in regions of their choosing. In another activity, students can pose their own questions and reach conclusions regarding the correlation between hydraulic fracturing, waste water disposal, and earthquakes. Other data sources are available for students to engage in self-directed research projects. For students with an interest in instrumentation, they can conduct research relating to instrument calibration and sensitivity using a simple educational seismometer. More advanced students can explore tomographic models of seismic velocity structure, and examine research questions related to earth structure, such as the correlation of topography to crustal thickness, and the fate of subducted slabs. The type of faulting in a region can be explored using a map-based catalog of focal mechanisms, allowing students to analyze the spatial distribution of normal, thrust and strike-slip events in a subduction zone region. 
For all of these topics and data sets, the societal impact of earthquakes can provide an additional motivation for students to engage in their research. www.iris.edu
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the drafting of seismic codes, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main strength of this approach lies in the fact that it is presently a standard for the majority of countries, but it has weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their predictive reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena that were unusual for a magnitude less than 6. We focus on sites that are liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. 
The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
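The Poissonian occurrence model that underlies PSHA, as noted above, yields closed-form exceedance probabilities. A minimal sketch follows; the 475-year/50-year example is the conventional design benchmark, not a figure from this study.

```python
import math

def exceedance_probability(annual_rate, years):
    """Probability of at least one exceedance in `years`, under the
    Poissonian occurrence model that underlies standard PSHA:
    P = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-annual_rate * years)

# The common design benchmark: a 475-year return-period ground motion has
# roughly a 10% chance of being exceeded in a 50-year exposure time.
p = exceedance_probability(1.0 / 475.0, 50.0)
```

This is exactly the kind of simplification the abstract questions: the formula carries no information about site conditions, strong-motion duration, or directivity.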
NASA Astrophysics Data System (ADS)
Major, J. R.; Liu, Z.; Harris, R. A.; Fisher, T. L.
2011-12-01
Using Dutch records of geophysical events in Indonesia over the past 400 years, and tsunami modeling, we identify tsunami sources that have caused severe devastation in the past and are likely to recur in the near future. The earthquake history of western Indonesia has received much attention since the 2004 Sumatra earthquakes and subsequent events. However, strain rates along a variety of plate boundary segments are just as high in eastern Indonesia, where the earthquake history has not been investigated. Due to the rapid population growth in this region, it is essential and urgent to evaluate its earthquake and tsunami hazards. Arthur Wichmann's 'Earthquakes of the Indian Archipelago' shows that there were 30 significant earthquakes and 29 tsunamis between 1629 and 1877. One of the largest and best documented is the great earthquake and tsunami affecting the Banda Islands on 1 August 1629. It caused severe damage from a 15 m tsunami that arrived at the Banda Islands about half an hour after the earthquake. The earthquake was also recorded 230 km away in Ambon, but no tsunami is mentioned there. This event was followed by at least 9 years of aftershocks. The combination of these observations indicates that the earthquake was most likely a mega-thrust event. We use a numerical simulation of the tsunami to locate the potential sources of the 1629 mega-thrust event and evaluate the tsunami hazard in eastern Indonesia. The numerical simulation was first tested to establish the tsunami run-up amplification factor for this region through simulations of the 1992 Flores Island (Hidayat et al., 1995) and 2006 Java (Kato et al., 2007) earthquake events. The results yield tsunami run-up amplification factors of 1.5 and 3, respectively. However, the Java earthquake is a unique case of slow rupture that was hardly felt. The fault parameters of recent earthquakes in the Banda region are used for the models. 
The modeling narrows the possible mega-thrust sources the size of the 1629 event to the Seram and Timor Troughs. For the Seram Trough source, a Mw 8.8 event produces run-up heights in the Banda Islands of 15.5 m with an arrival time of 17 minutes. For a Timor Trough earthquake near the Tanimbar Islands, a Mw 9.2 event is needed to produce a 15 m run-up height, with an arrival time of 25 minutes. The main problem with the Timor Trough source is that it predicts run-up heights in Ambon of 10 m, which would likely have been recorded. Therefore, we conclude that the most likely source of the 1629 mega-thrust earthquake is the Seram Trough. No large earthquakes have been reported along the Seram Trough for over 200 years, although high rates of strain are measured across it. This study suggests that earthquakes triggered along this fault zone could be extremely devastating to eastern Indonesia. We strive to raise awareness among local governments so that the natural hazard of this region is not underestimated, based on lessons learned from the 2004 Sumatra and 2011 Tohoku tsunamigenic mega-thrust earthquakes.
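Arrival times like those quoted above can be sanity-checked with the long-wave (shallow-water) speed sqrt(g*h). The distance and depth below are rough illustrative values for a deep-basin path, not figures from the study, and real simulations integrate the speed over the actual bathymetry.

```python
import math

def tsunami_travel_minutes(distance_km, depth_m):
    """Long-wave travel-time estimate over a single representative depth:
    c = sqrt(g * h). A crude back-of-the-envelope check only."""
    c = math.sqrt(9.81 * depth_m)  # phase speed in m/s
    return distance_km * 1000.0 / c / 60.0

# A source ~250 km away across ~5000 m of water arrives in roughly
# 19 minutes, the same order as the modeled 17-25 minute arrivals.
t = tsunami_travel_minutes(250.0, 5000.0)
```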
Toward Broadband Source Modeling for the Himalayan Collision Zone
NASA Astrophysics Data System (ADS)
Miyake, H.; Koketsu, K.; Kobayashi, H.; Sharma, B.; Mishra, O. P.; Yokoi, T.; Hayashida, T.; Bhattarai, M.; Sapkota, S. N.
2017-12-01
The Himalayan collision zone is characterized by a distinctive tectonic setting. There are earthquakes with low-angle thrust faulting as well as continental outer-rise earthquakes. Recently, several historical earthquakes have been identified by active fault surveys [e.g., Sapkota et al., 2013]. We here investigate source scaling for the Himalayan collision zone as a fundamental factor in constructing source models for seismic hazard assessment. Regarding source scaling for collision zones, Yen and Ma [2011] reported source scaling for the collision zone in Taiwan and pointed out non-self-similar scaling due to the finite crustal thickness. On the other hand, current global analyses of stress drop do not show anomalous values for continental collision zones [e.g., Allmann and Shearer, 2009]. Based on compiled profiles of crustal thickness and dip-angle variations, we discuss whether such bending exists in the Himalayan source scaling, and its implications for the stress drop that will control strong ground motions. Due to their quite low-angle dip faulting, recent earthquakes in the Himalayan collision zone lie at the upper bound of the current source scaling of rupture area vs. seismic moment (< Mw 8.0) and do not show significant bending of the source scaling. Toward broadband source modeling for ground motion prediction, we perform empirical Green's function simulations for the 2009 Bhutan and 2015 Gorkha earthquake sequence to quantify both long- and short-period source spectral levels.
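The rupture-area vs. seismic-moment scaling discussed above can be sketched with the constant-stress-drop circular-crack relations (Eshelby); the stress-drop value is an illustrative assumption, not one estimated in this study.

```python
import math

def moment_nm(mw):
    """Seismic moment (N·m) from Mw (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def self_similar_area_km2(mw, stress_drop_pa=3e6):
    """Rupture area implied by a constant-stress-drop circular crack:
    dsigma = (7/16) * M0 / r**3 (Eshelby), A = pi * r**2.
    Self-similarity means A grows as M0**(2/3)."""
    m0 = moment_nm(mw)
    r = (7.0 * m0 / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)  # radius, m
    return math.pi * r ** 2 / 1e6  # area in km^2

# A finite seismogenic thickness caps rupture width: beyond the magnitude
# where the implied radius exceeds that thickness, area can no longer
# follow the self-similar M0**(2/3) trend -- the 'bending' at issue.
a6 = self_similar_area_km2(6.0)
a8 = self_similar_area_km2(8.0)
```

Whether the Himalayan data bend away from this A proportional to M0^(2/3) line is exactly the question the abstract poses.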
NASA Astrophysics Data System (ADS)
Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.
2010-12-01
HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, we model the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal-fault intraslab events and scenario large-magnitude Seattle reverse-fault crustal events. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships, developed from source parameters of both regional and global historical earthquakes, to estimate strong ground motion. Ground motion and the resulting ground failure are then used to calculate direct physical damage for general building stock, essential facilities, and lifelines, including transportation and utility systems. Earthquake losses are expressed in structural, economic and social terms. 
Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region coordinators can most effectively utilize their resources for earthquake risk mitigation. This study is being conducted in collaboration with King County, WA officials to determine the best model inputs necessary to generate robust HAZUS-MH models for the Pacific Northwest.
Learning from physics-based earthquake simulators: a minimal approach
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2017-04-01
Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and a simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer aspects of the statistical behavior of earthquakes within the simulated region by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists toward ever more earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
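A minimal sketch of the Coulomb Failure Function interaction rule such a simulator might use. The sign convention and the clock-advance bookkeeping are standard, but the specific rule and all values below are illustrative assumptions, not the authors' implementation.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb Failure Function change on a receiver fault:
    dCFF = d_tau + mu' * d_sigma_n (d_sigma_n > 0 means unclamping).
    dCFF > 0 brings the receiver fault closer to failure."""
    return d_shear + mu_eff * d_normal

def clock_advance_years(d_cff_pa, stressing_rate_pa_per_yr):
    """One minimal interaction rule: after an event on fault i, advance
    (or delay) the failure 'clock' of fault j by dCFF divided by its
    tectonic stressing rate."""
    return d_cff_pa / stressing_rate_pa_per_yr

# 0.1 MPa of added shear stress, 0.05 MPa of clamping, on a fault
# loaded at 0.01 MPa/yr: its next failure moves ~8 years earlier.
dt = clock_advance_years(coulomb_stress_change(0.1e6, -0.05e6), 1e4)
```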
NASA Astrophysics Data System (ADS)
Meng, L.; Ampuero, J. P.; Rendon, H.
2010-12-01
Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. 
We also study the inherent bias due to interference of coherent Green's functions, which suggests a way to quantify the bias-related uncertainty of the back projection. Preliminary results from the Venezuela data set show an east-to-west rupture propagation along the fault with sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and by other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the as yet unbroken segments of the Enriquillo fault system.
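A self-contained sketch of the MUSIC estimator described above, for a uniform linear array and a single synthetic plane-wave source. The array geometry, noise level, and source direction are invented for illustration; the actual application works on seismic waveforms and the irregular VNSN geometry.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, spacing_wavelengths=0.5):
    """MUSIC pseudospectrum for a uniform linear array.
    snapshots: (n_sensors, n_snapshots) complex data matrix."""
    n_sensors = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    w, v = np.linalg.eigh(R)                  # eigenvalues in ascending order
    noise_sub = v[:, : n_sensors - n_sources]  # noise-subspace eigenvectors
    k = np.arange(n_sensors)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * spacing_wavelengths * k * np.sin(th))
        proj = noise_sub.conj().T @ a
        # Large where the steering vector is orthogonal to the noise subspace.
        spec.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(spec)

# Synthetic test: one plane wave arriving from 20 degrees plus weak noise.
rng = np.random.default_rng(1)
m, t, true_deg = 8, 200, 20.0
steer = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.deg2rad(true_deg)))
sig = rng.standard_normal(t) + 1j * rng.standard_normal(t)
noise = 0.01 * (rng.standard_normal((m, t)) + 1j * rng.standard_normal((m, t)))
data = np.outer(steer, sig) + noise
angles = np.arange(-90.0, 90.5, 0.5)
est_deg = angles[np.argmax(music_spectrum(data, 1, angles))]
```

The sharp null-projection peak, rather than a broad beam, is what gives MUSIC its resolution advantage over plain beamforming for compact sources.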
Precursory seismic quiescence: A preliminary assessment of the hypothesis
Reasenberg, P.A.; Matthews, M.V.
1988-01-01
Numerous cases of precursory seismic quiescence have been reported in recent years. Some investigators have interpreted these observations as evidence that seismic quiescence is a somewhat reliable precursor to moderate or large earthquakes. However, because failures of the pattern to predict earthquakes may not, in general, be reported, and because numerous earthquakes are not preceded by quiescence, the validity and reliability of the quiescence precursor have not been established. We have analyzed the seismicity rate prior to, and in the source region of, 37 shallow earthquakes (M 5.3-7.0) in central California and Japan for patterns of rate fluctuation, especially precursory quiescence. Nonuniformity in rate for these pre-mainshock sequences is relatively high, and numerous intervals with significant (p<0.10) extrema in rate are observed in some of the sequences. In other sequences, however, the rate remains within normal limits up to the time of the mainshock. Overall, in terms of an observational basis for intermediate-term earthquake prediction, no evidence is found in the cases studied for a systematic, widespread or reliable pattern of quiescence prior to the mainshocks. In earthquake sequences comprising full seismic cycles for 5 sets of (M 3.7-5.1) repeat earthquakes on the San Andreas fault near Bear Valley, California, the seismicity rates are found to be uniform. A composite of the estimated rate fluctuations for the sequences, normalized to the length of the seismic cycle, reveals a weak pattern of a low rate in the first third of the cycle, and a high rate in the last few months. While these observations are qualitative, they may represent weak expressions of physical processes occurring in the source region over the seismic cycle. 
Re-examination of seismicity rate fluctuations in volumes along the creeping section of the San Andreas fault specified by Wyss and Burford (1985) qualitatively confirms the existence of low-rate intervals in volumes 361, 386, 382, 372 and 401. However, only the quiescence in volume 386 is found by the present study to be statistically significant. © 1988 Birkhäuser Verlag.
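Significance tests of the kind applied above (flagging rate extrema at p < 0.10) can be sketched as a one-sided Poisson probability of observing so few events given the long-term background rate. The rates below are invented for illustration, not values from the study.

```python
import math

def quiescence_p_value(background_rate, window_length, observed):
    """One-sided Poisson probability of observing `observed` or fewer
    events in a window, given the long-term background rate. Small
    values flag a candidate quiescence interval."""
    lam = background_rate * window_length
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(observed + 1))

# Observing 2 events in a window where ~8 were expected gives p ~ 0.014,
# which would count as a significant low-rate extremum at p < 0.10.
p = quiescence_p_value(background_rate=0.8, window_length=10.0, observed=2)
```

As the abstract stresses, such a test says nothing by itself about reporting bias: quiet windows not followed by mainshocks are rarely tabulated.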
Using a pseudo-dynamic source inversion approach to improve earthquake source imaging
NASA Astrophysics Data System (ADS)
Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.
2014-12-01
Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher-quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half-space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) as a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the rupture time derived from it is smoother than the rupture time searched for directly. 
By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we effectively regularize the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.
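The cross-correlation constraint between kinematic parameters can be sketched as follows. This toy version imposes only a point-wise cross-correlation between two parameter fields via a Cholesky factor of the 2x2 correlation matrix; the actual prior also carries spatial auto-correlation structure, which is omitted here, and the correlation value is an illustrative assumption.

```python
import numpy as np

def correlated_fields(n, rho, rng):
    """Draw two zero-mean fields (standing in for slip and rupture-velocity
    perturbations) with a target cross-correlation rho, via the Cholesky
    factor of the 2x2 correlation matrix."""
    C = np.array([[1.0, rho], [rho, 1.0]])
    L = np.linalg.cholesky(C)
    z = rng.standard_normal((2, n))  # independent standard normals
    x = L @ z                        # correlated pair of fields
    return x[0], x[1]

rng = np.random.default_rng(2)
slip, vr = correlated_fields(100_000, 0.6, rng)
rho_hat = np.corrcoef(slip, vr)[0, 1]  # should be close to the target 0.6
```

Sampling the prior this way, instead of treating each parameter independently, is what lets the inversion prefer dynamically plausible models without an explicit smoothing penalty.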
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Baumann, C.; Cauzzi, C.
2013-12-01
Empirical ground motion prediction in the very near-field and for large magnitudes is often based on extrapolation of ground motion prediction equations (GMPEs) outside the range where they are well constrained by recorded data. With empirical GMPEs it is also difficult to capture source-dominated ground motion patterns, such as the effects of velocity pulses induced by subshear and supershear rupture directivity, buried versus surface-rupturing faults, hanging-wall and foot-wall sites, weak shallow layers, complex fault geometry and stress drop. One way to cope, at least in part, with these shortcomings is to augment the calibration datasets with synthetic ground motions. To this aim, physics-based dynamic rupture models, in which the physical processes involved in the fault rupture are explicitly considered, appear to be a suitable approach to produce synthetic ground motions. In this contribution, we first assess a database of synthetic ground motions generated by a suite of dynamic rupture simulations to verify the compatibility of the peak ground amplitudes with current GMPEs. The synthetic dataset is composed of 360 earthquake scenarios with moment magnitudes in the range 5.5-7, for three mechanisms of faulting (reverse, normal and strike-slip) and for both buried and surface-rupturing faults. Second, we parameterise the synthetic dataset through a GMPE. For this purpose, we identify the basic functional forms by analyzing the variation of the synthetic peak ground motions and spectral ordinates as a function of different explanatory variables related to the earthquake source characteristics, in order to account for some of the source effects listed above. We argue that this study provides basic guidelines for the development of future GMPEs that include data from physics-based numerical simulations.
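Once a functional form is chosen, parameterizing a dataset of peak ground motions reduces to linear least squares in the coefficients. The backbone ln(Y) = a + b*M + c*ln(R) and all coefficients and noise levels below are illustrative assumptions on a synthetic, invented dataset, not the study's GMPE.

```python
import numpy as np

# Synthetic 'dataset' standing in for simulated peak ground motions.
rng = np.random.default_rng(3)
mw = rng.uniform(5.5, 7.0, 500)       # magnitude range used in the study
r_km = rng.uniform(1.0, 100.0, 500)   # source-site distance, km
ln_pga = -2.0 + 1.1 * mw - 1.3 * np.log(r_km) + 0.1 * rng.standard_normal(500)

# Fit the coefficients of ln(Y) = a + b*M + c*ln(R) by ordinary least squares.
X = np.column_stack([np.ones_like(mw), mw, np.log(r_km)])
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
a, b, c = coef  # recovered magnitude scaling b and distance decay c
```

Extra explanatory variables (faulting style, buried vs. surface rupture, hanging-wall terms) enter the same way, as additional columns of X.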
Induced seismicity provides insight into why earthquake ruptures stop.
Galis, Martin; Ampuero, Jean Paul; Mai, P Martin; Cappa, Frédéric
2017-12-01
Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.
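The paper's central idea, a scaling between the largest self-arrested magnitude and the injected volume, can be sketched numerically. Below, a McGarr-style linear moment-volume bound is shown for comparison against a Galis-et-al-style power-law scaling for self-arrested ruptures; the coefficient `gamma` is a placeholder assumption, not the calibrated value from the paper.

```python
import math

def mw_from_m0(m0):
    """Moment magnitude from seismic moment (N*m), Hanks-Kanamori form."""
    return (math.log10(m0) - 9.1) / 1.5

def m0_mcgarr(dv, shear_modulus=3e10):
    """McGarr-style upper bound: seismic moment proportional to injected volume."""
    return shear_modulus * dv

def m0_arrested(dv, gamma=1.5e9):
    """Galis-et-al-style scaling for self-arrested ruptures: M0 ~ gamma * dV**1.5.
    gamma is an illustrative placeholder, not a published coefficient."""
    return gamma * dv**1.5

dv = 1.0e5  # injected volume in m^3 (hypothetical)
print(mw_from_m0(m0_mcgarr(dv)), mw_from_m0(m0_arrested(dv)))
```

The key qualitative feature survives even with placeholder constants: the self-arrested scaling grows faster with injected volume than the linear bound, so the two curves cross and distinguish regimes of arrested versus runaway behavior.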
Testing for the 'predictability' of dynamically triggered earthquakes in Geysers Geothermal Field
NASA Astrophysics Data System (ADS)
Aiken, Chastity; Meng, Xiaofeng; Hardebeck, Jeanne
2018-03-01
The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet, it is unknown whether dynamic triggering of earthquakes is 'predictable' or whether it could pose a hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote triggering, to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering at The Geysers is not easily 'predictable' in terms of when and where based on observable physical conditions. However, triggered earthquake magnitude correlates positively with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size of earthquake to expect at The Geysers following a large distant earthquake.
NASA Astrophysics Data System (ADS)
Pulinets, S. A.; Dunajecka, M. A.
2007-02-01
The recent development of the Lithosphere-Atmosphere-Ionosphere (LAI) coupling model and remote sensing satellite data on thermal anomalies before major earthquakes have demonstrated that radon emanations in the area of earthquake preparation can produce variations in air temperature and relative humidity. A specific repeating pattern of humidity and air temperature variations was revealed by analysis of meteorological data for several tens of strong earthquakes all over the world. The main physical process responsible for the observed variations is latent heat release due to water vapor condensation on ions produced by air ionization from energetic α-particles emitted by 222Rn. The high effectiveness of this process has been demonstrated in laboratory and field experiments; hence the specific variations of air humidity and temperature can be used as an indicator of radon variations before earthquakes. We analyzed historical meteorological data from all over Mexico around the time of one of the most destructive earthquakes to affect Mexico City, the Michoacan earthquake (M8.1) of September 19, 1985. Several distinct zones of specific variations in air temperature and relative humidity were revealed, which may indicate a different character of radon variations in different parts of Mexico before the Michoacan earthquake. The most interesting result on the specific variations of atmospheric parameters was obtained in the Baja California region, close to the border of the Cocos and Rivera tectonic plates. This result demonstrates the possibility of increased radon variations not only in the vicinity of the earthquake source but also at the border of interacting tectonic plates. Recent results on Thermal InfraRed (TIR) anomalies registered by Meteosat 5 before the Gujarat earthquake (M7.9) of 26 January 2001 support the idea of thermal effects at the border of interacting tectonic plates.
STSHV a teleinformatic system for historic seismology in Venezuela
NASA Astrophysics Data System (ADS)
Choy, J. E.; Palme, C.; Altez, R.; Aranguren, R.; Guada, C.; Silva, J.
2013-05-01
From 1997 on, when the first "Jornadas Venezolanas de Sismicidad Historica" took place, great interest arose in Venezuela in organizing the available information related to historic earthquakes. At that time only one published historic earthquake catalogue existed, that of Centeno Grau, first published in 1949. That catalogue had no references to its sources of information. Other catalogues existed, but they were internal reports for the petroleum companies and therefore difficult to access. In 2000 Grases et al. re-edited the Centeno-Grau catalogue, resulting in a new, very complete catalogue with all sources well referenced and updated. The next step in organizing historic seismicity data was the creation, from 2004 to 2008, of the STSHV (Sistema de teleinformacion de Sismologia Historica Venezolana, http://sismicidad.hacer.ula.ve ). The idea was to bring together all information about destructive historic earthquakes in Venezuela in one place on the internet so it could be accessed easily by a widespread public. There are two ways to access the system: first, by selecting an earthquake or a list of earthquakes, and second, by selecting an information source or a list of sources. For each earthquake there is a summary of general information and additional materials: a list of the source parameters published by different authors, a list of intensities assessed by different authors, a list of information sources, a short text summarizing the historic situation at the time of the earthquake, and a list of pictures if available. There are search facilities for the seismic events, and dynamic maps can be created. The information sources are classified as: books, handwritten documents, transcriptions of handwritten documents, documents published in books, journals and congress memoirs, newspapers, seismological catalogues, and electronic sources. There are facilities to find specific documents or lists of documents with common characteristics.
For each document, general information is displayed together with an extract of the information relating to the earthquake. If the complete document was available and there was no problem with publishers' rights, a PDF copy of the document was included. We found this system extremely useful for studying historic earthquakes, as one can immediately access previous research on an earthquake, and it allows one to easily check the historic information and thus validate the intensity data. So far, the intensity data have not been completed for earthquakes after 2000. This information would be important for improving intensity-magnitude calibrations of historic events, and is a work in progress. It is also worth mentioning that "El Catálogo Sismológico Venezolano del siglo XX" (The Venezuelan Seismological Catalog of the 20th Century), published in 2012, updates seismic information up to 2007, and that the STSHV was one of its primary sources of information.
Tsunami Source Estimate for the 1960 Chilean Earthquake from Near- and Far-Field Observations
NASA Astrophysics Data System (ADS)
Ho, T.; Satake, K.; Watada, S.; Fujii, Y.
2017-12-01
The tsunami source of the 1960 Chilean earthquake was estimated from near- and far-field tsunami data. The 1960 Chilean earthquake is the largest earthquake ever recorded instrumentally. It caused a large tsunami that was recorded by 13 near-field tide gauges in South America and by 84 far-field stations around the Pacific Ocean on the coasts of North America, Asia, and Oceania. The near-field stations had previously been used to estimate the tsunami source [Fujii and Satake, Pageoph, 2013]. However, the far-field tsunami waveforms had not been utilized because of the discrepancy between observed and simulated waveforms: the observed waveforms at the far-field stations arrive systematically later than the simulated ones. This phenomenon has also been observed in the tsunamis of the 2004 Sumatra earthquake, the 2010 Chilean earthquake, and the 2011 Tohoku earthquake. Recently, the factors responsible for the travel-time delay have been explained [Watada et al., JGR, 2014; Allgeyer and Cummins, GRL, 2014], so the far-field data are now usable for tsunami source estimation. The phase correction method [Watada et al., JGR, 2014] converts tsunami waveforms computed with the linear long-wave approximation into dispersive waveforms that account for the effects of the elasticity of the Earth and ocean, ocean density stratification, and the gravitational potential change associated with tsunami propagation. We apply this method to correct the computed waveforms. For the preliminary inversion of the initial sea surface height, we use 12 near-field stations and 63 far-field stations located in South and North America, islands in the Pacific Ocean, and Oceania. The tsunami source estimated from the near-field stations is compared with the result from both near- and far-field stations. The two estimated sources show a similar pattern: a large sea surface displacement concentrated south of the epicenter, close to the coast, and extending to the south. However, the source estimated from the near-field stations alone shows larger displacement than the one estimated from the combined dataset.
QuakeUp: An advanced tool for a network-based Earthquake Early Warning system
NASA Astrophysics Data System (ADS)
Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo
2017-04-01
Currently developed and operational regional Earthquake Early Warning systems rest on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate earthquake impact. Here we propose a new network-based method that allows an alert to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for refined estimation of the main source parameters (earthquake location and magnitude) and for accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently spun off from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanding P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances.
This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and predicted P-wave amplitudes on a dense spatial grid that includes the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Within times of the order of ten seconds from the earthquake origin, information about the area where moderate to strong ground shaking is expected can be sent to inner and outer sites, allowing the activation of emergency measures to protect people, secure industrial facilities and optimize site resilience after the disaster. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and for the spatial variability of strong ground motion related to crustal wave propagation and site amplification. In QuakeUp, the P-wave parameters are measured continuously, using progressively expanding P-wave time windows, providing evolutionary and reliable estimates of the ground shaking distribution, especially in the case of very large events. Furthermore, to minimize S-wave contamination of the P-wave signal portion, an efficient algorithm for automatic detection of the S-wave arrival time, based on real-time polarization analysis of the three-component seismogram, has been included. The final output of QuakeUp will be an automatic alert message transmitted to sites to be secured during the earthquake emergency. The message contains all relevant information about the expected potential damage at the site and the time available for security actions (lead time) after the warning. A global view of the system performance during and after the event (in play-back mode) is obtained through an end-user visual display, where the most relevant pieces of information are displayed and updated as soon as new data become available.
The QuakeUp software platform is essentially aimed at improving the reliability and accuracy of parameter estimation, minimizing the uncertainties in the real-time estimates without losing the essential speed and robustness needed to activate rapid emergency actions.
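A minimal sketch of the measurement chain described above: estimate the peak P-wave displacement P_d and the characteristic period τ_c on a P-wave window, then map P_d to a predicted PGV through a log-linear relation. The synthetic trace and the regression coefficients are illustrative assumptions, not QuakeUp's calibrated relations.

```python
import numpy as np

def p_wave_parameters(disp, dt):
    """Peak displacement Pd and characteristic period tau_c of a P-wave window,
    with tau_c = 2*pi*sqrt(integral(u^2)/integral(v^2)) (standard definition)."""
    vel = np.gradient(disp, dt)
    r = (vel**2).sum() / (disp**2).sum()  # dt cancels in the ratio
    return np.max(np.abs(disp)), 2.0 * np.pi / np.sqrt(r)

# Synthetic 3 s "P-wave" displacement pulse with a 1 s dominant period
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 0.02 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-t)  # metres, decaying sinusoid

pd, tau_c = p_wave_parameters(u, dt)

# Placeholder log-linear mapping from Pd to PGV; the coefficients are
# illustrative, not the calibrated values used in QuakeUp.
a, b = 0.9, 1.0
pgv_pred = 10 ** (a * np.log10(pd) + b)
print(pd, tau_c, pgv_pred)
```

For this trace, τ_c recovers the dominant period of roughly 1 s, which is the consistency check one would run before applying such estimators to real evolving P-wave windows.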
Kobayashi, Satoru; Endo, Wakaba; Inui, Takehiko; Wakusawa, Keisuke; Tanaka, Soichiro; Onuma, Akira; Haginoya, Kazuhiro
2016-08-01
Takuto Rehabilitation Center for Children is located in Sendai, the capital of Miyagi prefecture, facing the Pacific Ocean. The tsunami caused by the Great East Japan Earthquake resulted in tremendous damage to this region. Many physically handicapped patients with epilepsy treated at our hospital could not obtain medicine. We surveyed patients with epilepsy using a questionnaire to identify the problems during the acute phase of the Great East Japan Earthquake. After the earthquake, we mailed questionnaires to physically handicapped patients with epilepsy who are treated and prescribed medications at our hospital, or to their parents. A total of 161 respondents completed the questionnaire. Overall, 68.4% of patients had seven days or less of stockpiled medication when the earthquake initially struck, and 28.6% of patients had no medication or almost no medication during the acute phase after the earthquake. Six patients were forced to stop taking their medication and nine patients experienced a worsening of seizures. Most (93.6%) patients stated they require a stockpile of medication for more than seven days; 20 months after the earthquake, 76.9% of patients had a supply of drugs for more than seven days. We suggest that physically handicapped patients with epilepsy be advised to prepare for natural disasters by stockpiling additional medication. Even if the stock of antiepileptic drugs is sufficient, stress can cause worsening of seizures. Specialized support is required after a disaster for physically handicapped patients with epilepsy. Copyright © 2016 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yun, S.; Koketsu, K.; Aoki, Y.
2014-12-01
The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw = 6.3) was the largest aftershock of the 2010 Canterbury earthquake, located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, with a length of about 30 km, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong-motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple datasets with different sensitivities to the rupture process. While previously published source models are based on one or two datasets, here we infer the rupture process from three datasets: InSAR, strong-motion, and teleseismic data. We first performed point-source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip, revealing that the observed large vertical ground motion around the source region is due to the rupture process rather than to the local subsurface structure.
To investigate the effects of the 3-D velocity structure on the characteristic strong-motion seismograms of the two earthquakes, we plan to perform the inversion taking the 3-D velocity structure of this region into account.
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic, real-time determination of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green's tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw ~ 6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation.
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands and the Lesser Antilles, subduction zones pose a significant hazard to people. To understand the behavior of subduction zones, and especially to identify their capability to produce maximum-magnitude earthquakes, various physical models have been developed, leading to a large number of datasets, e.g. from geodesy, geomagnetics, structural geology, etc. Various studies have utilized these data to compile subduction zone parameter databases, but they mostly concentrate on only the major zones. Here, we compile the largest dataset of subduction zone parameters, both in parameter diversity and in the number of subduction zones considered. In total, more than 70 individual sources have been assessed, and the parametric data have been combined with seismological data to yield more than 60 individual parameters. Not all parameters could be resolved for each zone, since completeness depends on data availability and quality for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and compared with and verified against results from previous studies. With this database, a statistical study has been undertaken to identify correlations between parameters, both to develop a parameter-driven way of estimating maximum possible magnitudes and to identify similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones: if two sources share enough common characteristics, other characteristics of interest may be expected to be similar as well.
This concept effectively trades time for space for subduction zones where the maximum possible event has likely not yet been observed: the not-yet-observed temporal behavior can be replaced by spatial similarity among different subduction zones of the same class. This database aims to enhance the research and understanding of subduction zones and to quantify their potential for producing mega-earthquakes, considering the potential strong-motion impact on nearby cities and their tsunami potential.
Earthquake Forecasting in Northeast India using Energy Blocked Model
NASA Astrophysics Data System (ADS)
Mohapatra, A. K.; Mohanty, D. K.
2009-12-01
In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong Plateau earthquake (Mw 8.7), the 1934 Bihar-Nepal earthquake (Mw 8.3) and the 1950 Upper Assam earthquake (Mw 8.7), signifies the possibility of great earthquakes from this region in the future. The regional seismicity map for the study region was prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by the subduction zone developed at the junction of the Indian and Eurasian plates; it shows dense clustering of earthquake events and includes the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis; this zone suffered great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults like the Dauki fault and exhibits its own prominent tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, major earthquakes are forecast for each source zone. According to the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be utilized for forecasting major earthquakes.
The proposed process provides a more consistent model of gradual strain accumulation and non-uniform release through large earthquakes, and can be applied to the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes over the 110 years from 1897 to 2007 in each of the zones was calculated and plotted, giving a characteristic curve for each zone. Each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is denoted by S. The difference between this theoretical upper limit S and the cumulative energy released up to that time is used to estimate the maximum magnitude of an earthquake that can occur in the future. The blocked energies of the three source zones, available as a supply for potential earthquakes in due course of time, are 1.35×10^17 J, 4.25×10^17 J and 0.12×10^17 J for zones 1, 2 and 3, respectively. The predicted maximum magnitudes (m_max) obtained for source zones AYZ, HZ and SPZ are 8.2, 8.6 and 8.4, respectively. These results are consistent with previous predictions by other workers.
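The arithmetic of the energy blocked model can be sketched as follows, using the classical Gutenberg-Richter energy-magnitude relation log10 E = 1.5M + 4.8 (E in joules). The toy catalog and the assumed supply rate are invented for illustration and do not reproduce the paper's data.

```python
import math

def energy_joules(m):
    """Gutenberg-Richter energy-magnitude relation: log10 E = 1.5*M + 4.8 (J)."""
    return 10 ** (1.5 * m + 4.8)

def magnitude_from_energy(e):
    """Invert the relation to get the magnitude releasable by energy e."""
    return (math.log10(e) - 4.8) / 1.5

# Hypothetical zone catalog: (year, magnitude) of M >= 5 events
catalog = [(1900, 7.8), (1925, 6.5), (1950, 8.0), (1980, 6.9), (2005, 7.2)]
released = sum(energy_joules(m) for _, m in catalog)

# Assumed uniform energy supply rate for the zone (J/yr); illustrative only
supply_rate = 2.0e15
span_years = 2007 - 1897

# Blocked energy = supplied energy minus cumulative released energy
blocked = supply_rate * span_years - released
m_max = magnitude_from_energy(blocked)
print(blocked, round(m_max, 1))
```

The difference between the uniform supply line and the stepped cumulative-release curve is the blocked energy; converting it back through the energy-magnitude relation yields the forecast maximum magnitude, exactly the quantity tabulated per zone in the abstract.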
NASA Astrophysics Data System (ADS)
Shelly, David R.
2017-05-01
Low-frequency earthquakes (LFEs) are small, rapidly recurring slip events that occur on the deep extensions of some major faults. Their collective activation is often observed as a semicontinuous signal known as tectonic (or nonvolcanic) tremor. This manuscript presents a catalog of more than 1 million LFEs detected along the central San Andreas Fault from 2001 to 2016. These events have been detected via a multichannel matched-filter search, cross-correlating waveform templates representing 88 different LFE families with continuous seismic data. Together, these source locations span nearly 150 km along the central San Andreas Fault, ranging in depth from 16 to 30 km. This accumulating catalog has been the source for numerous studies examining the behavior of these LFE sources and the inferred slip behavior of the deep fault. The relatively high temporal and spatial resolutions of the catalog have provided new insights into properties such as tremor migration, recurrence, and triggering by static and dynamic stress perturbations. Collectively, these characteristics are inferred to reflect a very weak fault likely under near-lithostatic fluid pressure, yet the physical processes controlling the stuttering rupture observed as tremor and LFE signals remain poorly understood. This paper aims to document the LFE catalog assembly process and associated caveats, while also updating earlier observations and inferred physical constraints. The catalog itself accompanies this manuscript as part of the electronic supplement, with the goal of providing a useful resource for continued future investigations.
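The matched-filter detection underlying such a catalog can be sketched for a single channel: slide a normalized cross-correlation of a template over continuous data and declare detections above a MAD-based threshold. The template, noise level, and threshold factor below are illustrative assumptions; the actual procedure stacks correlations over many channels and stations per LFE family.

```python
import numpy as np

def normalized_xcorr(template, data):
    """Sliding normalized cross-correlation (Pearson r) of a template
    against continuous data; returns one value per alignment."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(out.size):
        w = data[i:i + n]
        out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-20))
    return out

rng = np.random.default_rng(1)
template = rng.standard_normal(100)      # stand-in for an LFE waveform
data = rng.standard_normal(5000) * 0.5   # continuous "noise" record
data[1200:1300] += template              # bury one "event" in the noise

cc = normalized_xcorr(template, data)
mad = np.median(np.abs(cc - np.median(cc)))
threshold = np.median(cc) + 8 * mad      # 8x MAD, a common choice
detections = np.flatnonzero(cc > threshold)
print(int(np.argmax(cc)), round(float(cc.max()), 2))
```

The buried event is recovered at its true offset with a correlation well above the noise floor; in practice the threshold trade-off controls the balance between catalog completeness and false detections.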
NASA Astrophysics Data System (ADS)
Carvajal, M.; Cisternas, M.; Catalán, P. A.
2017-05-01
Historical records of an earthquake that struck Metropolitan Chile in 1730 provide essential clues to the source characteristics of future earthquakes in the region. The 1730 earthquake and tsunami have been recognized as the largest to occur in Metropolitan Chile since the beginning of written history. The earthquake destroyed buildings along >1000 km of coast and produced a large tsunami that caused damage as far away as Japan. Here its source characteristics are inferred by comparing local tsunami inundations computed for hypothetical earthquakes of varying magnitude and depth with those inferred from historical observations. It is found that a 600-800 km long rupture with average slip of 10-14 m (Mw 9.1-9.3) best explains the observed tsunami heights and inundations. This large earthquake magnitude is supported by the 1730 tsunami heights inferred in Japan. The inundation results, combined with local uplift reports, suggest a southward increase of slip depth along the rupture zone of the 1730 earthquake: while shallow slip in the area north of the 2010 earthquake rupture zone is required to explain the reported inundation, only deeper slip in this area can explain the coastal uplift reports. Since the later earthquakes of the region involved little or no slip at shallow depths, near-future earthquakes in Metropolitan Chile could release the shallow slip accumulated since 1730 and thus produce strong tsunami excitation. Moderate shaking from a shallow earthquake could delay tsunami evacuation in the most populated coastal region of Chile.
NASA Astrophysics Data System (ADS)
Ichinose, Gene Aaron
The source parameters for eastern California and western Nevada earthquakes are estimated from regionally recorded seismograms using a moment tensor inversion. We use the point source approximation and fit the seismograms, at long periods. We generated a moment tensor catalog for Mw > 4.0 since 1997 and Mw > 5.0 since 1990. The catalog includes centroid depths, seismic moments, and focal mechanisms. The regions with the most moderate sized earthquakes in the last decade were in aftershock zones located in Eureka Valley, Double Spring Flat, Coso, Ridgecrest, Fish Lake Valley, and Scotty's Junction. The remaining moderate size earthquakes were distributed across the region. The 1993 (Mw 6.0) Eureka Valley earthquake occurred in the Eastern California Shear Zone. Careful aftershock relocations were used to resolve structure from aftershock clusters. The mainshock appears to rupture along the western side of the Last Change Range along a 30° to 60° west dipping fault plane, consistent with previous geodetic modeling. We estimate the source parameters for aftershocks at source-receiver distances less than 20 km using waveform modeling. The relocated aftershocks and waveform modeling results do not indicate any significant evidence of low angle faulting (dips > 30°. The results did reveal deformation along vertical faults within the hanging-wall block, consistent with observed surface rupture along the Saline Range above the dipping fault plane. The 1994 (Mw 5.8) Double Spring Flat earthquake occurred along the eastern Sierra Nevada between overlapping normal faults. Aftershock migration and cross fault triggering occurred in the following two years, producing seventeen Mw > 4 aftershocks The source parameters for the largest aftershocks were estimated from regionally recorded seismograms using moment tensor inversion. 
We estimate the source parameters for two moderate-sized earthquakes that occurred near Reno, Nevada: the 1995 (Mw 4.4) Border Town and the 1998 (Mw 4.7) Incline Village earthquakes. We test how such stress interactions affected a cluster of six large earthquakes (Mw 6.6 to 7.5) between 1915 and 1954 within the Central Nevada Seismic Belt. We compute the static stress changes for these earthquakes using dislocation models based on the location and amount of surface rupture. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Liu, Yang
2017-04-01
Ionospheric anomalies linked with devastating earthquakes have been widely investigated. It has been confirmed that GNSS TEC can increase or decrease drastically during some diurnal periods prior to earthquakes. Liu et al. (2008) applied a TEC anomaly calculation method to M >= 5.9 earthquakes in Indonesia and found TEC depletion within 2-7 days prior to the earthquakes. Nevertheless, strong TEC enhancement was observed before the M8.0 Wenchuan earthquake (Zhao et al. 2008). Moreover, the ionospheric plasma critical frequency (foF2) has been found to diminish before large earthquakes (Pulinets et al. 1998; Liu et al. 2006). Little has been done, however, on ionospheric irregularities and their association with earthquakes, and the physical mechanism linking ionospheric anomalies as precursors to large earthquakes remains poorly understood. The M9.0 Tohoku earthquake, which occurred on 11 March 2011 at 05:46 UT, is recognized as one of the most prominent events in this research field (Liu et al. 2011). A moderate geomagnetic disturbance also accompanied the earthquake, which makes the ionospheric anomalies more complicated to study. Seismo-ionospheric disturbances were observed as a result of the drastic crustal activity. To further address this phenomenon, this paper investigates different categories of ionospheric anomalies induced by seismic activity using multiple data sources. Several IGS ground stations near the epicenter were chosen to examine the spatial-temporal correlation of ionospheric TEC with distance from the epicenter. We also use GIM TEC maps, owing to their global coverage, to identify diurnal differences in ionospheric anomalies relative to a geomagnetically quiet day in the same month.
The results accord with Liu's conclusions that TEC depletion occurred on days close to the earthquake day; however, the variation of TEC follows a distinctive pattern compared with normal quiet days. Because a geomagnetic storm occurred at a similar time, radio occultation data provided by COSMIC were investigated in depth for the whole month. Notably, neither the storm nor the earthquake triggered a scintillation burst. This is probably because the storm began in the local noon sector, which has little impact on the growth of ionospheric irregularities but enhances the westward electric field, which in turn suppresses scintillation bubbles (Li et al. 2008). A small geomagnetic disturbance was also found almost a week prior to the earthquake; the relationship of this event to the main earthquake is worth further discussion. A similar analysis of GNSS TEC indicates that it, too, can be regarded as a precursor to the main earthquake. References: Li G., Ning B., Zhao B., et al. (2008). Effects of geomagnetic storm on GPS ionospheric scintillations at Sanya. Journal of Atmospheric and Solar-Terrestrial Physics, 70(7), 1034-1045. Liu J. Y., Chen Y. I., Chuo Y. J., et al. (2006). A statistical investigation of pre-earthquake ionospheric anomaly. Journal of Geophysical Research: Space Physics, 111(A5). Liu J. Y., Sun Y. Y. (2011). Seismo-traveling ionospheric disturbances of ionograms observed during the 2011 Mw 9.0 Tohoku Earthquake. Earth, Planets and Space, 63(7), 897-902. Zhao B., Wang M., Yu T., et al. (2008). Is an unusual large enhancement of ionospheric electron density linked with the 2008 great Wenchuan earthquake? Journal of Geophysical Research: Space Physics, 113(A11), A11304. Pulinets S. A. (1998). Seismic activity as a source of the ionospheric variability. Advances in Space Research, 22(6), 903-906.
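The sliding-window statistical screening alluded to above (the TEC anomaly calculation applied in the cited studies) can be sketched as follows; the 15-day reference window and the 1.5×IQR envelope are illustrative assumptions, not the exact parameters of those studies:

```python
import numpy as np

def tec_anomaly_flags(tec, window=15, k=1.5):
    """Flag anomalous daily TEC values against a sliding envelope of
    median +/- k*IQR computed from the preceding `window` days.
    The first `window` days are never flagged (no reference period)."""
    tec = np.asarray(tec, dtype=float)
    flags = np.zeros(tec.size, dtype=bool)
    for i in range(window, tec.size):
        ref = tec[i - window:i]                        # reference period
        q1, med, q3 = np.percentile(ref, [25, 50, 75])
        bound = k * (q3 - q1)                          # k * IQR envelope
        flags[i] = abs(tec[i] - med) > bound
    return flags
```

Applied day by day, the function marks both depletions and enhancements, so it would capture either of the pre-earthquake behaviours discussed above.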
NASA Astrophysics Data System (ADS)
Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.
2014-06-01
The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, effectively augmenting geodetic networks at a high sampling rate and improving the physical understanding of earthquake processes. Strong-motion records require a correction procedure for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods apply an empirical bilinear correction to the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-net and K-NET strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacements with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. Outliers caused by unknown problems in the measurement system can be easily detected and quantified.
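A minimal sketch of the selection criterion described above, under simplifying assumptions: a one-parameter step baseline (rather than the full bilinear scheme of the study), with the step amplitude fit so that the final velocity returns to zero, and the timing parameter chosen to maximize the correlation coefficient with a neighbouring station's velocity seismogram:

```python
import numpy as np

def corrected_velocity(acc, t1_idx, dt):
    """Remove a step baseline of unknown amplitude starting at sample
    t1_idx from an acceleration record, then integrate to velocity.
    The step amplitude is fit so the end-of-record velocity is zero
    (a common physical constraint; a simplification of the bilinear
    correction used in the study)."""
    acc = np.asarray(acc, dtype=float)
    vel = np.cumsum(acc) * dt
    n_tail = acc.size - t1_idx
    step = vel[-1] / (n_tail * dt)   # amplitude that zeroes final velocity
    acc_corr = acc.copy()
    acc_corr[t1_idx:] -= step
    return np.cumsum(acc_corr) * dt

def best_t1(acc, vel_neighbour, dt, candidates):
    """Pick the timing parameter (sample index) that maximizes the
    correlation coefficient with a neighbouring station's velocity."""
    def cc(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(candidates,
               key=lambda t1: cc(corrected_velocity(acc, t1, dt), vel_neighbour))
```

With a synthetic velocity pulse contaminated by a step in acceleration, the candidate timing that coincides with the true step onset yields the highest inter-station correlation.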
Brittle and ductile friction and the physics of tectonic tremor
Daub, Eric G.; Shelly, David R.; Guyer, Robert A.; Johnson, P.A.
2011-01-01
Observations of nonvolcanic tremor provide a unique window into the mechanisms of deformation and failure in the lower crust. With increasing depth, rock deformation gradually transitions from brittle, where earthquakes occur, to ductile, with tremor occurring in the transitional region. The physics of deformation in the transition region remains poorly constrained, limiting our basic understanding of tremor and its relation to earthquakes. We combine field and laboratory observations with a physical friction model composed of brittle and ductile components, and use the model to place constraints on the friction and stress state in the lower crust. A phase diagram is constructed that characterizes the conditions under which each faulting behavior occurs, including earthquakes, tremor, silent transient slip, and steady sliding. Our results show that tremor occurs over a range of ductile and brittle frictional strengths, and they advance our understanding of the physical conditions at which tremor and earthquakes take place.
Feasibility Study of Earthquake Early Warning in Hawai`i For the Mauna Kea Thirty Meter Telescope
NASA Astrophysics Data System (ADS)
Okubo, P.; Hotovec-Ellis, A. J.; Thelen, W. A.; Bodin, P.; Vidale, J. E.
2014-12-01
Earthquakes, including large damaging events, are as central to the geologic evolution of the Island of Hawai`i as its more famous volcanic eruptions and lava flows. Increasing and expanding development of facilities and infrastructure on the island continues to increase the exposure and risk associated with strong ground shaking from future large local earthquakes. Damaging earthquakes over the last fifty years have shaken the most heavily developed areas and critical infrastructure of the island to levels corresponding to at least Modified Mercalli Intensity VII. Hawai`i's most recent damaging earthquakes, the M6.7 Kiholo Bay and M6.0 Mahukona earthquakes, struck within seven minutes of one another off the northwest coast of the island in October 2006. These earthquakes caused damage at all thirteen of the telescopes near the summit of Mauna Kea, leading to gaps in telescope operations ranging from days up to four months. With the experiences of 2006 and Hawai`i's history of damaging earthquakes in mind, we have begun a study to explore the feasibility of implementing an earthquake early warning system to provide advance warning to the Thirty Meter Telescope of imminent strong ground shaking from future local earthquakes. One of the major challenges for earthquake early warning in Hawai`i is the variety of earthquake sources, from shallow crustal faults to deeper mantle sources, including the basal decollement separating the volcanic pile from the ancient oceanic crust. Infrastructure on the Island of Hawai`i may be only tens of kilometers from these sources, allowing warning times of only 20 s or less. We assess the capability of the current seismic network to produce alerts for major historic earthquakes, and we will provide recommendations for upgrades to improve performance.
NASA Astrophysics Data System (ADS)
Stevens, Victoria
2017-04-01
The 2015 Gorkha-Nepal M7.8 earthquake (hereafter the Gorkha earthquake) highlights the seismic risk in Nepal, allows better characterization of the geometry of the Main Himalayan Thrust (MHT), and enables comparison of recorded ground motions with predicted ground motions. These new data, together with recent paleoseismic studies and geodetic-based coupling models, allow good parameterization of the fault characteristics. Other faults in Nepal remain less well studied. Unlike previous PSHA studies in Nepal that are exclusively area-based, we use a mix of faults and areas to describe six seismic sources in Nepal. For each source, the Gutenberg-Richter a and b values are found, and the maximum magnitude earthquake is estimated, using a combination of earthquake catalogs, moment conservation principles and similarities to other tectonic regions. The MHT and Karakoram fault are described as fault sources, whereas four other sources - normal faulting in N-S trending grabens of northern Nepal, strike-slip faulting in both eastern and western Nepal, and background seismicity - are described as area sources. We use OpenQuake (http://openquake.org/) to carry out the analysis, and peak ground acceleration (PGA) at 2 and 10% chance of exceedance in 50 years is found for Nepal, along with hazard curves at various locations. We compare this PSHA model with previous area-based models of Nepal. The Main Himalayan Thrust is the principal seismic hazard in Nepal, so we study the effects of changing several parameters associated with this fault. We compare the ground shaking predicted from the various fault geometries suggested by the Gorkha earthquake with each other, and with a simple model of a flat fault. We also show the results of incorporating a coupling model based on geodetic data and microseismicity, which limits the down-dip extent of rupture.
No ground-motion prediction equations (GMPEs) have been developed specifically for Nepal, so we compare predictions from standard GMPEs, applied to an earthquake scenario representing the Gorkha earthquake, with recordings from the Gorkha earthquake itself. The Gorkha earthquake also highlighted the influence on ground motion of basin, topographic, and directivity effects, and of the location of high-frequency sources. Future work aims to incorporate the above, together with consideration of the fault-rupture history and its influence on the location and timing of future earthquakes.
New seismic sources parameterization in El Salvador. Implications to seismic hazard.
NASA Astrophysics Data System (ADS)
Alonso-Henar, Jorge; Staller, Alejandra; Jesús Martínez-Díaz, José; Benito, Belén; Álvarez-Gómez, José Antonio; Canora, Carolina
2014-05-01
El Salvador is located on the Pacific active margin of Central America, where the subduction of the Cocos Plate beneath the Caribbean Plate at a rate of ~80 mm/yr is the main seismic source. Nevertheless, seismic sources located in the Central American Volcanic Arc have been responsible for some of the most damaging earthquakes in El Salvador. The El Salvador Fault Zone (ESFZ) is the main geological structure in El Salvador and accommodates 14 mm/yr of horizontal displacement between the Caribbean Plate and the forearc sliver. The ESFZ is a right-lateral strike-slip fault zone c. 150 km long and 20 km wide. This shear band distributes the deformation among strike-slip faults trending N90°-100°E and secondary normal faults trending N120°-N170°. The ESFZ is relieved westward by the Jalpatagua Fault and becomes less clear eastward, disappearing at the Golfo de Fonseca. Five sections have been proposed for the whole fault zone, from west to east: the ESFZ Western Section, San Vicente Section, Lempa Section, Berlin Section and San Miguel Section. Paleoseismic studies carried out in the Berlin and San Vicente sections reveal a significant amount of Quaternary deformation and paleoearthquakes up to Mw 7.6. In this study we present 45 capable seismic sources in El Salvador and their preliminary slip rates from geological and GPS data. The detailed GPS results are presented by Staller et al. (2014) in a complementary communication. The calculated preliminary slip rates range from 0.5 to 8 mm/yr for individual faults within the ESFZ. We calculated maximum magnitudes from the mapped lengths and paleoseismic observations. We propose different earthquake scenarios, including the potential combined rupture of different fault sections of the ESFZ, resulting in maximum earthquake magnitudes of Mw 7.6. We used deterministic models to calculate the acceleration distributions associated with the maximum earthquakes of the different proposed scenarios.
The spatial distribution of seismic accelerations is compared and calibrated using the February 13, 2001 earthquake as a control earthquake. To explore the sources of historical earthquakes, we compare synthetic acceleration maps with the historical earthquakes of March 6, 1719 and June 8, 1917.
New perspectives on self-similarity for shallow thrust earthquakes
NASA Astrophysics Data System (ADS)
Denolle, Marine A.; Shearer, Peter M.
2016-09-01
Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry explains the observed scaling better than circular crack models. The second time scale T2 scales more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted as an expression of starting and stopping phases, of pulse-like rupture, or of a dynamic weakening process. Estimated stress drops and scaled energy (the ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar, because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
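The double-corner-frequency spectral shape described above can be written compactly. The functional form below is an illustrative Brune-style double-corner parameterization (flat at M0 below f1, falling as f^-1 between f1 and f2, and as f^-2 above f2); it is not necessarily the exact form fit in the study:

```python
import numpy as np

def double_corner_spectrum(f, M0, f1, f2):
    """Illustrative moment-rate spectrum with two corner frequencies:
    flat at M0 for f << f1, ~f^-1 for f1 << f << f2, ~f^-2 for f >> f2."""
    return M0 / (np.sqrt(1.0 + (f / f1) ** 2) * np.sqrt(1.0 + (f / f2) ** 2))
```

Checking the asymptotic slopes (halving the frequency should double the amplitude in the f^-1 regime and quadruple it in the f^-2 regime) confirms the three-segment behaviour that distinguishes this model from a single-corner spectrum.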
Rapid determination of the energy magnitude Me
NASA Astrophysics Data System (ADS)
di Giacomo, D.; Parolai, S.; Bormann, P.; Grosser, H.; Saul, J.; Wang, R.; Zschau, J.
2009-12-01
The magnitude of an earthquake is one of the most widely used parameters to evaluate an earthquake's damage potential. Among the non-saturating magnitude scales, the energy magnitude Me is related to a well-defined physical parameter of the seismic source, namely the radiated seismic energy Es (e.g. Bormann et al., 2002): Me = 2/3(log10 Es - 4.4). Me is more suitable than the moment magnitude Mw for describing an earthquake's shaking potential (Choy and Kirby, 2004). Indeed, Me is calculated over a wide frequency range of the source spectrum and represents a better measure of the shaking potential, whereas Mw is related to the low-frequency asymptote of the source spectrum and is a good measure of the fault size and hence of the static (tectonic) effect of an earthquake. We analyse teleseismic broadband P-wave signals in the distance range 20°-98° to calculate Es. To correct for the frequency-dependent energy loss experienced by the P-waves along the propagation path, we use pre-calculated spectral amplitude decay functions for different frequencies obtained from numerical simulations of Green's functions (Wang, 1999) for the reference Earth model AK135Q (Kennett et al., 1995; Montagner and Kennett, 1996). By means of these functions, the correction of the recorded P-wave velocity spectra for the various propagation effects is performed in a rapid and robust way, and the calculation of Es, and hence of Me, can be carried out at a single station. We show that our procedure is suitable for implementation in rapid response systems, since it can provide stable Me determinations within 10-15 minutes after the earthquake's origin time, even for great earthquakes. We tested our procedure on a large dataset composed of about 770 globally distributed earthquakes in the Mw range 5.5-9.3 recorded at the broadband stations managed by the IRIS, GEOFON, and GEOSCOPE global networks, as well as other regional seismic networks.
Me and Mw express two different aspects of the seismic source, and a combined use of these two magnitude scales would allow a better assessment of the tsunami and shaking potential of an earthquake. Representative case studies will also be shown and discussed. References: Bormann, P., Baumbach, M., Bock, G., Grosser, H., Choy, G. L., and Boatwright, J. (2002). Seismic sources and source parameters, in IASPEI New Manual of Seismological Observatory Practice, P. Bormann (Editor), Vol. 1, GeoForschungsZentrum, Potsdam, Chapter 3, 1-94. Choy, G. L., and Kirby, S. (2004). Apparent stress, fault maturity and seismic hazard for normal-fault earthquakes at subduction zones. Geophys. J. Int., 159, 991-1012. Kennett, B. L. N., Engdahl, E. R., and Buland, R. (1995). Constraints on seismic velocities in the Earth from traveltimes. Geophys. J. Int., 122, 108-124. Montagner, J.-P., and Kennett, B. L. N. (1996). How to reconcile body-wave and normal-mode reference Earth models? Geophys. J. Int., 125, 229-248. Wang, R. (1999). A simple orthonormalization method for stable and efficient computation of Green's functions. Bull. Seism. Soc. Am., 89(3), 733-741.
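The Me relation quoted in the abstract is straightforward to evaluate; Es is the radiated seismic energy in joules:

```python
import math

def energy_magnitude(Es_joules):
    """Energy magnitude from radiated seismic energy Es (in J),
    using Me = 2/3 (log10 Es - 4.4) as given in the abstract
    (Bormann et al., 2002)."""
    return (2.0 / 3.0) * (math.log10(Es_joules) - 4.4)
```

For example, Es = 10^16 J gives Me = 2/3 × (16 − 4.4) ≈ 7.73, and each tenfold increase in radiated energy raises Me by 2/3 of a unit.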
Source Mechanisms of Destructive Tsunamigenic Earthquakes occurred along the Major Subduction Zones
NASA Astrophysics Data System (ADS)
Yolsal-Çevikbilen, Seda; Taymaz, Tuncay; Ulutaş, Ergin
2016-04-01
Subduction zones, where an oceanic plate is forced down into the mantle by tectonic forces, are potential tsunami locations. Many large, destructive and tsunamigenic earthquakes (Mw > 7.5) and high-amplitude tsunami waves have been observed along the major subduction zones, particularly near Indonesia, Japan, the Kuril and Aleutian Islands, the Gulf of Alaska, and South America. Not all earthquakes are tsunamigenic; to generate a tsunami, an earthquake must occur under or near the ocean, be large, and create significant vertical movement of the seafloor. It is also known that tsunamigenic earthquakes release their energy over a couple of minutes, and have long source time functions and slow, smooth ruptures. In this study, we performed point-source inversions using teleseismic long-period P- and SH-waveforms and broad-band P-waveforms recorded at the Federation of Digital Seismograph Networks (FDSN) and Global Digital Seismograph Network (GDSN) stations. We obtained source mechanism parameters and finite-fault slip distributions of ten recent destructive earthquakes (Mw ≥ 7.5) by comparing the shapes and amplitudes of long-period P- and SH-waveforms, recorded in the distance range 30°-90°, with synthetic waveforms. We further obtained finite-fault rupture histories of those earthquakes to determine the faulting area (fault length and width), maximum displacement, rupture duration and stress drop. We applied a new back-projection method that uses teleseismic P-waveforms to integrate the direct P-phase with phases reflected from structural discontinuities near the source, and customized it to estimate the spatio-temporal distribution of the seismic energy release of earthquakes. The inversion results show that recent tsunamigenic earthquakes exhibit dominantly thrust faulting mechanisms with small strike-slip components. Their focal depths are also relatively shallow (h < 40 km).
As an example, the September 16, 2015 Illapel (Chile) earthquake (Mw: 8.3; h: 26 km) reflects the major characteristics of the Peru-Chile subduction zone between the Nazca and South America Plates. The size, location, depth and focal mechanism of this earthquake are consistent with its occurrence on the megathrust interface in this region. This study is supported by the Scientific and Technological Research Council of Turkey (TUBITAK, Project No: CAYDAG - 114Y066).
NASA Astrophysics Data System (ADS)
Uchide, T.; Shearer, P. M.
2009-12-01
Introduction: Uchide and Ide [SSA Spring Meeting, 2009] proposed a new framework for studying the scaling and overall nature of earthquake rupture growth in terms of cumulative moment functions. For a better understanding of rupture growth processes, spatiotemporally local processes are also important. The nature of high-frequency (HF) radiation has been investigated for some time, but its role in the earthquake rupture process is still unclear. A wavelet analysis reveals that the HF radiation (e.g., 4 - 32 Hz) of the 2004 Parkfield earthquake is peaky, which implies that the sources of the HF radiation are isolated in space and time. We experiment with applying a matched filter analysis using small template events occurring near the target event's rupture area to test whether it can reveal the HF radiation sources of a regular large earthquake. Method: We design a matched filter for multiple components and stations. Shelly et al. [2007] attempted to identify low-frequency earthquakes (LFEs) in non-volcanic tremor waveforms by stacking the correlation coefficients (CC) between the seismograms of the tremor and the LFE. Differing from their method, our event detection indicator is the CC between the seismograms of the target and template events recorded at the same stations, since the key information for detecting the sources is the arrival-time differences and the amplitude ratios among stations. Data from both the target and template events are normalized by the maximum amplitude of the seismogram of the template event in the cross-correlation time window. This process accounts for the radiation pattern and the distance between the source and the stations. For each small-earthquake template, high values in the CC time series suggest the possibility of HF radiation during the mainshock rupture from a location similar to that of the template event. Application to the 2004 Parkfield earthquake: We apply the matched filter method to the 2004 Parkfield earthquake (Mw 6.0).
We use seismograms recorded at the 13 stations of UPSAR [Fletcher et al., 1992]. At each station, both acceleration and velocity sensors are installed, so both large and small earthquakes are observable. We employ 184 earthquakes (M 2.0 - 3.5) as template events, using 0.5 s of the P waves on the vertical components and of the S waves on all three components. The data are bandpass-filtered between 4 and 16 Hz. One source is detected at 4 s and 12 km northwest of the hypocenter. Although the CC has generally low values, its peak is more than five times its standard deviation and thus remarkably high. This source is close to the secondary onset revealed by a back-projection analysis of 2 - 8 Hz data from Parkfield strong motion stations [Allmann and Shearer, 2007]. While the back-projection approach images the peak of the HF radiation, our method detects its onset time, which is slightly different. Another source is located at 1.2 s and 2 km southeast of the hypocenter, which may correspond to a deceleration of the initial rupture. Comparison of the derived HF radiation sources with the whole rupture process will help us reveal general earthquake source dynamics.
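A toy version of the matched-filter indicator can be sketched as below, assuming a single channel plus a naive multi-channel mean; the actual method normalizes target and template by the template's maximum amplitude across stations, which differs in detail from the plain normalized cross-correlation used here:

```python
import numpy as np

def sliding_cc(target, template):
    """Correlation coefficient between a short template and every
    window of a longer target record (single channel)."""
    n = template.size
    tpl = template - template.mean()
    tpl = tpl / np.linalg.norm(tpl)
    out = np.empty(target.size - n + 1)
    for i in range(out.size):
        win = target[i:i + n] - target[i:i + n].mean()
        norm = np.linalg.norm(win)
        out[i] = (win @ tpl) / norm if norm > 0 else 0.0
    return out

def stacked_detection(targets, templates):
    """Average the per-channel CC series over stations/components,
    a simplified stand-in for the network-wide indicator."""
    ccs = [sliding_cc(t, s) for t, s in zip(targets, templates)]
    return np.mean(ccs, axis=0)
```

When a template waveform is buried in a noisy target record, the CC series peaks at the embedding time, which is the kind of isolated HF radiation source the abstract describes detecting.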
Characterising large scenario earthquakes and their influence on NDSHA maps
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.
2016-04-01
The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard describing the maxima of different ground shaking parameters at bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map in which the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes.
The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can therefore be a factor of two, intrinsic in MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS intensity, of all sources used in the national-scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in ground motion maps, due to random catalogue errors in magnitude and localization, shows a non-uniform distribution of ground-shaking uncertainty. The available information from catalogues of past events, which is incomplete and may well not be representative of future earthquakes, can be substantially supplemented using independent indicators of the seismogenic potential of a given area, such as active-faulting data and seismogenic nodes.
NASA Astrophysics Data System (ADS)
McLaskey, G. C.; Glaser, S. D.; Thomas, A.; Burgmann, R.
2011-12-01
Repeating earthquake sequences (RES) are thought to occur on isolated patches of a fault that fail in repeated stick-slip fashion. RES enable researchers to study the effect of variations in earthquake recurrence time and the relationship between fault healing and earthquake generation. Fault healing is thought to be the physical process responsible for the 'state' variable in widely used rate- and state-dependent friction equations. We analyze RES created in laboratory stick-slip experiments on a direct shear apparatus instrumented with an array of very high frequency (1 kHz - 1 MHz) displacement sensors. Tests are conducted on the model material polymethylmethacrylate (PMMA). While the frictional properties of this glassy polymer can be characterized with the rate- and state-dependent friction laws, the rate of healing in PMMA is higher than that of rock at room temperature. Our experiments show that in addition to a modest increase in fault strength and stress drop with increasing healing time, there are distinct spectral changes in the recorded laboratory earthquakes. Using the impact of a tiny sphere on the surface of the test specimen as a known source calibration function, we are able to remove the instrument and apparatus response from the recorded signals, so that the source spectrum of the laboratory earthquakes can be accurately estimated. The rupture of a fault that was allowed to heal produces a laboratory earthquake with increased high-frequency content compared with one produced by a fault that has had less time to heal. These laboratory results are supported by observations of RES on the Calaveras and San Andreas faults, which show similar spectral changes when the recurrence time is perturbed by a nearby large earthquake. Healing is typically attributed to a creep-like relaxation of the material, which causes the true area of contact of interacting asperity populations to increase with time in a quasi-logarithmic way.
The increase in high-frequency seismicity shown here suggests that fault healing produces an increase in fault-strength heterogeneity on a small spatial scale. A fault that has healed may possess an asperity population that allows less slip to be accumulated aseismically, ruptures faster and more violently, and produces more high-frequency seismic waves than one that has not healed.
NASA Astrophysics Data System (ADS)
Derode, B.; Riquelme, S.; Ruiz, J. A.; Leyton, F.; Campos, J. A.; Delouis, B.
2014-12-01
Intermediate-depth earthquakes of high moment magnitude (Mw ≥ 8) in Chile have had a relatively greater impact, in terms of damage, injuries and deaths, than thrust-type events of similar magnitude (e.g. 1939, 1950, 1965, 1997, 2003, and 2005). Some of them have been studied in detail, showing a paucity of aftershocks, down-dip tensional focal mechanisms, high stress drop and subhorizontal rupture. At present, their physical mechanism remains unclear because ambient temperatures and pressures are expected to lead to ductile, rather than brittle, deformation. We examine the source characteristics of more than 100 intraslab intermediate-depth earthquakes using local and regional waveform data obtained from broadband and accelerometer stations of the IPOC network in northern Chile. With this high-quality database, we estimated the total radiated energy from the energy flux carried by P and S waves, integrating this flux in time and space, and evaluated the seismic moment directly from both spectral-amplitude and near-field waveform inversion methods. We estimated the three parameters Ea, τa and M0 because their estimates entail no model dependence. Interestingly, the seismic nest studied using near-field relocation and only data from stations close to the source (D < 250 km) appears not to be homogeneous in depth, displaying unusual seismic gaps along the Wadati-Benioff zone. Moreover, as confirmed by other studies of intermediate-depth earthquakes in subduction zones, very high stress drops (>> 10 MPa) and low radiation efficiency were found in this seismic nest. These unusual values of the seismic parameters can be interpreted as the expression of the loss of a large fraction of the emitted energy to heating processes during rupture.
Although it remains difficult to draw firm conclusions about the processes of seismic nucleation, we present results that appear to support thermal weakening of the fault zones and the existence of thermal stress processes, such as thermal shear runaway, as a preferred triggering mechanism for intermediate-depth earthquakes. Although this study is not exhaustive, the data presented here point to the need for new systematic near-field studies to reach firmer conclusions and to constrain more accurately the physics of the rupture mechanisms of these intermediate-depth seismic events.
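Of the three model-independent parameters named above, the radiated energy Ea and seismic moment M0 combine into the apparent stress τa = μ·Ea/M0 (Wyss & Brune, 1968). A minimal sketch follows; the rigidity and energy-to-moment ratio used are illustrative assumptions, not values from the study:

```python
def apparent_stress(mu_pa, ea_j, m0_nm):
    """Apparent stress tau_a = mu * Ea / M0 (Wyss & Brune, 1968).

    mu_pa: rigidity (Pa); ea_j: radiated energy (J); m0_nm: seismic moment (N*m).
    """
    return mu_pa * ea_j / m0_nm

# Illustrative values only: a rigidity typical of intermediate depths and an
# assumed energy-to-moment ratio Ea/M0 of 1e-3.
tau_a = apparent_stress(mu_pa=70e9, ea_j=1.0e13, m0_nm=1.0e16)
print(tau_a / 1e6)  # apparent stress in MPa (tens of MPa, i.e. ">> 10 MPa")
```

With these assumed inputs the apparent stress lands in the tens of MPa, the same regime as the high stress drops reported above.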
Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes
NASA Astrophysics Data System (ADS)
Yamada, M.; Mori, J. J.
2009-12-01
Current earthquake early warning systems assume a point source model for the rupture. For large earthquakes, however, the fault rupture length can be of the order of tens to hundreds of kilometers, and predicting the ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a station that is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we define the near-source region as the area within a fault rupture distance of 10 km. Given ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that its station lies in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location inferred this way is a set of points at the station locations; for practical purposes, however, the two-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize two-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, converting the pointwise fault locations into two-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. 
Furthermore, we illustrate our method with strong motion data from the 2007 Noto-hanto, 2008 Iwate-Miyagi, and 2008 Wenchuan earthquakes. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes of magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
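The empirical NS/FS discriminant above can be sketched directly. The abstract does not state the units of Za and Hv; cm/s² and cm/s are assumed here, so treat this as an illustration of the logistic form rather than a calibrated implementation:

```python
import math

def near_source_probability(za, hv):
    """Probability that a station lies in the near-source region
    (fault rupture distance < 10 km), after Yamada et al. (2007).

    za: peak vertical acceleration, hv: peak horizontal velocity.
    Units assumed here: cm/s^2 and cm/s (not stated in the abstract).
    """
    f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))

# Stronger shaking pushes the probability toward 1, weaker toward 0.
p_weak = near_source_probability(za=10.0, hv=1.0)
p_strong = near_source_probability(za=1000.0, hv=100.0)
```

Evaluating this at every station yields the pointwise probability field to which the cosine-shaped smoothing is then applied.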
NASA Astrophysics Data System (ADS)
Bydlon, S. A.; Dunham, E. M.
2016-12-01
Recent increases in seismic activity in historically quiescent areas such as Oklahoma, Texas, and Arkansas, including large, potentially induced events such as the 2011 Mw 5.6 Prague, OK, earthquake, have spurred the need to investigate the expected ground motions associated with these seismic sources. Because this increase in seismicity is so recent, ground motion recordings within 50 km of earthquakes of Mw 3.0 and greater are scarce, and increasingly so at larger magnitudes. Gathering additional near-source ground motion data will help to better constrain regional ground motion prediction equations (GMPEs), but will take time, leaving open the possibility of damaging earthquakes occurring before the potential ground shaking and seismic hazard in these areas are properly understood. To aid the effort of constraining near-source GMPEs associated with induced seismicity, we integrate synthetic ground motion data from simulated earthquakes into the process. Using the dynamic rupture and seismic wave propagation code waveqlab3d, we perform verification and validation exercises intended to establish confidence in simulated ground motions for use in constraining GMPEs. We verify the accuracy of our ground motion simulator with the PEER/SCEC layer-over-halfspace comparison problem LOH.1. Validation exercises to ensure that we synthesize realistic ground motion data include comparisons to recorded ground motions for specific earthquakes of Mw 3.0 to 4.0 in target areas of Oklahoma. Using a 3D velocity structure consisting of a 1D structure with additional small-scale heterogeneity, the properties of which are based on well-log data from Oklahoma, we perform ground motion simulations of small (Mw 3.0 - 4.0) earthquakes using point moment tensor sources. We use the resulting synthetic ground motion data to develop GMPEs for small earthquakes in Oklahoma. 
Preliminary results indicate that ground motions can be amplified if the source is located in the shallow, sedimentary sequence compared to the basement. Source depth could therefore be an important variable to define explicitly in GMPEs instead of being incorporated into traditional distance metrics. Future work will include the addition of dynamic sources to develop GMPEs for large earthquakes.
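Fitting a GMPE to synthetic ground motions, as described above, amounts to a regression of peak motion against magnitude and distance. The sketch below fits the simplest common functional form, log10(PGA) = a + b·M + c·log10(R), to a hypothetical synthetic dataset; the "true" coefficients, noise level, and dataset are illustrative assumptions, not the study's actual GMPE:

```python
import math, random

random.seed(0)

# Hypothetical synthetic dataset standing in for simulated ground motions.
# The generating coefficients (-2.0, 1.0, -1.5) are illustrative only.
rows = []
for _ in range(200):
    mag = random.uniform(3.0, 4.0)
    dist = random.uniform(2.0, 50.0)          # distance in km
    log_pga = -2.0 + 1.0 * mag - 1.5 * math.log10(dist) + random.gauss(0.0, 0.05)
    rows.append((1.0, mag, math.log10(dist), log_pga))

# Least-squares fit of log10(PGA) = a + b*M + c*log10(R) via the normal
# equations X^T X beta = X^T y, solved with Gaussian elimination.
n = 3
A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
rhs = [sum(r[i] * r[3] for r in rows) for i in range(n)]

for col in range(n):
    piv = max(range(col, n), key=lambda k: abs(A[k][col]))  # partial pivoting
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for k in range(col + 1, n):
        factor = A[k][col] / A[col][col]
        for j in range(col, n):
            A[k][j] -= factor * A[col][j]
        rhs[k] -= factor * rhs[col]

beta = [0.0] * n
for i in range(n - 1, -1, -1):
    beta[i] = (rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
a, b, c = beta  # recovered coefficients, close to (-2.0, 1.0, -1.5)
```

A depth term, as suggested by the preliminary results above, would enter the design matrix as an additional explicit predictor rather than being folded into the distance metric.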
Induced seismicity provides insight into why earthquake ruptures stop
Galis, Martin; Ampuero, Jean Paul; Mai, P. Martin; Cappa, Frédéric
2017-01-01
Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures. PMID:29291250
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, Paul G.
A comprehensive ban on nuclear explosive testing is briefly characterized as an arms control initiative related to the Non-Proliferation Treaty. The work of monitoring for nuclear explosions uses several technologies, of which the most important is seismology, a physics discipline that draws upon extensive and ever-growing assets to monitor for earthquakes and other ground-motion phenomena as well as for explosions. This paper outlines the basic methods of seismic monitoring within that wider context and lists web-based and other resources for learning details. It also summarizes the main conclusions, concerning capability to monitor for test-ban treaty compliance, contained in a major study published in March 2012 by the US National Academy of Sciences.
NASA Astrophysics Data System (ADS)
Murotani, S.; Satake, K.
2017-12-01
In the off-Fukushima region, earthquakes of Mjma 7.4 (event A) and 6.9 (event B) occurred on November 6, 1938, following thrust-fault earthquakes of Mjma 7.5 and 7.3 on the previous day. These two events were identified as normal-fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to have occurred off Fukushima since 1938. Apart from the two 1938 events, no normal-fault earthquakes occurred there until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal-fault earthquakes of the off-Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014 and 2016 events because only a few observations were made at common stations. The teleseismic body-wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 x 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, and it is compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A estimated by Abe (1977) and 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B differ and that more detailed examination is needed. 
Normal-fault earthquakes thus seem to occur off the Fukushima region following the occurrence of M 7-9 class thrust-type earthquakes at the plate boundary.
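The quoted moment and magnitude of the 2016 event can be cross-checked with the standard moment-magnitude relation of Hanks & Kanamori (1979); a minimal sketch:

```python
import math

def moment_magnitude(m0_nm):
    """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# The 2016 event's inverted seismic moment of 2.2e19 N*m gives Mw close to 6.8,
# consistent with the value quoted above.
mw_2016 = moment_magnitude(2.2e19)
```

The same relation underlies the Mw 7.7 vs. 6.8 comparison between event A and the 2016 event.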
Physics of Earthquake Rupture Propagation
NASA Astrophysics Data System (ADS)
Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh
2018-05-01
A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. To that end, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
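Rough-fault profiles of the kind used in such dynamic-rupture simulations are commonly generated by spectral synthesis: random phases with a power-law amplitude spectrum, here P(k) ∝ k^-(2H+1) for a self-similar profile with Hurst exponent H = 1. The sketch below is a 1-D toy version; the fault length, Hurst exponent, and amplitude-to-wavelength ratio are illustrative assumptions, not parameters from the study:

```python
import cmath, math, random

random.seed(2)
N = 256          # samples along strike
L = 10000.0      # hypothetical fault length (m)
H = 1.0          # Hurst exponent; H = 1 gives a self-similar profile
alpha = 1e-2     # assumed amplitude-to-wavelength ratio (illustrative)

# Spectral synthesis: amplitude ~ k^-(H + 1/2) (i.e. power ~ k^-(2H+1)),
# random phases, Hermitian symmetry so the inverse transform is real.
spec = [0j] * N
for k in range(1, N // 2):
    amp = k ** -(H + 0.5)
    phase = random.uniform(0.0, 2.0 * math.pi)
    spec[k] = amp * cmath.exp(1j * phase)
    spec[N - k] = spec[k].conjugate()

# Naive inverse DFT (adequate for this toy size)
profile = [sum(spec[k] * cmath.exp(2j * math.pi * k * n / N)
               for k in range(N)).real / N
           for n in range(N)]

# Rescale so the rms roughness equals alpha * L (illustrative normalization)
rms = math.sqrt(sum(h * h for h in profile) / N)
profile = [h * alpha * L / rms for h in profile]
```

Lowering H (or raising alpha) makes the fault rougher, which in the dynamic simulations translates into stronger rupture incoherence and more high-frequency radiation.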
Lisbon 1755, a multiple-rupture earthquake
NASA Astrophysics Data System (ADS)
Fonseca, J. F. B. D.
2017-12-01
The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models have not gathered consensus and, compounding the challenge, analyses of tsunami travel times have led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a subset of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked by their explanatory power for macroseismic intensities and tsunami travel times taken together. I argue that the Lisbon 1755 earthquake is an example of a previously unrecognized class of intraplate earthquake, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with its unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities). 
The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Duren earthquake on 18/02/1756) had in all likelihood a causal link to the Lisbon earthquake. This may reflect the very long period of the surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.
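The claim that no individual rupture need exceed M8.1 can be illustrated with the standard moment-magnitude relation: seismic moments add, magnitudes do not. The four-subevent example below is illustrative arithmetic, not a proposed source model:

```python
import math

def m0_from_mw(mw):
    """Seismic moment (N*m) from Mw (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

def mw_from_m0(m0):
    """Mw from seismic moment (N*m)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Four hypothetical M8.1 sub-ruptures sum, in moment, to a single apparent
# event near M8.5, within the 8.5-8.7 range estimated for Lisbon 1755.
combined_mw = mw_from_m0(4 * m0_from_mw(8.1))
```

Each additional equal sub-rupture adds only (2/3)·log10(n) magnitude units, which is why a handful of M≤8.1 structures suffices to explain the classical M8.5-8.7 estimates.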
NASA Astrophysics Data System (ADS)
Yolsal-Çevikbilen, Seda; Taymaz, Tuncay
2012-04-01
We studied the source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone, using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of the tsunami waves, their propagation, and the effects of coastal topography. Analogies with current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues, and several empirical self-similarity relations, valid at global or local scales, were used to assign plausible source parameters constituting the initial and boundary conditions of the simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone fall into three major categories: [1] earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike-slip components were observed in the eastern part of the Hellenic subduction zone, and we suggest that they are probably associated with the overriding Aegean plate. Earthquakes involved in the convergence between the Aegean and Eastern Mediterranean lithospheres, in contrast, showed thrust faulting mechanisms with strike-slip components and shallow focal depths (h < 45 km). Deeper earthquakes occurred mainly in the subducting African plate and showed dominantly strike-slip faulting mechanisms. Slip distributions on the fault planes showed both complex and simple rupture propagation, depending on the source mechanism and faulting geometry. 
We calculated low stress-drop values (Δσ < 30 bars) for all earthquakes, implying typical interplate seismic activity in the region. Further, the results of the numerical simulations verified that damaging historical tsunamis along the Hellenic subduction zone threaten in particular the coastal plains of the islands of Crete and Rhodes, SW Turkey, Cyprus, the Levant, and the Nile Delta (Egypt). We therefore recommend that special care be taken in evaluating the tsunami risk of the Eastern Mediterranean region in future studies.
NASA Astrophysics Data System (ADS)
Bell, Rebecca; Henrys, Stuart; Sutherland, Rupert; Barker, Daniel; Wallace, Laura; Holden, Caroline; Power, William; Wang, Xiaoming; Morgan, Joanna; Warner, Michael; Downes, Gaye
2015-04-01
Over the last couple of decades we have learned that a whole spectrum of different fault slip behaviour takes place on subduction megathrust faults from stick-slip earthquakes to slow slip and stable sliding. Geophysical data, including seismic reflection data, can be used to characterise margins and fault zones that undergo different modes of slip. In this presentation we will focus on the Hikurangi margin, New Zealand, which exhibits marked along-strike changes in seismic behaviour and margin characteristics. Campaign and continuous GPS measurements reveal deep interseismic coupling and deep slow slip events (~30-60 km) at the southern Hikurangi margin. The northern margin, in contrast, experiences aseismic slip and shallow (<10-15 km) slow slip events (SSE) every 18-24 months with equivalent moment magnitudes of Mw 6.5-6.8. Updip of the SSE region two unusual megathrust earthquakes occurred in March and May 1947 with characteristics typical of tsunami earthquakes. The Hikurangi margin is therefore an excellent natural laboratory to study differential fault slip behaviour. Using 2D seismic reflection, magnetic anomaly and geodetic data we observe in the source areas of the 1947 tsunami earthquakes i) low amplitude interface reflectivity, ii) shallower interface relief, iii) bathymetric ridges, iv) magnetic anomaly highs and in the case of the March 1947 earthquake v) stronger geodetic coupling. We suggest that this is due to the subduction of seamounts, similar in dimensions to seamounts observed on the incoming Pacific plate, to depths of <10 km. We propose a source model for the 1947 tsunami earthquakes based on geophysical data and find that extremely low rupture velocities (c. 300 m/s) are required to model the observed large tsunami run-up heights (Bell et al. 2014, EPSL). 
Our study suggests that subducted topography can cause the nucleation of moderate earthquakes with complex, low-velocity rupture scenarios that enhance tsunami waves, and that the role of subducted rough topography in seismic hazard should not be underestimated. 2D seismic reflection data along the northern Hikurangi margin also image thick (c. 2 km) high-amplitude reflectivity zones (HRZ) coinciding broadly with the source areas of shallow SSEs. The HRZ may be the result of high fluid content within the subducting sediments, suggesting that fluids may exert an important control on the generation of SSEs by reducing effective stress (Bell et al. 2010, GJI). However, this hypothesis remains untested. In this presentation, using synthetic models, we will discuss planned future applications of an advanced seismic imaging technique, full-waveform inversion, integrated with drilling, at subduction margins like Hikurangi, to recover fault physical properties at high resolution in 3D and to examine the properties of heterogeneous fault zones.
Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake
Hayes, Gavin P.
2011-01-01
On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.
A phase coherence approach to estimating the spatial extent of earthquakes
NASA Astrophysics Data System (ADS)
Hawthorne, Jessica C.; Ampuero, Jean-Paul
2016-04-01
We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources, i.e., if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at different stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M < 1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to those of the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations from multiple components of a single station, which see the same apparent source time functions.
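The core of the inter-station phase coherence computation can be sketched for the idealized case of two co-located point sources: the cross-spectral phase at each station then depends only on the two source signatures, so it is identical across stations and the coherence stays near 1 at all frequencies. Everything below (station count, random wavelets, the scaling between events) is an illustrative toy setup, not the published algorithm:

```python
import cmath, random

def dft(x):
    """Naive discrete Fourier transform (adequate for this toy size)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

random.seed(1)
N, n_sta = 64, 5
# Each station has its own random "Green's function" wavelet; two co-located
# point sources produce records at a station that differ only by the source
# signature (here, event 2 is simply event 1 scaled by 0.5).
greens = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(n_sta)]
spec1 = [dft(g) for g in greens]
spec2 = [dft([0.5 * v for v in g]) for g in greens]

def phase_coherence(s1, s2, k):
    """Coherence of the cross-spectral phase across stations at freq index k."""
    cross = [s1[j][k] * s2[j][k].conjugate() for j in range(len(s1))]
    unit = [z / abs(z) for z in cross if abs(z) > 0.0]
    return abs(sum(unit)) / len(unit)

coh = [phase_coherence(spec1, spec2, k) for k in range(1, N // 2)]
# For true point sources the coherence is ~1 at every frequency; an extended
# source makes the apparent source time functions station-dependent, so the
# coherence falls off above a frequency set by the rupture dimension.
```

The frequency at which the coherence drops is what the method converts into an estimate of rupture extent.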
NASA Astrophysics Data System (ADS)
Gok, R.; Hutchings, L.
2004-05-01
We test a means to predict strong ground motion using the Mw 7.4 and Mw 7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz, and we demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and the three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 and 2,500 events. We also obtained seismic moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by simultaneously fitting these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small to moderate-sized earthquake (M < 4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
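The Brune-model deconvolution described above divides a recorded spectrum by a fitted omega-squared source spectrum, whitening the source so that what remains approximates an empirical Green's function. A minimal sketch of that spectral shape follows; the corner frequency and amplitudes are illustrative, not fitted values from the study:

```python
def brune_spectrum(f, omega0, fc):
    """Brune (1970) omega-squared displacement amplitude spectrum:
    flat at omega0 below the corner frequency fc, falling as f^-2 above it."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Illustrative deconvolution: a toy "recorded" amplitude at 10 Hz is divided
# by the fitted Brune model, leaving the path/site (Green's function) factor.
fc = 5.0                                          # assumed corner frequency, Hz
recorded = 2.0 * brune_spectrum(10.0, 1.0, fc)    # toy recorded amplitude
egf = recorded / brune_spectrum(10.0, 1.0, fc)    # source contribution removed
```

In practice this division is done across the whole 0.5-25 Hz band for each M < 4.0 recording, using the moment and corner frequency from the simultaneous inversion.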
A rapid estimation of tsunami run-up based on finite fault models
NASA Astrophysics Data System (ADS)
Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.
2014-12-01
Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori, but such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution of Fuentes et al. (2013), derived specifically for zones with a very well defined strike, e.g., Chile, Japan, and Alaska. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute run-up models for the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with peaks of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for Iquique. Considering recent advances in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.
Real time validation of GPS TEC precursor mask for Greece
NASA Astrophysics Data System (ADS)
Pulinets, Sergey; Davidenko, Dmitry
2013-04-01
Earlier studies of pre-earthquake ionospheric variations established that, for any specific site, these variations show a definite stability in their temporal behavior within a time interval of a few days before the seismic shock. This self-similarity (characteristic of phenomena observed close to the critical point of a system) permits us to consider these variations a good candidate for a short-term precursor. A physical mechanism for GPS TEC variations before earthquakes has been developed within the framework of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. Because of the different tectonic structures and different source mechanisms of earthquakes in different regions of the globe, every site has its own individual pre-earthquake behavior, which creates an individual "imprint" on the ionosphere at every given point. It is this "mask" of ionospheric variability before an earthquake at a given point that makes it possible to detect anomalous behavior of the ionospheric electron concentration not only through statistical processing but also by applying pattern recognition techniques, which facilitates the automatic recognition of short-term ionospheric precursors of earthquakes. Such a precursor mask was created using the GPS TEC variations around the times of 9 earthquakes with magnitudes from M6.0 to M6.9 that took place in Greece during 2006-2011. The major anomaly revealed in the relative deviation of the vertical TEC was a positive anomaly appearing at ~04 PM UT one day before the seismic shock and lasting nearly 12 hours, until ~04 AM UT. To validate this approach, the mask was checked in real-time monitoring of earthquakes in Greece with magnitudes greater than 4.5, starting from 1 December 2012. During this period (until 9 January 2013), 4 seismic shocks were registered, including the largest, M5.7, on 8 January. 
For all of them the mask confirmed its validity, and the 6 December event was predicted in advance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, R.B.; Nguyen, B.
Earthquake activity in the New Madrid Seismic Zone has been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35°-39°N and longitudes 87°-92°W. Most of these earthquakes occur within a 1.5° x 2° zone centered on the Missouri Bootheel. Source parameters of the larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth and seismic moment. Waveform modeling of broadband data is shown to be a powerful tool in defining these source parameters when used complementarily with regional seismic network data and, in addition, in verifying the correctness of previously published focal mechanism solutions.
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
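The abstract contrasts its fast search against an exact search; the exact (brute-force) baseline can be sketched as a nearest-neighbor match over a waveform database. The tiny database and L2 misfit below are illustrative assumptions; the actual system indexes far larger databases with a much faster search method:

```python
import math

def l2_misfit(a, b):
    """Euclidean distance between two equal-length waveforms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def exact_search(database, observed):
    """Brute-force nearest neighbor: return the source parameters of the
    database seismogram closest to the observed waveform (O(N) per query)."""
    return min(database, key=lambda entry: l2_misfit(entry[1], observed))[0]

# Toy database of (source parameters, synthetic waveform) pairs, standing in
# for a precomputed library of seismograms for candidate focal mechanisms.
db = [
    ({"strike": 0, "dip": 90}, [0.0, 1.0, 0.0, -1.0]),
    ({"strike": 45, "dip": 60}, [0.5, 0.5, -0.5, -0.5]),
    ({"strike": 90, "dip": 30}, [1.0, 0.0, -1.0, 0.0]),
]
best = exact_search(db, [0.9, 0.1, -0.9, -0.1])
```

The reported speedup of several thousand times comes from replacing this linear scan with an indexed search, analogous to how a web search engine avoids scanning every document.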
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential seismic source zone with the greatest contribution to the site hazard is identified from the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and historical earthquakes of that potential source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The scenario-earthquake response spectrum is lower than the probability-consistent response spectrum obtained by the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easier for practitioners to accept and provides a basis for the seismic design of hydraulic engineering projects.
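The final step, evaluating a ground-motion relation for the deaggregated scenario, can be sketched as below. The coefficients are placeholders with a broadly plausible form, NOT an actual NGA relation, and the M 6.5 at 20 km scenario is an assumed example.

```python
import numpy as np

# Illustrative attenuation form: ln SA(T) = a(T) + b(T)*M - c(T)*ln(R + 10).
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])   # s
a = np.array([-2.4, -2.2, -2.6, -3.2, -4.0])    # placeholder coefficients
b = np.array([0.90, 0.95, 1.00, 1.05, 1.10])
c = np.array([1.30, 1.25, 1.20, 1.15, 1.10])

def scenario_spectrum(mag, dist_km):
    """Median spectral acceleration (g) at each period for a scenario (M, R)."""
    return np.exp(a + b * mag - c * np.log(dist_km + 10.0))

# Scenario event identified from the PSHA deaggregation, e.g. M 6.5 at 20 km:
sa = scenario_spectrum(6.5, 20.0)
for T, s in zip(periods, sa):
    print(f"T = {T:3.1f} s   SA = {s:5.2f} g")
```

A real application would substitute published NGA coefficients (which also depend on site class, fault mechanism, and more terms) for the toy arrays above.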
NASA Astrophysics Data System (ADS)
Muhammad, Ario; Goda, Katsuichiro
2018-03-01
This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.
NASA Astrophysics Data System (ADS)
Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit
2017-10-01
Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks and seismic events of the Hindu Kush region have been analysed for estimation of source parameters. The Mw 7.8 Gorkha, Nepal earthquake of 25 April 2015 and six of its aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha main shock. Acceleration data from eight earthquakes that occurred in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both source and recording site as well as for site amplification. Strong-motion data of six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. The frequency-dependent Qβ(f) = 124 f^0.98 is computed at Ghuttu station by using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid search algorithm. The computed seismic moment, stress drop and source radius of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 N m, 7.1-50.6 bars and 3.55-36.70 km, respectively. The results match the available values obtained by other agencies.
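The Brune-model grid search can be sketched as follows. The synthetic spectrum, search grids, shear-wave speed, and the assumed seismic moment are illustrative stand-ins, not the study's data; the corner-frequency-to-radius and stress-drop formulas follow Brune's circular source model.

```python
import numpy as np

def brune(f, omega0, fc):
    """Brune omega-squared displacement source spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Synthetic "corrected" spectrum with a known answer, standing in for the
# attenuation- and site-corrected observation.
f = np.logspace(-1, 1.3, 200)                  # 0.1-20 Hz
true_omega0, true_fc = 2.0e-4, 0.8
rng = np.random.default_rng(1)
obs = brune(f, true_omega0, true_fc) * np.exp(0.05 * rng.standard_normal(f.size))

# Grid search over (Omega0, fc), minimising the log-spectral misfit.
best = (None, None, np.inf)
for om in np.logspace(-5, -3, 80):
    for fc in np.logspace(-1, 1, 80):
        misfit = np.sum((np.log(obs) - np.log(brune(f, om, fc))) ** 2)
        if misfit < best[2]:
            best = (om, fc, misfit)
om, fc, _ = best

# Corner frequency -> source radius -> stress drop (Brune model).
beta = 3500.0                        # shear-wave speed at the source (m/s), assumed
r = 2.34 * beta / (2 * np.pi * fc)   # source radius (m)
M0 = 8.2e16                          # seismic moment (N m), assumed known here
stress_drop = 7.0 * M0 / (16.0 * r ** 3)   # Pa
print(f"fc = {fc:.2f} Hz, r = {r/1000:.2f} km, stress drop = {stress_drop/1e5:.1f} bar")
```

Fitting in log-spectral space keeps the low-frequency plateau (which fixes Ω0, hence moment) and the high-frequency fall-off (which fixes fc) on an equal footing.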
Choy, G.L.; Boatwright, J.
2007-01-01
The rupture process of the Mw 9.1 Sumatra-Andaman earthquake lasted for approximately 500 sec, nearly twice as long as the teleseismic time windows between the P and PP arrival times generally used to compute radiated energy. In order to measure the P waves radiated by the entire earthquake, we analyze records that extend from the P-wave to the S-wave arrival times from stations at distances Δ > 60°. These 8- to 10-min windows contain the PP, PPP, and ScP arrivals, along with other multiply reflected phases. To gauge the effect of including these additional phases, we form the spectral ratio of the source spectrum estimated from extended windows (between TP and TS) to the source spectrum estimated from normal windows (between TP and TPP). The extended windows are analyzed as though they contained only the P-pP-sP wave group. We analyze four smaller earthquakes that occurred in the vicinity of the Mw 9.1 mainshock, with similar depths and focal mechanisms. These smaller events range in magnitude from an Mw 6.0 aftershock of 9 January 2005 to the Mw 8.6 Nias earthquake that occurred to the south of the Sumatra-Andaman earthquake on 28 March 2005. We average the spectral ratios for these four events to obtain a frequency-dependent operator for the extended windows. We then correct the source spectrum estimated from the extended records of the 26 December 2004 mainshock to obtain a complete or corrected source spectrum for the entire rupture process (~600 sec) of the great Sumatra-Andaman earthquake. Our estimate of the total seismic energy radiated by this earthquake is 1.4 × 10^17 J. When we compare the corrected source spectrum for the entire earthquake to the source spectrum from the first ~250 sec of the rupture process (obtained from normal teleseismic windows), we find that the mainshock radiated much more seismic energy in the first half of the rupture process than in the second half, especially over the period range from 3 sec to 40 sec.
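The window-correction scheme can be sketched numerically. The spectra and the multiplicative "extra phase" bias below are synthetic stand-ins with assumed shapes, not the study's data; the point is only the calibrate-then-divide logic.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.linspace(0.005, 0.5, 50)     # Hz, long-period band

# Stand-in for the bias introduced by the extra phases (PP, PPP, ScP, ...)
# present in the extended (P-to-S) windows but not the normal (P-to-PP) ones.
window_bias = 1.0 + 0.8 * np.exp(-f / 0.1)

# Four calibration events with similar depths and mechanisms: estimate the
# source spectrum from both window types and form the ratio.
ratios = []
for _ in range(4):
    normal = np.exp(rng.normal(0.0, 0.05, f.size))          # toy spectrum
    extended = normal * window_bias * np.exp(rng.normal(0.0, 0.05, f.size))
    ratios.append(extended / normal)

# Frequency-dependent operator = mean ratio over the calibration events.
operator = np.mean(ratios, axis=0)

# Divide the mainshock's extended-window spectrum by the operator to get
# the corrected source spectrum for the full ~600 s rupture.
mainshock_extended = np.exp(rng.normal(0.0, 0.05, f.size)) * window_bias
mainshock_corrected = mainshock_extended / operator
```

Averaging over several calibration events suppresses their individual source differences, so the operator isolates the window effect common to all of them.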
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Harada, Tomoya; Satake, Kenji; Ishibe, Takeo; Gusman, Aditya Riadi
2016-05-01
The July 2015 Mw 7.0 Solomon Islands tsunamigenic earthquake occurred ~40 km north of the February 2013 Mw 8.0 Santa Cruz earthquake. The proximity of the two epicenters provided a unique opportunity for a comparative study of their source mechanisms and tsunami generation. The 2013 earthquake was an interplate event with a thrust focal mechanism at a depth of 30 km, while the 2015 event was a normal-fault earthquake occurring at a shallow depth of 10 km in the overriding Pacific Plate. A combined use of tsunami and teleseismic data from the 2015 event revealed the north-dipping fault plane and a rupture velocity of 3.6 km/s. Stress transfer analysis revealed that the 2015 earthquake occurred in a region of increased Coulomb stress following the 2013 earthquake. Spectral deconvolution, using the 2015 tsunami as an empirical Green's function, indicated source periods of 10 and 22 min for the 2013 Santa Cruz tsunami.
NASA Astrophysics Data System (ADS)
Heraud, J. A.; Centa, V. A.; Bleier, T.
2017-12-01
During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction, or Benioff, zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. Additional work has been done, and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Data collected from several events in other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The analysis has been extended in time and to additional geographical areas. Only pulses associated with the occurrence of earthquakes are taken into account, in an area strongly associated with subduction-zone seismic events, and several pulse parameters have been used to estimate a function whose value relates to the magnitude of the earthquake. The results shown, including the animated data video, constitute additional work towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone. The method provides clearer evidence that electromagnetic precursors do convey physical, useful information prior to the advent of a seismic event.
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2015-04-01
Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The methods have been categorised into time-independent, time-dependent and hybrid methods, the last group comprising methods that use data beyond historical earthquake statistics. Such a categorization distinguishes purely statistical approaches, in which historical earthquake data are the only direct data source, from algorithms that incorporate further information, e.g. spatial data on fault distributions, or physical models such as static triggering, to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods that can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g. for the determination of the completeness magnitude, or whether the modified Omori law was used or not. Target temporal scales are identified, as well as the publication history. All these aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and an overview of the state of the art.
Petersen, M.D.; Dewey, J.; Hartzell, S.; Mueller, C.; Harmsen, S.; Frankel, A.D.; Rukstales, K.
2004-01-01
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and pre-historical time, as well as to identify segmentation models of faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations that were developed from seismic data obtained from the crustal interplate environment, crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for large distances that are needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment that are propagated into an intraplate tectonic environment. 
For the interplate and intraplate crustal earthquakes, we have applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with global earthquake data that we collected and modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we have used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of the ground motions derived for a hazard level at 2% probability of exceedance in 50 years. The largest contributors to hazard are from the Sumatran faults.
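The probabilistic machinery behind maps like these can be sketched for a single source zone. All numbers below (recurrence rates, the attenuation form and its coefficients, distance, sigma) are illustrative placeholders, not the study's calibrated models; the structure is the standard hazard-integral-plus-Poisson-conversion.

```python
import numpy as np
from scipy.stats import norm

# Truncated Gutenberg-Richter recurrence for one source zone (assumed rates).
mags = np.arange(5.0, 8.01, 0.1)
cum = 0.2 * 10 ** (-1.0 * (mags - 5.0))        # annual rate of M >= m
rates = cum - np.append(cum[1:], 0.0)          # annual rate per magnitude bin

R = 30.0                                        # source-to-site distance (km), assumed
sigma = 0.6                                     # ln-PGA aleatory std, assumed

def median_pga(m):
    """Illustrative attenuation (g); not a published relation."""
    return np.exp(-4.0 + 0.9 * m - 1.2 * np.log(R + 10.0))

# Hazard curve: annual rate of exceeding each PGA level, summed over bins.
pga_levels = np.logspace(-2, 0.5, 100)          # 0.01-3 g
annual_rate = np.zeros_like(pga_levels)
for m, lam in zip(mags, rates):
    p_exc = 1.0 - norm.cdf(np.log(pga_levels), np.log(median_pga(m)), sigma)
    annual_rate += lam * p_exc

# Poisson conversion to probability of exceedance in 50 years, then invert
# for the 2%-in-50-yr ground motion.
p50 = 1.0 - np.exp(-annual_rate * 50.0)
pga_2in50 = np.interp(0.02, p50[::-1], pga_levels[::-1])
print(f"PGA at 2% in 50 yr: {pga_2in50:.2f} g")
```

The full study repeats this integral over multiple source zones, faults, and weighted alternative attenuation models; the toy version shows why the 10%-in-50-yr motion falls below the 2% value, since it sits lower on the same monotonically decreasing hazard curve.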
Swarms of repeating stick-slip glacierquakes triggered by snow loading at Mount Rainier volcano
NASA Astrophysics Data System (ADS)
Allstadt, K.; Malone, S. D.; Shean, D. E.; Fahnestock, M. A.; Vidale, J. E.
2013-12-01
We have detected over 150,000 low-frequency (~1-5 Hz) repeating earthquakes over the past decade at Mount Rainier volcano by scanning continuous seismic data from the permanent seismic network. Most of these were previously undetected due to their small size (M<1), shallow locations, and emergent waveforms. The earthquakes are located high (>3000 m) on the glacier-covered part of the edifice. They occur primarily in week- to month-long swarms of activity that strongly correlate with precipitation, namely snowfall, with a lag of about 1-2 days. Furthermore, there is a linear relationship between inter-event repeat time and the size of the subsequent event - consistent with slip-predictable stick-slip behavior. This pattern suggests that the additional load imparted by the sudden added weight of snow during winter storms triggers a temporary change from smooth aseismic sliding to seismic stick-slip basal sliding in locations where basal conditions are close to frictional instability. This sensitivity is analogous to the triggering of repeating earthquakes due to tiny overall stress changes in more traditional tectonic environments (e.g., tremor modulated by tides, dynamic triggering of repeating earthquakes). Using coda-wave interferometry on stacks of the repeating waveforms of the families with the most events, we found that the sources move at speeds of ~1 m/day. Using a GAMMA ground-based radar interferometer, we collected spatially continuous line-of-sight velocities of several glaciers at Mount Rainier in both summer and late fall. We found that the faster parts of the glaciers also move at ~1 m/day or faster, even in late fall. Movement of the sources of these repeating earthquakes at glacial speeds indicates that the asperities are dirty patches that move with the ice rather than stationary bedrock bumps. 
The reappearance of some event families up to several years apart suggests that certain areas at the base of some glaciers persistently produce conditions favorable to this behavior. Stick-slip basal sliding is favored over other potential shallow moving source mechanisms, such as crevassing, unsteady fluid flow, and calving, because the source must be non-destructive, highly repeatable at regular intervals, large enough to be detected on multiple stations, lack strong spectral peaks, and have a plausible physical tie to the effects of winter precipitation. Identification of the source of these frequent signals offers a view of basal glacier processes, helps discriminate them from potentially alarming volcanic signals, and documents the effects of weather on the cryosphere.
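The slip-predictable relation cited above (event size proportional to the preceding inter-event time) can be sketched with synthetic data; the loading rate, time range, and noise level are assumptions, not the catalog's values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Under slip-predictable stick-slip, the asperity reloads at a steady rate,
# so each event releases the moment accumulated since the previous event:
# size ~ loading_rate * inter-event time.
loading_rate = 5.0e9                           # N m per day, assumed
dt = rng.uniform(0.1, 3.0, 200)                # inter-event times (days)
moment = loading_rate * dt * np.exp(0.1 * rng.standard_normal(dt.size))

# A linear fit through the (repeat time, event size) cloud recovers the rate.
slope, intercept = np.polyfit(dt, moment, 1)
print(f"fitted loading rate: {slope:.2e} N m/day")
```

In the observed swarms, a slope that jumps after snowfall would be the signature of the suddenly increased loading from the added snow weight.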
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
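A minimal illustration of the decorrelation misfit D = 1 - CC and the empirical log-normal likelihood built on it. The waveform, noise level, and the distribution moments (mu, s) are placeholders, not the values derived from the 900-event reference set.

```python
import numpy as np

rng = np.random.default_rng(4)

def decorrelation(obs, syn):
    """D = 1 - CC, used as the misfit instead of sample-wise lp norms."""
    return 1.0 - np.corrcoef(obs, syn)[0, 1]

# Toy broadband body-wave pair: synthetic plus "modelling error" noise.
t = np.linspace(0.0, 60.0, 1200)
syn = np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 40.0)
obs = syn + 0.2 * rng.standard_normal(t.size)

D = decorrelation(obs, syn)

# If ln D ~ Normal(mu, s) for acceptable solutions, the log of the
# log-normal density evaluated at this station's D scores the candidate
# source model within the Bayesian sampler.
mu, s = np.log(0.05), 0.5                     # assumed moments
log_like = (-0.5 * ((np.log(D) - mu) / s) ** 2
            - np.log(D * s * np.sqrt(2.0 * np.pi)))
print(f"D = {D:.3f}, log-likelihood = {log_like:.2f}")
```

Because CC is insensitive to absolute amplitude and small time shifts, D degrades gracefully under the systematic modelling errors that dominate realistic broadband fits, which is the robustness argument made above.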
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
NASA Astrophysics Data System (ADS)
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
A new source process for evolving repetitious earthquakes at Ngauruhoe volcano, New Zealand
NASA Astrophysics Data System (ADS)
Jolly, A. D.; Neuberg, J.; Jousset, P.; Sherburn, S.
2012-02-01
Since early 2005, Ngauruhoe volcano has produced repeating low-frequency earthquakes with evolving waveforms and spectral features which become progressively enriched in higher frequency energy during the period 2005 to 2009, with the trend reversing after that time. The earthquakes also show a seasonal cycle since January 2006, with peak numbers of events occurring in the spring and summer period and lower numbers of events at other times. We explain these patterns by the excitation of a shallow two-phase water/gas or water/steam cavity having temporal variations in volume fraction of bubbles. Such variations in two-phase systems are known to produce a large range of acoustic velocities (2-300 m/s) and corresponding changes in impedance contrast. We suggest that an increasing bubble volume fraction is caused by progressive heating of melt water in the resonant cavity system which, in turn, promotes the scattering excitation of higher frequencies, explaining both spectral shift and seasonal dependence. We have conducted a constrained waveform inversion and grid search for moment, position and source geometry for the onset of two example earthquakes occurring 17 and 19 January 2008, a time when events showed a frequency enrichment episode occurring over a period of a few days. The inversion and associated error analysis, in conjunction with an earthquake phase analysis show that the two earthquakes represent an excitation of a single source position and geometry. The observed spectral changes from a stationary earthquake source and geometry suggest that an evolution in both near source resonance and scattering is occurring over periods from days to months.
Observation of the seismic nucleation phase in the Ridgecrest, California, earthquake sequence
Ellsworth, W.L.; Beroza, G.C.
1998-01-01
Near-source observations of five M 3.8-5.2 earthquakes near Ridgecrest, California are consistent with the presence of a seismic nucleation phase. These earthquakes start abruptly, but then slow or stop before rapidly growing again toward their maximum rate of moment release. Deconvolution of instrument and path effects by empirical Green's functions demonstrates that the initial complexity at the start of the earthquake is a source effect. The rapid growth of the P-wave arrival at the start of the seismic nucleation phase supports the conclusion of Mori and Kanamori [1996] that these earthquakes begin without a magnitude-scaled slow initial phase of the type observed by Iio [1992, 1995].
The 2006 Java Earthquake revealed by the broadband seismograph network in Indonesia
NASA Astrophysics Data System (ADS)
Nakano, M.; Kumagai, H.; Miyakawa, K.; Yamashina, T.; Inoue, H.; Ishida, M.; Aoi, S.; Morikawa, N.; Harjadi, P.
2006-12-01
On May 27, 2006, local time, a moderate-size earthquake (Mw = 6.4) occurred in central Java. This earthquake caused severe damage near Yogyakarta City and killed more than 5,700 people. To estimate the source mechanism and location of this earthquake, we performed a waveform inversion of the broadband seismograms recorded by a nationwide seismic network in Indonesia (Realtime-JISNET). Realtime-JISNET is part of the broadband seismograph network developed through international cooperation among Indonesia, Germany, China, and Japan, aiming at improving the capability to monitor seismic activity and tsunami generation in Indonesia. Twelve stations in Realtime-JISNET were in operation when the earthquake occurred. We used the three-component seismograms from the two closest stations, which were located about 100 and 300 km from the source. In our analysis, we assumed a pure double couple as the source mechanism, thus reducing the number of free parameters in the waveform inversion. We could therefore stably estimate the source mechanism using the signals observed by a small number of seismic stations. We carried out a grid search with respect to strike, dip, and rake angles to investigate fault orientation and slip direction. We determined source-time functions of the moment-tensor components in the frequency domain for each set of strike, dip, and rake angles. We also conducted a spatial grid search to find the best-fit source location. The best-fit source was approximately 12 km SSE of Yogyakarta at a depth of 10 km below sea level, immediately below the area of extensive damage. The focal mechanism indicates that this earthquake was caused by compressive stress in the NS direction, with dominantly strike-slip motion. The moment magnitude (Mw) was 6.4. We estimated the seismic intensity in the areas of severe damage using the source parameters and an empirical attenuation relation for the averaged peak ground velocity (PGV) of horizontal seismic motion. 
We then calculated the instrumental modified Mercalli intensity (Imm) from the estimated PGV values. Our result indicates that strong ground motion with Imm of 7 or more occurred within 10 km of the earthquake fault, although the actual seismic intensity can be affected by shallow structural heterogeneity. We therefore conclude that the severe damage of the Java earthquake is attributable to the strong ground motion, which was primarily caused by the source located immediately below the populated areas.
Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com; Afnimar; Puspito, Nanang T.
This study characterizes the source mechanisms of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and seismic moment (M0), moment magnitude (Mw), rupture duration (To) and focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities by teleseismic wave signal processing of the initial phase of the P wave, with a bandpass filter of 0.001 Hz to 5 Hz. We used 84 broadband seismometers at teleseismic distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with Mw = 7.8 and the 17 July 2006 Pangandaran earthquake with Mw = 7.7 meet the criteria for a tsunami earthquake, with ratio Θ ≈ -6.1, long rupture duration To > 100 s and high tsunami waves H > 7 m. The 2 September 2009 Tasikmalaya earthquake, with Mw = 7.2, Θ = -5.1 and To = 27 s, is characterized as a small tsunamigenic earthquake.
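The discrimination logic can be sketched as follows. The Θ and duration cutoffs are in the spirit of the commonly used energy-deficiency criterion for slow "tsunami earthquakes", but the exact thresholds and the energy/moment pair in the last line are illustrative assumptions.

```python
import math

def theta(energy_j, moment_nm):
    """Theta = log10(E/M0), the radiated-energy-to-moment ratio."""
    return math.log10(energy_j / moment_nm)

def classify(theta_val, rupture_s):
    # An energy-deficient (slow) rupture with a long duration is flagged as
    # a "tsunami earthquake"; thresholds here are illustrative.
    if theta_val <= -5.5 and rupture_s > 100.0:
        return "tsunami earthquake"
    return "tsunamigenic earthquake"

# Values reported above:
print(classify(-6.1, 130.0))   # 1994 Banyuwangi / 2006 Pangandaran style events
print(classify(-5.1, 27.0))    # 2009 Tasikmalaya
# Theta from an assumed (E, M0) pair, e.g. E = 4e14 J, M0 = 5e20 N m:
print(round(theta(4.0e14, 5.0e20), 1))   # -6.1
```

The value of Θ is that it flags ruptures radiating anomalously little high-frequency energy for their moment, exactly the events whose tsunami severely exceeds what their surface-wave magnitude suggests.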
PAGER-CAT: A composite earthquake catalog for calibrating global fatality models
Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.
2009-01-01
We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. 
As with all catalogs, the values of some parameters listed in PAGER-CAT are highly uncertain, particularly the casualty numbers, which must be regarded as estimates rather than firm numbers for many earthquakes. Consequently, we encourage contributions from the seismology and earthquake engineering communities to further improve this resource via the Wikipedia page and personal communications, for the benefit of the whole community.
DOT National Transportation Integrated Search
2016-12-01
A large magnitude, long duration subduction earthquake is impending in the Pacific Northwest, which lies near the Cascadia Subduction Zone (CSZ). Great subduction zone earthquakes are the largest earthquakes in the world and are the sole source zo...
Testing new methodologies for short -term earthquake forecasting: Multi-parameters precursors
NASA Astrophysics Data System (ADS)
Ouzounov, Dimitar; Pulinets, Sergey; Tramutoli, Valerio; Lee, Lou; Liu, Tiger; Hattori, Katsumi; Kafatos, Menas
2014-05-01
We are conducting real-time tests involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely gas discharge, thermal infrared radiation, ionospheric electron density, and atmospheric temperature and humidity, which we believe are all associated with the earthquake preparation phase. We are testing a methodology capable of producing alerts in advance of major earthquakes (M > 5.5) in different regions of active seismicity and volcanism. During 2012-2013 we established a collaborative framework with the PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) projects for coordinated measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Greece, Turkey, Taiwan (ROC), and Kamchatka and Sakhalin (Russia). The current experiment provided a "stress test" opportunity to validate the physics-based earthquake precursor approach over regions of high seismicity. Our initial results are: (1) real-time tests have shown the presence of anomalies in the atmosphere and ionosphere before most of the significant (M > 5.5) earthquakes; (2) false positives exist, and their ratios differ by region, varying from 50% (Southern Italy) and 35% (California) down to 25% (Taiwan, Kamchatka and Japan), with a significant reduction of false positives as soon as at least two geophysical parameters are used together; (3) the main remaining problems relate to the systematic collection and real-time integration of pre-earthquake observations. 
Our findings suggest that real-time testing of physics-based pre-earthquake signals provides short-term predictive power (in all three important parameters: location, time and magnitude) for the occurrence of major earthquakes in the tested regions. This result encourages continued testing, with a more detailed analysis of false alarm ratios and a deeper understanding of the overall physics of earthquake preparation.
NASA Astrophysics Data System (ADS)
Garagash, I. A.; Lobkovsky, L. I.; Mazova, R. Kh.
2012-04-01
We study the generation of the strongest earthquakes (magnitudes near 9 and above) and the catastrophic tsunamis they induce, on the basis of a new approach to the generation process occurring in subduction zones during an earthquake. The need for such studies is underlined by the recent catastrophic underwater earthquake of 11 March 2011 close to the northeast coastline of Japan and the catastrophic tsunami that followed it, which caused enormous loss of life and colossal damage in Japan. Of essential importance here is that the strength of the earthquake (magnitude M = 9), which induced a tsunami with runup heights on the beach of up to 10 meters, was unexpected by all specialists. Our model of the interaction of the oceanic lithosphere with island-arc blocks in subduction zones, which takes into account incomplete stress discharge during the seismic process and further accumulation of elastic energy, explains the occurrence of the strongest mega-earthquakes, such as the catastrophic earthquake with its source in the Japan deep-sea trench in March 2011. In our model, broad possibilities for numerical simulation of the dynamical behaviour of an underwater seismic source are provided by a kinematic model of the seismic source, as well as by a numerical program we developed for calculating tsunami wave generation by dynamic and kinematic seismic sources. The method permits us to take into account the contribution of residual tectonic stress in lithospheric plates, which increases earthquake energy and is usually not taken into account to date.
Ching, K.-E.; Rau, R.-J.; Zeng, Y.
2007-01-01
A coseismic source model of the 2003 Mw 6.8 Chengkung, Taiwan, earthquake was well determined with 213 GPS stations, providing a unique opportunity to study the characteristics of coseismic displacements of a high-angle buried reverse fault. Horizontal coseismic displacements show fault-normal shortening across the fault trace. Displacements on the hanging wall reveal fault-parallel and fault-normal lengthening. The largest horizontal and vertical GPS displacements reached 153 and 302 mm, respectively, in the middle part of the network. Fault geometry and slip distribution were determined by inverting the GPS data using a three-dimensional (3-D) layered-elastic dislocation model. The slip is mainly concentrated within a 44 × 14 km slip patch centered at 15 km depth with a peak amplitude of 126.6 cm. Results from 3-D forward-elastic model tests indicate that the dome-shaped folding on the hanging wall is reproduced with fault dips greater than 40°. Compared with the rupture areas and average slips from slow slip earthquakes and a compilation of finite source models of 18 earthquakes, the Chengkung earthquake generated a larger rupture area and a lower stress drop, suggesting lower than average friction. Hence the Chengkung earthquake seems to be a transitional example between regular and slow slip earthquakes. The coseismic source model of this event indicates that the Chihshang fault is divided into a creeping segment in the north and a locked segment in the south. An average recurrence interval of 50 years for a magnitude 6.8 earthquake was estimated for the southern fault segment. Copyright 2007 by the American Geophysical Union.
Spatial and Temporal Stress Drop Variations of the 2011 Tohoku Earthquake Sequence
NASA Astrophysics Data System (ADS)
Miyake, H.
2013-12-01
The 2011 Tohoku earthquake sequence consists of foreshocks, the mainshock, aftershocks, and repeating earthquakes. Quantifying spatial and temporal stress drop variations is important for understanding M9-class megathrust earthquakes. The variability and the spatial and temporal pattern of stress drop provide basic information for rupture dynamics and are useful for source modeling. As noted in the ground motion prediction equations of Campbell and Bozorgnia [2008, Earthquake Spectra], mainshock-aftershock pairs often show a significant decrease in stress drop. Here we focus on strong motion records before and after the Tohoku earthquake and analyze source spectral ratios considering azimuth and distance dependency [Miyake et al., 2001, GRL]. Because of the limitation of station locations on land, spatial and temporal stress drop variations are estimated by adjusting shifts from the omega-squared source spectral model. The adjustment is based on stochastic Green's function simulations of source spectra considering azimuth and distance dependency. Since we assumed the same Green's functions for event pairs at each station, both the propagation path and site amplification effects cancel out. Although detailed studies of spatial and temporal stress drop variations have been performed [e.g., Allmann and Shearer, 2007, JGR], this study targets the relations between stress drop and both the progression of slow slip prior to the Tohoku earthquake reported by Kato et al. [2012, Science] and plate structures. Acknowledgement: This study is partly supported by ERI Joint Research (2013-B-05). We used the JMA unified earthquake catalogue and K-NET, KiK-net, and F-net data provided by NIED.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Türker, Tuğba, E-mail: tturker@ktu.edu.tr; Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr
The North Anatolian Fault Zone (NAFZ) is one of the most important strike-slip fault zones in the world and is located in a region of very high seismic activity. Very large earthquakes have been observed in the NAFZ from the past to the present. The aims of this study are to estimate the important parameters of the Gutenberg-Richter relationship (the a and b values) and, taking these parameters into account, to examine earthquakes between 1900 and 2015 in 10 different seismic source regions of the NAFZ. The occurrence probabilities and return periods of earthquakes in the fault zone in the coming years were then estimated, and the earthquake hazard of the NAFZ was assessed with the Poisson method. Region 2 experienced its largest earthquakes only in the historical period; no large earthquake has been observed there in the instrumental period. Two historical earthquakes (1766, MS = 7.3 and 1897, MS = 7.0) are included for Region 2 (Marmara Region), where a large earthquake is expected in the coming years. For the 10 seismic source regions, the a and b parameters were estimated from the cumulative number-magnitude relationship LogN = a - bM of Gutenberg and Richter. A homogeneous earthquake catalog for MS magnitudes equal to or larger than 4.0 is used for the period between 1900 and 2015. The catalog database was compiled from the International Seismological Centre (ISC) and the Boğaziçi University Kandilli Observatory and Earthquake Research Institute (KOERI): earthquake data from 1900 to 1974 were obtained from KOERI and ISC, and from 1974 to 2015 from KOERI. The probabilities of earthquake occurrence are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in the 10 seismic source regions. 
The highest earthquake occurrence probabilities among the 10 seismic source regions were estimated for the Tokat-Erzincan region (Region 9): a 99% probability of a magnitude 6.5 earthquake, with a return period of 24.7 years; a 92% probability for magnitude 7, with a return period of 39.1 years; an 80% probability for magnitude 7.5, with a return period of 62.1 years; and a 64% probability for magnitude 8, with a return period of 98.5 years. For the Marmara Region (Region 2), the estimates for the next 100 years are an 89% probability for magnitude 6, with a return period of 44.9 years; a 45% probability for magnitude 6.5, with a return period of 87 years; and a 45% probability for magnitude 7, with a return period of 168.6 years.
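The hazard numbers above follow from the standard Gutenberg-Richter and Poisson machinery the abstract describes: the annual rate of events with M ≥ m comes from 10^(a − bM), its reciprocal is the return period, and the chance of at least one event in t years assumes a Poisson process. A minimal sketch, using illustrative a and b values rather than the paper's fitted parameters:

```python
import math

def gr_annual_rate(a: float, b: float, m: float) -> float:
    """Annual rate of events with magnitude >= m, from log10 N = a - b*M."""
    return 10 ** (a - b * m)

def return_period(a: float, b: float, m: float) -> float:
    """Mean recurrence interval in years for events with magnitude >= m."""
    return 1.0 / gr_annual_rate(a, b, m)

def occurrence_probability(a: float, b: float, m: float, t_years: float) -> float:
    """P(at least one M >= m event in t_years), assuming a Poisson process."""
    return 1.0 - math.exp(-gr_annual_rate(a, b, m) * t_years)

# Hypothetical regional parameters, for illustration only:
a_val, b_val = 3.5, 0.8
print(f"Return period, M>=7.0: {return_period(a_val, b_val, 7.0):.1f} yr")
print(f"P(M>=7.0 within 100 yr): {occurrence_probability(a_val, b_val, 7.0, 100):.2f}")
```

A long return period does not imply a small probability over a comparable window: with a 126-year return period, the 100-year occurrence probability is still above one half.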
NASA Astrophysics Data System (ADS)
Takemura, Shunsuke; Maeda, Takuto; Furumura, Takashi; Obara, Kazushige
2016-05-01
In this study, the source location of the 30 May 2015 (Mw 7.9) deep-focus Bonin earthquake was constrained using P wave seismograms recorded across Japan. We focus on the propagation characteristics of high-frequency P waves. Deep-focus intraslab earthquakes typically show spindle-shaped seismogram envelopes with peak delays of several seconds and subsequent long-duration coda waves; however, both the main shock and aftershock of the 2015 Bonin event exhibited pulse-like P wave propagation with high apparent velocities (~12.2 km/s). Such P wave propagation features were reproduced by finite-difference method simulations of seismic wave propagation in the case of a slab-bottom source. The pulse-like P wave seismogram envelopes observed from the 2015 Bonin earthquake show that its source was located at the bottom of the Pacific slab at a depth of ~680 km, rather than within its middle or upper regions.
Atypical soil behavior during the 2011 Tohoku earthquake ( Mw = 9)
NASA Astrophysics Data System (ADS)
Pavlenko, Olga V.
2016-07-01
To understand the physical mechanisms generating abnormally high peak ground accelerations (PGA; >1 g) during the Tohoku earthquake, models of nonlinear soil behavior in the strong motion were constructed for 27 KiK-net stations located in the near-fault zones to the south of FKSH17. The data processing method used was developed by Pavlenko and Irikura, Pure Appl Geophys 160:2365-2379, 2003 and previously applied to studying soil behavior at vertical array sites during the 1995 Kobe (Mw = 6.8) and 2000 Tottori (Mw = 6.7) earthquakes. During the Tohoku earthquake, we did not observe widespread nonlinearity of soft soils, with reduction of shear moduli in soil layers at the beginning of strong motion and their recovery at the end, as is usually observed during strong earthquakes. Manifestations of soil nonlinearity and reduction of shear moduli during strong motion were observed at sites located close to the source, in coastal areas. At remote sites, where abnormally high PGAs were recorded, shear moduli in soil layers increased and reached their maxima at the moments of highest intensity of the strong motion, indicating soil hardening. Then, shear moduli decreased along with the decreasing intensity of the strong motion. At soft-soil sites, the reduction of shear moduli was accompanied by a step-like decrease of the predominant frequencies of motion. Evidently, the observed soil hardening at the moments of highest intensity of the strong motion contributed to the occurrence of the abnormally high PGAs recorded during the Tohoku earthquake.
Ogata, Y.; Jones, L.M.; Toda, S.
2003-01-01
Seismic quiescence has attracted attention as a possible precursor to a large earthquake. However, sensitive detection of quiescence requires accurate modeling of normal aftershock activity. We apply the epidemic-type aftershock sequence (ETAS) model, a natural extension of the modified Omori formula for aftershock decay that allows further clusters (secondary aftershocks) within an aftershock sequence. The Hector Mine aftershock activity has been normal, relative to the decay predicted by the ETAS model, during the 14 months of available data. In contrast, although the aftershock sequence of the 1992 Landers earthquake (M = 7.3), including the 1992 Big Bear earthquake (M = 6.4) and its aftershocks, fits the ETAS model very well up until about 6 months after the main shock, the activity then showed a clear lowering relative to the modeled rate (relative quiescence) that lasted nearly 7 years, leading up to the Hector Mine earthquake (M = 7.1) in 1999. Specifically, the relative quiescence occurred only in the shallow aftershock activity, down to depths of 5-6 km. The sequence of deeper events showed clear, normal aftershock activity well fitted to the ETAS model throughout the whole period. We consider several physical explanations for these results. Among them, we strongly suspect aseismic slip within the Hector Mine rupture source that could inhibit the crustal relaxation process within "shadow zones" of the Coulomb failure stress change. Furthermore, the aftershock activity of the 1992 Joshua Tree earthquake (M = 6.1) dropped sharply on the same day as the main shock, which can be explained by a similar scenario.
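The ETAS model used here extends the modified Omori formula by letting every event, not just the mainshock, spawn Omori-type aftershock activity, with productivity growing exponentially with magnitude. A minimal sketch of the standard conditional intensity; all parameter values (mu, K, c, p, alpha, m_ref) are purely illustrative, not the paper's fitted values:

```python
import math

def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate K/(t + c)^p at time t (days) after a mainshock."""
    return K / (t + c) ** p

def etas_intensity(t, history, mu=0.02, K=0.05, c=0.01, p=1.1, alpha=1.2, m_ref=4.0):
    """ETAS conditional intensity at time t: a constant background rate mu plus
    an Omori-type contribution from every earlier event (t_i, m_i), scaled by
    exp(alpha * (m_i - m_ref)) so larger events trigger more aftershocks."""
    lam = mu
    for t_i, m_i in history:
        if t_i < t:
            lam += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return lam

# Hypothetical catalog: (time in days, magnitude) for a mainshock and two aftershocks.
history = [(0.0, 7.3), (0.1, 6.4), (2.0, 5.0)]
print(f"Intensity 5 days after the mainshock: {etas_intensity(5.0, history):.3f} events/day")
```

Relative quiescence, as used in the abstract, is then a sustained shortfall of the observed event rate below this modeled intensity.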
Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory
NASA Astrophysics Data System (ADS)
Xu, Changyi; Sun, Wenke
2014-12-01
In this paper, we propose an approach to computing the coseismic change in the Earth's volume based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: shear, tensile and explosion sources. We conduct a case study for the 2004 Sumatra earthquake (Mw9.3), the 2010 Chile earthquake (Mw8.8), the 2011 Tohoku-Oki earthquake (Mw9.0) and the 2013 Okhotsk Sea earthquake (Mw8.3). The results show that mega-thrust earthquakes make the Earth expand, whereas earthquakes along normal faults make the Earth contract. We compare the volume changes computed for finite fault models and for a point source of the 2011 Tohoku-Oki earthquake (Mw9.0). The large difference between the results indicates that coseismic changes in the Earth's volume (or mean radius) depend strongly on the earthquake's focal mechanism, especially the depth and the dip angle. We then estimate the cumulative volume changes from historical earthquakes (Mw ≥ 7.0) since 1960, and obtain an expansion rate of the Earth's mean radius of about 0.011 mm yr-1.
NASA Astrophysics Data System (ADS)
Fine, Isaac V.; Cherniawsky, Josef Y.; Thomson, Richard E.; Rabinovich, Alexander B.; Krassovski, Maxim V.
2015-03-01
A major (Mw 7.7) earthquake occurred on October 28, 2012 along the Queen Charlotte Fault Zone off the west coast of Haida Gwaii (formerly the Queen Charlotte Islands). The earthquake was the second strongest instrumentally recorded earthquake in Canadian history and generated the largest local tsunami ever recorded on the coast of British Columbia. A field survey on the Pacific side of Haida Gwaii revealed maximum runup heights of up to 7.6 m at sites sheltered from storm waves and 13 m in a small inlet that is less sheltered from storms (Leonard and Bednarski 2014). The tsunami was recorded by tide gauges along the coast of British Columbia, by open-ocean bottom pressure sensors of the NEPTUNE facility at Ocean Networks Canada's cabled observatory located seaward of southwestern Vancouver Island, and by several DART stations located in the northeast Pacific. The tsunami observations, in combination with rigorous numerical modeling, enabled us to determine the physical properties of this event and to correct the location of the tsunami source with respect to the initial geophysical estimates. The initial model results were used to specify sites of particular interest for post-tsunami field surveys on the coast of Moresby Island (Haida Gwaii), while field survey observations (Leonard and Bednarski 2014) were used, in turn, to verify the numerical simulations based on the corrected source region.
NASA Astrophysics Data System (ADS)
Power, William; Clark, Kate; King, Darren N.; Borrero, Jose; Howarth, Jamie; Lane, Emily M.; Goring, Derek; Goff, James; Chagué-Goff, Catherine; Williams, James; Reid, Catherine; Whittaker, Colin; Mueller, Christof; Williams, Shaun; Hughes, Matthew W.; Hoyle, Jo; Bind, Jochen; Strong, Delia; Litchfield, Nicola; Benson, Adrian
2017-07-01
The 2016 Mw 7.8 Kaikōura earthquake was one of the largest earthquakes in New Zealand's historical record, and it generated the most significant local source tsunami to affect New Zealand since 1947. There are many unusual features of this earthquake from a tsunami perspective: the epicentre was well inland of the coast, multiple faults were involved in the rupture, and the greatest tsunami damage to residential property was far from the source. In this paper, we summarise the tectonic setting and the historical and geological evidence for past tsunamis on this coast, then present tsunami tide gauge and runup field observations of the tsunami that followed the Kaikōura earthquake. For the size of the tsunami, as inferred from the measured heights, the impact of this event was relatively modest, and we discuss the reasons for this, which include: the state of the tide at the time of the earthquake, the degree of co-seismic uplift, and the nature of the coastal environment in the tsunami source region.
NASA Astrophysics Data System (ADS)
Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said
2010-05-01
The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the East-West trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations on the heights and extent of past tsunamis and damage in coastal zones. We performed modelling of wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.
Earthquake Hoax in Ghana: Exploration of the Cry Wolf Hypothesis
Aikins, Moses; Binka, Fred
2012-01-01
This paper investigated whether people believe news of an impending earthquake from any source, in the context of the Cry Wolf hypothesis, as well as whether they believe news of any other imminent disaster from any source. We were also interested in the correlation between preparedness, risk perception and antecedents. This exploratory study consisted of interviews, literature and Internet reviews. Sampling was simple random, stratified by sex and residence type, with further stratification based on the residential classification used by the municipalities. The sample (N = 400) consisted of 195 males and 205 females. The study revealed that a person would believe news of an impending earthquake from any source (64.4%; model significance P = 0.000). It also showed that a person would believe news of any other impending disaster from any source (73.1%; P = 0.003). There is an association between background, risk perception and preparedness. Emergency preparedness is weak, and earthquake awareness needs to be reinforced. There is a critical need for public education on earthquake preparedness. The authors recommend developing an emergency response program for earthquakes, together with standard operating procedures for national risk communication through all media, including instant bulk messaging. PMID:28299086
SeismoDome: Sonic and visual representation of earthquakes and seismic waves in the planetarium
NASA Astrophysics Data System (ADS)
Holtzman, B. K.; Candler, J.; Repetto, D.; Pratt, M. J.; Paté, A.; Turk, M.; Gualtieri, L.; Peter, D. B.; Trakinski, V.; Ebel, D. S. S.; Gossmann, J.; Lem, N.
2017-12-01
Since 2014, we have produced four "Seismodome" public programs in the Hayden Planetarium at the American Museum of Natural History in New York City. To teach the general public about the dynamics of the Earth, we use a range of seismic data (seismicity catalogs, surface and body wave fields, ambient noise, free oscillations) to generate movies and sounds conveying aspects of the physics of earthquakes and seismic waves. The narrative aims to stretch people's sense of time and scale, starting with 2 billion years of convection, then zooming in on seismicity over days to twenty years at different length scales, then hours of global seismic wave propagation, all compressed into minute-long movies. To optimize the experience in the planetarium, the 180-degree fisheye screen corresponds directly to the surface of the Earth, such that the audience is inside the planet. The program consists of three main elements: (1) using sonified and animated seismicity catalogs, a comparison of several years of earthquakes on different plate boundaries conveys the dramatic differences in their dynamics and the nature of great and "normal" earthquakes; (2) animations of USArray data (based on "Ground Motion Visualizations" methods from IRIS but in 3D, with added sound) convey the basic observations of seismic wave fields, with which we raise questions about what they tell us about earthquake physics and the Earth's interior structure; (3) movies of spectral element simulations of global seismic wave fields synchronized with sonified natural data push these questions further, especially when viewed from the interior of the planet. Other elements include (4) sounds of the global ambient noise field coupled to movies of mean ocean wave height (related to the noise source) and (5) three months of free oscillations / normal modes ringing after the Tohoku earthquake. We use and develop a wide range of sonification and animation methods, written mostly in Python. 
Flat-screen versions of these movies are available on the Seismic Sound Lab (LDEO) website. Here, we will present a subset of the methods and an overview of the aims of the program.
A Fluid-driven Earthquake Cycle, Omori's Law, and Fluid-driven Aftershocks
NASA Astrophysics Data System (ADS)
Miller, S. A.
2015-12-01
Few models exist that predict the Omori law of aftershock rate decay, with rate-state friction being the only physics-based model. ETAS is a probabilistic model of cascading failures and is sometimes used to infer rate-state frictional properties. However, the (perhaps dominant) role of fluids in the earthquake process is being increasingly realised, so a fluid-based physical model for Omori's Law should be available. In this talk, I present a hypothesis for a fluid-driven earthquake cycle in which dehydration and decarbonation at depth provide continuous sources of buoyant high-pressure fluids that must eventually make their way back to the surface. The natural pathway for fluid escape is along plate boundaries, where in the ductile regime high-pressure fluids likely play an integral role in episodic tremor and slow slip earthquakes. At shallower levels, high-pressure fluids pool at the base of seismogenic zones, with the reservoir expanding in scale through the earthquake cycle. Late in the cycle, these fluids can invade and degrade the strength of the brittle crust and contribute to earthquake nucleation. The mainshock opens permeable networks that provide escape pathways for high-pressure fluids and generate aftershocks along these flow paths, while the aftershocks themselves create new pathways. Thermally activated precipitation then seals up these pathways, returning the system to a low-permeability environment and an effective seal during the subsequent tectonic stress buildup. I find that the multiplicative effect of an exponential dependence of permeability on effective normal stress, coupled with an Arrhenius-type, thermally activated exponential reduction in permeability, results in Omori's Law. I simulate this scenario using a very simple model that combines nonlinear diffusion with a step-wise increase in permeability when a Mohr-Coulomb failure condition is met, and allow permeability to decrease as an exponential function of time. 
I show very strong spatial correlations of the simulated evolved permeability and fluid pressure fields with aftershock hypocenters from the 1992 Landers and 1994 Northridge aftershock sequences, and reproduce the observed aftershock decay rates. Controls on the decay rates (the p-value) will also be discussed.
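The "very simple model" described above combines three ingredients: nonlinear pressure diffusion with permeability depending exponentially on pressure, a step permeability increase when a failure threshold (a crude Mohr-Coulomb proxy) is exceeded, and exponential resealing in time. The following is a toy 1-D caricature under made-up parameters, not the author's code; whether its event sequence decays like Omori's law depends entirely on those parameters:

```python
import numpy as np

def simulate(n=200, steps=6000, dt=0.01, k0=0.05,
             fail_thresh=0.4, k_boost=1.0, healing=0.02):
    """Toy 1-D fluid-driven aftershock model: nonlinear pressure diffusion,
    a step permeability increase at Coulomb-proxy failure, and exponential
    resealing of permeability in time. Returns failure ("aftershock") times."""
    p = np.zeros(n)
    p[0] = 1.0                       # overpressured reservoir at the fault root
    k = np.full(n, k0)               # low background permeability
    broken = np.zeros(n, dtype=bool) # each cell fails at most once (simplification)
    events = []
    for step in range(steps):
        t = step * dt
        d = k * np.exp(p)            # permeability rises exponentially with pressure
        flux = 0.5 * (d[:-1] + d[1:]) * np.diff(p)
        p[1:-1] += dt * np.diff(flux)
        p[0], p[-1] = 1.0, 0.0       # fixed source / far-field boundaries
        newly = (p > fail_thresh) & ~broken
        if newly.any():
            broken[newly] = True
            k[newly] = k_boost       # failure opens a permeable pathway
            events.extend([t] * int(newly.sum()))
        k *= np.exp(-healing * dt)   # thermally activated sealing
    return events

events = simulate()
print(len(events), "failure events")
```

Binning `events` in log-spaced time windows and fitting a power law would be the natural next step for comparing against a p-value; none of the parameter choices here are calibrated against the Landers or Northridge sequences.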
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. 
Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).
GPS-derived Coseismic deformations of the 2016 Aktao Ms6.7 earthquake and source modelling
NASA Astrophysics Data System (ADS)
Li, J.; Zhao, B.; Xiaoqiang, W.; Daiqing, L.; Yushan, A.
2017-12-01
On 25 November 2016, an Ms 6.7 earthquake occurred in Aktao, a county of Xinjiang, China. It was the largest earthquake to occur in the northeastern margin of the Pamir Plateau in the last 30 years. From GPS observations, we obtained the coseismic displacement field of this earthquake. The site of maximum displacement is located in the Muji Basin, 15 km south of the causative fault. The maximum deformation was 0.12 m downward, with a coseismic displacement of 0.10 m; our results indicate that the earthquake had the characteristics of dextral strike-slip and normal-fault rupture. Based on the GPS results, we inverted for the rupture distribution of the earthquake. The source model consists of two approximately independent slip zones at depths of less than 20 km; the maximum displacement of one zone is 0.6 m and of the other 0.4 m. The total seismic moment calculated from the geodetic inversion corresponds to Mw 6.6. The GPS-derived source model is basically consistent with that from seismic waveform inversion, and with the surface rupture distribution obtained from field investigation. According to our inversion calculation, the recurrence period of strong earthquakes similar to this one should be 30-60 years, and the seismic hazard of the eastern segment of the Muji fault deserves attention. This research is financially supported by the National Natural Science Foundation of China (Grant No. 41374030).
Barkan, R.; ten Brink, Uri S.; Lin, J.
2009-01-01
The great Lisbon earthquake of November 1st, 1755, with an estimated moment magnitude of 8.5-9.0, was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts, and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada). No reports were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This is in contrast with previously suggested sources such as the Marquês de Pombal Fault and the Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested earthquake source of the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers for tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S. 
East Coast from possible future earthquakes along the Azores-Iberia plate boundary indicate that sources west of the MTR and in the Gulf of Cadiz may affect the southeastern coast of the U.S. The Azores-Iberia plate boundary west of the MTR is characterized by strike-slip faults, not thrusts, but the Gulf of Cadiz may have thrust faults. Southern Florida seems to be at risk from sources located east of the MTR and south of the Gorringe Bank, but it is mostly shielded by the Bahamas. Higher-resolution near-shore bathymetry along the U.S. East Coast and the Caribbean, as well as a detailed study of potential tsunami sources in the central west part of the Horseshoe Plain, are necessary to verify our simulation results. © 2008 Elsevier B.V.
Dehydration-driven stress transfer triggers intermediate-depth earthquakes
NASA Astrophysics Data System (ADS)
Ferrand, T. P.; Schubnel, A.; Hilairet, N.; Incel, S.; Deldicque, D.; Labrousse, L.; Gasc, J.; Renner, J.; Wang, Y.; Green, H. W., II
2016-12-01
Intermediate-depth earthquakes (30-300 km) have been extensively documented within subducting oceanic slabs, but their physical mechanisms remain enigmatic. Earthquakes occur in both the upper and lower Wadati-Benioff planes of seismicity (UBP and LBP). The LBP is located in the mantle of the subducted oceanic lithosphere, 20-40 km below the plate interface. Several mechanisms have been proposed: dehydration embrittlement of antigorite, shear heating instabilities, and the reactivation of pre-existing shear zones. We dehydrated synthetic antigorite-olivine aggregates, a proxy for serpentinized mantle, during deformation at upper mantle conditions. Acoustic emissions (AEs) were recorded during dehydration of samples with antigorite contents as low as 5 vol.% and up to 50 vol.%, deformed at pressures of 1.1 GPa and 3.5 GPa, respectively. Source characteristics of these AEs are compatible with faults sealed by fluid-bearing micro-pseudotachylytes in the recovered samples, demonstrating that antigorite dehydration triggered dynamic shear failure of the olivine load-bearing network. These intermediate-depth earthquake analogs reconcile the apparent contradictions of previous laboratory studies and confirm that a small degree of mantle hydration, as suggested by seismic imaging, may suffice to generate LBP seismicity. We propose an alternative model to dehydration embrittlement in which dehydration-induced stress transfer, rather than fluid overpressure, triggers the embrittlement of mantle rocks.
About Block Dynamic Model of Earthquake Source.
NASA Astrophysics Data System (ADS)
Gusev, G. A.; Gufeld, I. L.
One may state that little progress has been made in earthquake prediction research. It is short-term prediction (on a diurnal time scale, with the location also predicted) that has practical meaning. The failure so far is due to the absence of adequate notions about the geological medium, in particular its block structure, especially within faults. Geological and geophysical monitoring provides the basis for treating the geological medium as an open, dissipative block system with limiting energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is most pronounced in the faults. In the background state, small blocks of the fault medium accommodate the sliding of large blocks along the faults. Under considerable variations of the ascending gas streams, however, chains of bound small blocks can form, so that a bound state of large blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of nonlinearly coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and various external actions imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered; this permitted study of the FPU recurrence (return to the initial state), and probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to nonlinearity and external perturbations was posed, the thresholds and the dependence of the lifetime of the chain were studied, and large fluctuations of the lifetimes were discovered. In the present paper, a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the case of strong dissipation, when oscillatory motion is suppressed, specific effects are discovered.
For noise action combined with steadily accumulating deformation, the dependence of the lifetime on the noise amplitude is investigated. For an initial shock, we chose amplitudes at which the shock itself determines the lifetime. In this case the lifetime turned out to depend non-monotonically on the noise amplitude ("temperature"): there is a domain of "temperatures" where the lifetime reaches a maximum. A comparison of different dissipation intensities was performed.
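The chain model described in this abstract can be illustrated with a minimal sketch. The Python fragment below integrates a damped, noise-driven Fermi-Pasta-Ulam (beta-type) chain with fixed ends; the force law is the standard FPU-beta form, but the integrator, parameter values, and function names are illustrative choices, not those of the paper.

```python
import numpy as np

def fpu_beta_step(x, v, dt, beta=1.0, gamma=0.05, noise=0.0, rng=None):
    """One semi-implicit Euler step of a damped, noise-driven FPU-beta
    chain with fixed ends (x[-1] = x[N] = 0)."""
    rng = rng or np.random.default_rng(0)
    xp = np.concatenate(([0.0], x, [0.0]))
    dl = xp[1:-1] - xp[:-2]   # left-bond stretch for each particle
    dr = xp[2:] - xp[1:-1]    # right-bond stretch for each particle
    # harmonic + cubic (beta) inter-particle forces, damping, and noise
    f = (dr - dl) + beta * (dr**3 - dl**3) - gamma * v
    f += noise * rng.standard_normal(x.size)
    v = v + dt * f
    x = x + dt * v
    return x, v
```

Driving the chain with an initial shock and stepping it forward under different noise amplitudes is one way to probe lifetime-versus-"temperature" behavior in the spirit of the study.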
NASA Astrophysics Data System (ADS)
Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian
2018-01-01
We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.
Compiling an earthquake catalogue for the Arabian Plate, Western Asia
NASA Astrophysics Data System (ADS)
Deif, Ahmed; Al-Shijbi, Yousuf; El-Hussain, Issa; Ezzelarab, Mohamed; Mohamed, Adel M. E.
2017-10-01
The Arabian Plate is surrounded by regions of relatively high seismicity. Accounting for this seismicity is of great importance for seismic hazard and risk assessments, seismic zoning, and land use. In this study, a homogeneous moment-magnitude (Mw) earthquake catalogue for the Arabian Plate is provided. The comprehensive, homogeneous catalogue spatially covers the entire Arabian Peninsula and neighboring areas, including all earthquake sources that can generate substantial hazard for the Arabian Plate mainland. The catalogue extends in time from 19 to 2015 with a total of 13,156 events, of which 497 are historical events. Four polygons covering the entire Arabian Plate were delineated, and different data sources, including special studies and local, regional, and international catalogues, were used to prepare the earthquake catalogue. Moment magnitudes (Mw) provided by the original sources were given the highest magnitude-type priority and introduced into the catalogue with their references. Earthquakes reported in magnitude scales other than Mw were converted to Mw by applying empirical relationships derived in the current or in previous studies. The catalogues of the four polygons were combined into two comprehensive earthquake catalogues covering the historical and instrumental periods. Duplicate events were identified and discarded. Finally, the catalogue was declustered so as to contain only independent events, and its completeness with time was investigated over different magnitude ranges.
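Homogenizing a catalogue to Mw, as described in this abstract, amounts to a magnitude-type priority rule plus empirical conversion relations. A schematic Python sketch follows; the conversion coefficients below are placeholders for illustration, not the relationships derived in the study.

```python
def to_mw(mag, mtype):
    """Convert a reported magnitude to Mw via an empirical linear
    relation Mw = a*m + b. Coefficients are illustrative placeholders."""
    coeffs = {"Mw": (1.0, 0.0), "Ms": (0.67, 2.07), "mb": (0.85, 1.03)}
    if mtype not in coeffs:
        raise ValueError(f"no conversion for magnitude type {mtype!r}")
    a, b = coeffs[mtype]
    return a * mag + b

def homogenize(reported):
    """Pick the highest-priority reported magnitude (Mw first, as in
    the study) and convert it to Mw."""
    for mtype in ("Mw", "Ms", "mb"):
        if mtype in reported:
            return to_mw(reported[mtype], mtype)
    raise ValueError("no convertible magnitude in record")
```

An event reported only as Ms 5.0 and mb 5.2 would be homogenized from its Ms value, since Ms outranks mb in the assumed priority list.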
Impact of the 2008 Wenchuan earthquake on river organic carbon provenance: Insight from biomarkers
NASA Astrophysics Data System (ADS)
Wang, Jin; Feng, Xiaojuan; Hilton, Robert; Jin, Zhangdong; Ma, Tian; Zhang, Fei; Li, Gen; Densmore, Alexander; West, A. Joshua
2017-04-01
Large earthquakes can trigger widespread landslides in active mountain belts, which can mobilize biospheric organic carbon (OC) from soil and vegetation. Rivers can erode and export biospheric particulate organic carbon (POC), which represents an export of ecosystem productivity and may act as a CO2 sink if buried in sedimentary deposits. Our previous work showed that the 2008 Mw 7.9 Wenchuan earthquake increased the discharge of biospheric OC by rivers, owing to the increased supply by earthquake-triggered landslides (Wang et al., 2016). However, while the OC derived from sedimentary rocks could be accounted for, the source of biospheric OC in rivers before and after the earthquake remained poorly constrained. Here we use suspended sediment samples collected from the Zagunao River before and after the Wenchuan earthquake and measure specific organic compounds, including fatty acids, lignin phenols, and glycerol dialkyl glycerol tetraether (GDGT) lipids. Combined with analyses of bulk elemental concentrations (C and N) and carbon isotopic ratios, the new data show differential export patterns for OC components derived from varied terrestrial sources. High-frequency sampling enabled us to explore how the biospheric OC source changed following the earthquake, helping to better understand the link between active tectonics and the carbon cycle. Our results are also important in revealing how sedimentary biomarker records may record past earthquakes.
Estimation of ground motion for Bhuj (26 January 2001; Mw 7.6) and for future earthquakes in India
Singh, S.K.; Bansal, B.K.; Bhattacharya, S.N.; Pacheco, J.F.; Dattatrayam, R.S.; Ordaz, M.; Suresh, G.; ,; Hough, S.E.
2003-01-01
Only five moderate and large earthquakes (Mw ≥ 5.7) in India (three in the Indian shield region and two in the Himalayan arc region) have given rise to multiple strong ground-motion recordings. Near-source data are available for only two of these events. The Bhuj earthquake (Mw 7.6), which occurred in the shield region, gave rise to useful recordings at distances exceeding 550 km. Because of the scarcity of the data, we use the stochastic method to estimate ground motions. We assume that (1) S waves dominate at R < 100 km and Lg waves at R ≥ 100 km, (2) Q = 508f^0.48 is valid for the Indian shield as well as the Himalayan arc region, (3) the effective duration is given by fc^-1 + 0.05R, where fc is the corner frequency and R is the hypocentral distance in kilometers, and (4) the acceleration spectra are sharply cut off beyond 35 Hz. We use two finite-source stochastic models. One is an approximate model that reduces to the ω²-source model at distances greater than about twice the source dimension. This model has the advantage that the ground motion is controlled by the familiar stress parameter, Δσ. In the other finite-source model, which is more reliable for near-source ground-motion estimation, the high-frequency radiation is controlled by the strength factor, sfact, a quantity that is physically related to the maximum slip rate on the fault. We estimate the Δσ needed to fit the observed Amax and Vmax data of each earthquake (which are mostly in the far field). The corresponding sfact is obtained by requiring that the predicted curves from the two models match each other in the far field up to a distance of about 500 km. The results show: (1) The Δσ that explains Amax data for shield events may be a function of depth, increasing from ~50 bars at 10 km to ~400 bars at 36 km. The corresponding sfact values range from 1.0 to 2.0. The Δσ values for the two Himalayan arc events are 75 and 150 bars (sfact = 1.0 and 1.4). (2) The Δσ 
required to explain Vmax data is, roughly, half the corresponding value for Amax, while the same sfact explains both sets of data. (3) The available far-field Amax and Vmax data for the Bhuj mainshock are well explained by Δσ = 200 and 100 bars, respectively, or, equivalently, by sfact = 1.4. The predicted Amax and Vmax in the epicentral region of this earthquake are 0.80 to 0.95 g and 40 to 55 cm/sec, respectively.
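The attenuation and duration assumptions quoted in this abstract translate directly into code. In the sketch below, only Q(f) = 508 f^0.48 and the effective duration fc^-1 + 0.05R are taken from the text; the function names are scaffolding.

```python
def q_factor(f_hz):
    """Frequency-dependent quality factor assumed in the study for the
    Indian shield and Himalayan arc: Q(f) = 508 * f**0.48."""
    return 508.0 * f_hz ** 0.48

def effective_duration(fc_hz, r_km):
    """Effective duration in seconds: 1/fc + 0.05*R, with fc the corner
    frequency (Hz) and R the hypocentral distance (km)."""
    return 1.0 / fc_hz + 0.05 * r_km
```

For a corner frequency of 0.1 Hz at 100 km hypocentral distance, the assumed duration is 10 + 5 = 15 s.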
Studies of earthquakes and microearthquakes using near-field seismic and geodetic observations
NASA Astrophysics Data System (ADS)
O'Toole, Thomas Bartholomew
The Centroid-Moment Tensor (CMT) method allows an optimal point-source description of an earthquake to be recovered from a set of seismic observations, and, for over 30 years, has been routinely applied to determine the location and source mechanism of teleseismically recorded earthquakes. The CMT approach is, however, entirely general: any measurements of seismic displacement fields could, in theory, be used within the CMT inversion formulation, so long as the treatment of the earthquake as a point source is valid for those data. We modify the CMT algorithm to enable a variety of near-field seismic observables to be inverted for the source parameters of an earthquake. The first two data types that we implement are provided by Global Positioning System receivers operating at sampling frequencies of 1 Hz and above. When deployed in the seismic near field, these instruments may be used as long-period strong-motion seismometers, recording displacement time series that include the static offset. We show that both the displacement waveforms, and static displacements alone, can be used to obtain CMT solutions for moderate-magnitude earthquakes, and that performing analyses using these data may be useful for earthquake early warning. We also investigate using waveform recordings, made by conventional seismometers deployed at the surface or by geophone arrays placed in boreholes, to determine CMT solutions, and their uncertainties, for microearthquakes induced by hydraulic fracturing. A similar waveform inversion approach could be applied in many other settings where induced seismicity and microseismicity occur.
NASA Astrophysics Data System (ADS)
Badawy, Ahmed; Horváth, Frank; Tóth, László
2001-01-01
From January 1995 to December 1997, about 74 earthquakes were located in the Pannonian basin and digitally recorded by a recently established network of seismological stations in Hungary. On reviewing the notable events, about 12 earthquakes were reported as felt, with maximum intensity varying between 4 and 6 MSK. The dynamic source parameters of these earthquakes have been derived from P-wave displacement spectra. The displacement source spectra obtained are characterised by relatively small values of corner frequency (f0), ranging between 2.5 and 10 Hz. The seismic moments range from 1.48×10^20 to 1.3×10^23 dyne cm, stress drops from 0.25 to 76.75 bar, fault lengths from 0.42 to 1.7 km, and relative displacements from 0.05 to 15.35 cm. The estimated source parameters are in good agreement with the scaling law for small earthquakes. The small stress drops of the studied earthquakes can be attributed to the low strength of crustal materials in the Pannonian basin; however, the stress drop values do not differ between earthquakes with thrust and normal faulting focal mechanism solutions. It can be speculated that an increase of the seismic activity in the Pannonian basin is to be expected in the long run, because extensional development has ceased and structural inversion is in progress. Seismic hazard assessment remains a delicate job due to inadequate knowledge of the seismoactive faults, particularly in the interior of the Pannonian basin.
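Source parameters of the kind listed in this abstract are commonly obtained from the corner frequency via the Brune (1970) circular-crack relations; a sketch under that standard model follows. The shear-wave velocity is an assumed value, and the study's own estimates may use different constants or a P-wave-specific formulation.

```python
import math

def brune_radius(f0_hz, beta_m_s=3500.0):
    """Brune (1970) source radius (m) from corner frequency:
    r = 2.34 * beta / (2 * pi * f0). beta is an assumed crustal
    shear-wave velocity."""
    return 2.34 * beta_m_s / (2.0 * math.pi * f0_hz)

def stress_drop(m0_nm, radius_m):
    """Static stress drop (Pa) for a circular crack:
    delta_sigma = 7 * M0 / (16 * r**3), with M0 in N*m."""
    return 7.0 * m0_nm / (16.0 * radius_m ** 3)
```

For the smallest events quoted above (f0 = 2.5 Hz, M0 = 1.48e20 dyne cm = 1.48e13 N m), these relations give a radius of a few hundred meters and a stress drop of well under a bar, consistent with the low stress drops reported.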
Stress Drop and Its Relationship to Radiated Energy, Ground Motion and Uncertainty
NASA Astrophysics Data System (ADS)
Baltay, A.
2014-12-01
Despite the seemingly diverse circumstances under which crustal earthquakes occur, scale-invariant stress drop and apparent stress, the ratio of radiated seismic energy to moment, are observed. The magnitude-independence of these parameters is central to our understanding of both earthquake physics and strong ground motion genesis. Estimates of stress drop and radiated energy, however, display large amounts of scatter, potentially masking any secondary trends in the data. We investigate sources of this uncertainty within the framework of constant stress drop and apparent stress. We first revisit estimates of energy and stress drop from a variety of earthquake observations and methods, for events ranging from magnitude ~2 to ~9. Using an empirical Green's function (eGf) deconvolution method, which removes the path and site effects, radiated energy and Brune stress drop are estimated both for regional events in the western US and in eastern Honshu, Japan, from the Hi-net network, and for teleseismically recorded global great earthquakes [Baltay et al., 2010, 2011, 2014]. In addition to eGf methods, ground-motion-based metrics for stress drop are considered, using both KiK-net data from Japan [Baltay et al., 2013] and the NGA-West2 data, a very well curated ground-motion database. Both the eGf-based stress drop estimates and those from the NGA-West2 database show a marked decrease in scatter, allowing us to identify deterministic secondary trends in stress drop. We find both an increasing stress drop with depth, and a stress drop about 30% larger on average for mainshock events than for on-fault aftershocks. While both of these effects are already included in some ground-motion prediction equations (GMPEs), many previous seismological studies have been unable to conclusively uncover these trends because of their considerable scatter. 
Elucidating these effects in the context of reduced and quantified epistemic uncertainty can help both seismologists and engineers understand the true aleatory variability of the earthquake source, which may be due to the complex and diverse circumstances under which these earthquakes occur and which we are as yet unable to model.
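Apparent stress, the energy-to-moment ratio central to this abstract, is simply mu * Er / M0. A one-line sketch follows; the rigidity value is a conventional crustal assumption, not a number from the study.

```python
def apparent_stress(radiated_energy_j, m0_nm, mu_pa=3.0e10):
    """Apparent stress (Pa) = mu * Er / M0. Scale invariance of this
    ratio is the magnitude-independence discussed in the abstract.
    mu is an assumed crustal rigidity of 30 GPa."""
    return mu_pa * radiated_energy_j / m0_nm
```

An event radiating 1e11 J with M0 = 1e18 N m would have an apparent stress of a few kPa under this assumed rigidity.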
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant
2012-01-01
QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle but important features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system and are accessible by users or various model applications. UAVSAR repeat-pass interferometry data products are added to the QuakeTables database and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models. 
Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes, and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.
Seismic Sources for the Territory of Georgia
NASA Astrophysics Data System (ADS)
Tsereteli, N. S.; Varazanashvili, O.
2011-12-01
The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure, and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformation, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformation in the region is largely determined by the wedge-shaped rigid Arabian block intensively indented into the relatively mobile Middle East-Caucasus region. Georgia is a partner in the ongoing regional project EMME, whose main objective is the calculation of earthquake hazard uniformly to the highest standards. One approach used in the project is probabilistic seismic hazard assessment, whose first required input is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between neotectonic structures of the region and earthquakes. The requirements of modern PSH software on fault geometry are very demanding; as our knowledge of active fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones characterized by more or less uniform seismicity. Poor knowledge of the processes occurring deep in the Earth is connected with the difficulty of direct measurement; from this point of view, the reliable data obtained from earthquake fault plane solutions are unique for understanding the current tectonic life of the investigated area. There are two methods of identifying seismic sources. The first is the seismotectonic approach, based on the identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of the maximum earthquake Mmax. 
In the second method, seismic sources are identified on the basis of structural geology, parameters of seismicity, and seismotectonics. We used this latter approach. To achieve this, it was necessary to solve the following problems: calculate the parameters of seismotectonic deformation; reveal regularities in the character of earthquake fault plane solutions; and use the obtained regularities to develop principles for establishing borders between various hierarchical and scale levels of seismic deformation fields and to give their geological interpretation. Three-dimensional matching of active faults, with their real geometrical dimensions, to earthquake sources has been investigated. Finally, each zone has been defined by its parameters: the geometry, the magnitude-frequency parameters, the maximum magnitude, and the depth distribution, as well as modern dynamical characteristics widely used for complex processes.
Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting
NASA Astrophysics Data System (ADS)
Donovan, J.; Jordan, T. H.
2011-12-01
Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. 
Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.
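The first and second spatial moments of weighted aftershock hypocenters mentioned above amount to a weighted centroid and covariance. A minimal sketch follows; the weighting scheme and function names are generic illustrations, not the authors' implementation.

```python
import numpy as np

def spatial_moments(points, weights=None):
    """First moment (weighted centroid) and second central moment
    (weighted covariance) of hypocenter coordinates. Characteristic
    rupture half-lengths follow from the covariance eigenvalues."""
    points = np.asarray(points, dtype=float)
    w = np.ones(len(points)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                      # normalize weights
    centroid = w @ points                # degree-one moment
    d = points - centroid
    cov = (w[:, None] * d).T @ d         # degree-two central moment
    return centroid, cov
```

Comparing such aftershock moments with a mainshock's finite moment tensor is one concrete way to relate rupture extent to the spatial distribution of aftershock probabilities, as the abstract proposes.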
NASA Astrophysics Data System (ADS)
Zha, X.; Dai, Z.; Lu, Z.
2015-12-01
The 2011 Hawthorne earthquake swarm occurred in the central Walker Lane zone, near the border between California and Nevada. The swarm included an Mw 4.4 event on April 13, an Mw 4.6 event on April 17, and an Mw 3.9 event on April 27. Due to the lack of near-field seismic instruments, it is difficult to obtain accurate source information from the seismic data for these moderate-magnitude events. ENVISAT InSAR observations captured the deformation caused mainly by three events during the 2011 Hawthorne earthquake swarm. The surface traces of the three seismogenic sources could be identified from the local topography and interferogram phase discontinuities, and the epicenters could be determined using the interferograms and the relocated earthquake distribution. An apparent earthquake migration is revealed by the InSAR observations and the earthquake distribution. Analysis and modeling of the InSAR data show that the three moderate-magnitude earthquakes were produced by slip on three previously unrecognized faults in the central Walker Lane. Two of the seismogenic sources are northwest-striking, right-lateral strike-slip faults with some thrust-slip component, and the other is a northeast-striking, thrust-slip fault with some strike-slip component. The former two faults are roughly parallel to each other and almost perpendicular to the latter. This spatial correlation between the three seismogenic faults and their nature suggest that the central Walker Lane has been undergoing southeast-northwest horizontal compressive deformation, consistent with the regional crustal movement revealed by GPS measurements. The Coulomb failure stresses on the fault planes were calculated using the preferred slip model and the Coulomb 3.4 software package. For the Mw 4.6 earthquake, the Coulomb stress change caused by the Mw 4.4 event was an increase of ~0.1 bar; for the Mw 3.9 event, the Coulomb stress change caused by the Mw 4.6 earthquake was an increase of ~1.0 bar. 
This indicates that each preceding earthquake may have triggered the subsequent one. Because no anomalous volcanic activity was observed during the 2011 Hawthorne earthquake swarm, we can rule out volcanic activity as the driver of these events; however, groundwater changes and mining in the epicentral zone may have contributed to the 2011 Hawthorne swarm.
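The Coulomb stress changes quoted above follow the standard resolved-stress recipe, ΔCFS = Δτ + μ'Δσn. A sketch with an assumed effective friction coefficient follows; the study itself used the Coulomb 3.4 package, and this fragment only illustrates the final combination step, not the elastic dislocation modeling that produces the stress components.

```python
def coulomb_stress_change(d_shear_bar, d_normal_bar, mu_eff=0.4):
    """Change in Coulomb failure stress (bar):
    dCFS = d_tau + mu' * d_sigma_n,
    with shear stress resolved in the slip direction and unclamping
    (tensile) normal stress taken positive. Positive dCFS promotes
    failure on the receiver fault. mu_eff = 0.4 is an assumed value."""
    return d_shear_bar + mu_eff * d_normal_bar
```

For example, a shear stress increase of 0.08 bar combined with 0.05 bar of unclamping yields a promoting ΔCFS of 0.1 bar, the order of the Mw 4.4 to Mw 4.6 interaction reported above.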
Investigation of Potential Triggered Tremor in Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Gonzalez-Huizar, H.; Velasco, A. A.; Peng, Z.
2012-12-01
Recent observations have shown that seismic waves generate transient stresses capable of triggering earthquakes and tectonic (or non-volcanic) tremor far from the original earthquake source. However, the mechanisms behind remotely triggered seismicity remain unclear. Triggered tremor signals can be particularly useful in investigating remote triggering processes, since in many cases the tremor pulses are clearly modulated by the passing surface waves. The temporal stress changes (magnitude and orientation) caused by seismic waves at the tremor source region can be calculated and correlated with tremor pulses, which allows the stresses involved in the triggering process to be explored. Some observations suggest that triggered and ambient tremor signals are generated under similar physical conditions; thus, investigating triggered tremor might also provide important clues on how, and under what conditions, ambient tremor signals are generated. In this work we present some of the results and techniques we employ in the search for potential cases of triggered tectonic tremor in Latin America and the Caribbean. This investigation includes: (1) triggered tremor detection, using specific signal filters; (2) localization of the sources, using less common techniques such as time-reversed signals; and (3) analysis of the stress conditions under which the tremor is generated, by modeling the dynamic stress associated with the triggering waves. Our results suggest that tremor can be dynamically triggered by both Love and Rayleigh waves and in a broad variety of tectonic environments, depending strongly on the dynamic stress amplitude and orientation. Investigating remotely triggered seismicity offers the opportunity to improve our knowledge of deformation mechanisms and the physics of rupture.
The music of earthquakes and Earthquake Quartet #1
Michael, Andrew J.
2013-01-01
Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.
What Can We Learn from a Simple Physics-Based Earthquake Simulator?
NASA Astrophysics Data System (ADS)
Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele
2018-03-01
Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each individual fault; and the last is the modeling of fault interaction through the Coulomb failure function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with almost periodic behavior that interact according to a Coulomb failure function model; (2) long-term behavior showing supercycles of seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears when a small degree of stochasticity is introduced. 
Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as long-term trend and synchronization among nearby coupled faults.
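The Coulomb-failure-function interaction used by the simulator can be sketched as a static stress-transfer check; the effective friction coefficient and the stress values below are illustrative assumptions, not values from the paper:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in the Coulomb failure function (same units as inputs).

    d_shear:  shear-stress change resolved onto the receiver fault,
              positive in the slip direction.
    d_normal: normal-stress change, positive for unclamping.
    mu_eff:   assumed effective friction coefficient.
    """
    return d_shear + mu_eff * d_normal

# dCFF > 0 moves the receiver fault closer to failure.
dcff = coulomb_stress_change(0.05, -0.02)   # MPa -> 0.042 MPa
```

In a simulator of this kind, a positive dCFF on a receiver fault advances its clock toward its next rupture, which is how the short-term clustering in conclusion (1) arises.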
Seismic gaps and source zones of recent large earthquakes in coastal Peru
Dewey, J.W.; Spence, W.
1979-01-01
The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. 
The earthquake hypocenters that form the basis of this study were relocated using station adjustments computed by the method of joint hypocenter determination. © 1979 Birkhäuser Verlag.
Urban Earthquake Shaking and Loss Assessment
NASA Astrophysics Data System (ADS)
Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.
2009-04-01
This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as subcontractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and using shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER Software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties and macroeconomic loss quantifiers) in urban areas.
The basic Shake Mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and for improvement of the shake map through incorporation of accelerometric and other macroseismic data (similar to the USGS ShakeMap System). The building inventory data for the Level 2 analysis consist of grid (geo-cell) based urban building and demographic inventories. For building grouping, the European building typology developed within the EU-FP5 RISK-EU project is used. The building vulnerability/fragility relationships can be user-selected from a list of applicable relationships developed on the basis of a comprehensive study. Both empirical and analytical relationships (based on the Coefficient Method, the Equivalent Linearization Method and the Reduction Factor Method of analysis) can be employed. Casualties in the Level 2 analysis are estimated based on the number of buildings in different damage states and the casualty rates for each building type and damage level. Modifications to the casualty rates can be made if necessary. The ELER Level 2 analysis includes calculation of direct monetary losses as a result of building damage, allowing for repair-cost estimations and specific investigations associated with earthquake insurance applications (PML and AAL estimations). ELER Level 2 loss results obtained for Istanbul for a scenario earthquake using different techniques will be presented, with comparisons using different earthquake damage assessment software. The urban earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of post-earthquake emergency response. The same software can also be used for scenario earthquake loss estimation, related Monte Carlo-type simulations and earthquake insurance applications.
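The Level 2 casualty step described above can be sketched as a sum over building types and damage states; the inventory counts and casualty rates below are hypothetical placeholders for illustration, not ELER values:

```python
# Hypothetical inventory of buildings already assigned to damage states,
# keyed by (building type, damage state) -- illustrative numbers only.
buildings_in_damage_state = {
    ("RC_frame", "extensive"): 120,
    ("RC_frame", "complete"): 30,
    ("masonry", "extensive"): 200,
    ("masonry", "complete"): 80,
}

# Assumed casualties per damaged building for each (type, damage state).
casualty_rate = {
    ("RC_frame", "extensive"): 0.05,
    ("RC_frame", "complete"): 0.60,
    ("masonry", "extensive"): 0.10,
    ("masonry", "complete"): 0.80,
}

# Total casualties: sum of (count x rate) over all type/damage combinations.
casualties = sum(n * casualty_rate[key]
                 for key, n in buildings_in_damage_state.items())
```

Real implementations further split casualties by severity level and time of day; this sketch only shows the basic count-times-rate aggregation.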
Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling
NASA Astrophysics Data System (ADS)
Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea
2017-12-01
The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend, in Romania, is a well-confined cluster of seismicity at intermediate depth (60 - 180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea subcrustal seismogenic region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
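Co-located groups of repetitive events such as those described above are typically identified through waveform similarity; a minimal sketch of correlation-based grouping follows (the 0.9 similarity threshold and greedy template assignment are assumptions for illustration, not the paper's procedure):

```python
import math

def corr_coeff(a, b):
    """Zero-lag correlation coefficient of two aligned, equal-length traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def group_repeaters(traces, threshold=0.9):
    """Greedy clustering: attach each trace to the first group whose
    template (first member) it matches above the threshold."""
    groups = []                          # each group: list of trace indices
    for i, tr in enumerate(traces):
        for g in groups:
            if corr_coeff(traces[g[0]], tr) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])           # no match -> start a new group
    return groups
```

Traces that are scaled copies of one another correlate near 1.0 and fall into the same group, mimicking repeating events with different moments at the same location.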
NASA Astrophysics Data System (ADS)
Monsalve-Jaramillo, Hugo; Valencia-Mina, William; Cano-Saldaña, Leonardo; Vargas, Carlos A.
2018-05-01
Source parameters of four earthquakes located within the Wadati-Benioff zone of the Nazca plate subducting beneath the South American plate in Colombia were determined. The seismic moments for these events were recalculated, and their approximate equivalent rupture areas, slip distributions and stress drops were estimated. The source parameters were obtained by deconvolving multiple events through teleseismic analysis of body waves recorded at long-period stations and with simultaneous inversion of P and SH waves. The calculated source time functions for these events showed different stages, suggesting that these earthquakes can reasonably be thought of as being composed of two subevents. Even though two of the overall focal mechanisms obtained yielded results similar to those reported in the CMT catalogue, the other two mechanisms showed a clear difference from those officially reported. Despite this, it is appropriate to mention that the mechanisms inverted in this work agree well with the expected orientation of faulting at that depth, as well as with the waveforms they are expected to produce. In some of the solutions achieved, one of the two subevents exhibited a focal mechanism considerably different from the total earthquake mechanism; this could be interpreted as the result of a slight deviation from the overall motion due to the complex stress field, as well as the possibility of a combination of different sources of energy release analogous to those that may occur in deeper earthquakes. In those cases, the subevents with a focal mechanism very different from the total earthquake mechanism had little contribution to the final solution and thus to the total amount of energy released.
NASA Astrophysics Data System (ADS)
Mendoza, M.; Ghosh, A.; Rai, S. S.
2017-12-01
The devastation brought on by the Mw 7.8 Gorkha earthquake in Nepal on 25 April 2015 was a stark reminder of the high earthquake risk along the Himalayan arc. It is therefore imperative to learn from the Gorkha earthquake and gain a better understanding of the state of stress in this fault regime, in order to identify areas that could produce the next devastating earthquake. Here, we focus on what is known as the "central Himalaya seismic gap". It is located in Uttarakhand, India, west of Nepal, where a large (> Mw 7.0) earthquake has not occurred in over 200 years [Rajendran, C.P., & Rajendran, K., 2005]. This 500 - 800 km long along-strike seismic gap has been poorly studied, mainly due to the lack of modern and dense instrumentation. It is especially concerning because it lies close to densely populated cities, such as New Delhi. In this study, we analyze a rich seismic dataset from a dense network consisting of 50 broadband stations that operated between 2005 and 2012. We use the STA/LTA filter technique to detect earthquake phases, and the latest tools contributed to the Antelope software environment, to develop a large and robust earthquake catalog containing thousands of precise hypocentral locations, magnitudes, and focal mechanisms. By refining those locations in HypoDD [Waldhauser & Ellsworth, 2000] to form tighter clusters of events using relative relocation, we can potentially illustrate fault structures in this region with high resolution. Additionally, using ZMAP [Wiemer, S., 2001], we perform a variety of statistical analyses to understand the variability and nature of seismicity occurring in the region. Generating a large and consistent earthquake catalog not only brings to light the physical processes controlling the earthquake cycle in a Himalayan seismogenic zone, it also illustrates how stresses are building up along the décollement and the faults that stem from it.
With this new catalog, we aim to reveal fault structure, study seismicity patterns, and assess the potential seismic hazard of the central Himalaya seismic gap.
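The STA/LTA phase-detection step used to build such a catalog compares a short-term average of the signal envelope with a long-term average; a minimal pure-Python sketch follows (window lengths and the trigger threshold are illustrative assumptions, not values from the study):

```python
def sta_lta(trace, nsta, nlta):
    """Classic STA/LTA ratio on |amplitude|.

    Returns one ratio per sample; ratios are 0 until the LTA window
    (nlta samples) is full. A phase is declared where the ratio
    exceeds a chosen threshold (e.g. 4.0).
    """
    ratios = [0.0] * len(trace)
    for i in range(nlta, len(trace)):
        sta = sum(abs(x) for x in trace[i - nsta + 1: i + 1]) / nsta
        lta = sum(abs(x) for x in trace[i - nlta + 1: i + 1]) / nlta
        ratios[i] = sta / lta if lta > 0 else 0.0
    return ratios

# a step from background level 1 to amplitude 10 produces a clear peak
# once the short window sits entirely on the arrival
trigger = sta_lta([1.0] * 50 + [10.0] * 10, 5, 40)
```

Production detectors use recursive averages for speed and separate trigger-on/trigger-off thresholds; the nested-loop form above is only meant to show the principle.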
Earthquake-induced ground failures in Italy from a reviewed database
NASA Astrophysics Data System (ADS)
Martino, S.; Prestininzi, A.; Romeo, R. W.
2013-05-01
A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground-level changes triggered by earthquakes of Mercalli intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (URL: http://www.ceri.uniroma1.it/cn/index.do?id=230&page=55) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the "Sapienza" University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground-level changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2007-07-10
The objectives of this study are to improve low-magnitude regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately our knowledge of source scaling at small magnitudes (i.e., mb < ~4.0) is poorly resolved. It is not clear whether different studies obtain contradictory results because they analyze different earthquakes, or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies taken from half-way around the world at inter-plate regions. We investigate earthquake sources and scaling from different tectonic settings, comparing direct and coda wave analysis methods. We begin by developing and improving the two different methods, and then in future years we will apply them both to each set of earthquakes. Analysis of locally recorded, direct waves from events is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth. But there are only a limited number of earthquakes that are recorded locally, by sufficient stations to give good azimuthal coverage, and have very closely located smaller earthquakes that can be used as an empirical Green's function (EGF) to remove path effects.
In contrast, coda waves average radiation from all directions so single-station records should be adequate, and previous work suggests that the requirements for the EGF event are much less stringent. We can study more earthquakes using the coda-wave methods, while using direct wave methods for the best recorded subset of events so as to investigate any differences between the results of the two approaches. Finding 'perfect' EGF events for direct wave analysis is difficult, as is ascertaining the quality of a particular EGF event. We develop a multi-taper method to obtain time-domain source-time-functions by frequency division. If an earthquake and EGF event pair are able to produce a clear, time-domain source pulse then we accept the EGF event. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We use the well-recorded sequence of aftershocks of the M5 Au Sable Forks, NY, earthquake to test the method and also to obtain some of the first accurate source parameters for small earthquakes in eastern North America. We find that the stress drops are high, confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We simplify and improve the coda wave analysis method by calculating spectral ratios between different sized earthquakes. We first compare spectral ratio performance between local and near-regional S and coda waves in the San Francisco Bay region for moderate-sized events. The average spectral ratio standard deviations using coda are ~0.05 to 0.12, roughly a factor of 3 smaller than direct S-waves for 0.2 < f < 15.0 Hz. Also, direct wave analysis requires collocated pairs of earthquakes whereas the event-pairs (Green's function and target events) can be separated by ~25 km for coda amplitudes without any appreciable degradation.
We then apply the coda spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks. We observe a clear departure from self-similarity, consistent with previous studies using similar regional datasets.
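The spectral-ratio modeling described above fits an omega-squared (Brune) model to the ratio of a target event's spectrum over the eGf spectrum, which cancels path and site terms and leaves both corner frequencies. A minimal grid-search sketch on noise-free synthetic ratios (all numerical values are illustrative assumptions):

```python
import math

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Omega-squared (Brune) spectral ratio, large event over eGf."""
    return moment_ratio * (1 + (f / fc2) ** 2) / (1 + (f / fc1) ** 2)

def fit_corner_freqs(freqs, observed, moment_ratio, fc_grid):
    """Grid-search the corner-frequency pair minimizing the log-ratio
    misfit; the moment ratio is held fixed (e.g. from catalog moments)."""
    best = None
    for fc1 in fc_grid:
        for fc2 in fc_grid:
            if fc2 <= fc1:       # the smaller eGf event has the higher corner
                continue
            misfit = sum((math.log10(o) -
                          math.log10(brune_ratio(f, moment_ratio, fc1, fc2))) ** 2
                         for f, o in zip(freqs, observed))
            if best is None or misfit < best[0]:
                best = (misfit, fc1, fc2)
    return best[1], best[2]

# synthetic check: recover fc1 = 1 Hz, fc2 = 8 Hz from noise-free ratios
freqs = [0.2 * k for k in range(1, 76)]                 # 0.2 - 15 Hz
obs = [brune_ratio(f, 1000.0, 1.0, 8.0) for f in freqs]
fc1, fc2 = fit_corner_freqs(freqs, obs, 1000.0,
                            [0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # -> (1.0, 8.0)
```

Real applications fit noisy ratios over a continuous parameter space, but the flat low-frequency level (the moment ratio) and the two roll-offs behave exactly as in this sketch.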
Study of the Seismic Source in the Jalisco Block
NASA Astrophysics Data System (ADS)
Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.; Ochoa, J.; Cruz, L. H.
2013-05-01
Directly measuring an earthquake's fault dimensions, fault orientation and slip direction is a complicated task; a better approach uses the seismic-wave spectrum and the direction of P-wave first motions observed at each station. With these methods we can estimate seismic source parameters such as the stress drop, the corner frequency (which is linked to the rupture duration), the fault radius (for the particular case of a circular fault), the rupture area, the seismic moment, the moment magnitude and the focal mechanisms. The study area in which the source parameters were estimated comprises the complex tectonic configuration of the Jalisco block, which is delimited by the Mesoamerican trench to the west, the Colima graben to the south and the Tepic-Zacoalco rift to the north. The data were recorded by the MARS (Mapping the Riviera Subduction Zone) and RESAJ networks. MARS had 50 stations installed in the Jalisco block and operated from January 1, 2006 until June 2007; the magnitudes of its events ranged from 3 to 6.5 mb. RESAJ has 10 stations within the state of Jalisco and has recorded continuously since October 2011. Before applying the method we first remove the trend, the mean and the instrument response, and correct for attenuation; we then manually pick the S wave and use the multitaper method to obtain its spectrum, from which we estimate the corner frequency and the spectral level. We substitute these values into the equations of the Brune model to calculate the source parameters. To calculate focal mechanisms, the HASH software is used, which determines the most likely mechanism. The main purpose of this study is to estimate earthquake source parameters in order to help understand the physics of the earthquake rupture mechanism in the area.
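The Brune-model step can be sketched as follows: the corner frequency fixes a source radius, and with the seismic moment a stress drop and moment magnitude follow. The shear-wave speed and the example moment/corner values are assumptions for illustration:

```python
import math

def brune_params(seismic_moment, corner_freq, beta=3500.0):
    """Source radius (m), stress drop (Pa) and Mw from the Brune model.

    seismic_moment in N·m, corner_freq in Hz; beta is an assumed
    shear-wave speed in m/s.
    """
    r = 2.34 * beta / (2 * math.pi * corner_freq)     # Brune (1970) radius
    stress_drop = 7 * seismic_moment / (16 * r ** 3)  # circular-crack formula
    mw = (2.0 / 3.0) * (math.log10(seismic_moment) - 9.05)
    return r, stress_drop, mw

# e.g. a hypothetical M0 = 1e15 N·m event with a 2 Hz corner frequency:
# r ~ 650 m, stress drop ~ 1.6 MPa, Mw ~ 3.97
r, dsigma, mw = brune_params(1e15, 2.0)
```

The spectral level measured from the multitaper spectrum yields the moment (after geometrical-spreading and radiation-pattern corrections); this sketch starts from the moment directly.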
Kirby, Stephen; Scholl, David; von Huene, Roland E.; Wells, Ray
2013-01-01
Tsunami modeling has shown that tsunami sources located along the Alaska Peninsula segment of the Aleutian-Alaska subduction zone have the greatest impacts on southern California shorelines by raising the highest tsunami waves for a given source seismic moment. The most probable sector for a Mw ~ 9 source within this subduction segment is between Kodiak Island and the Shumagin Islands in what we call the Semidi subduction sector; these bounds represent the southwestern limit of the 1964 Mw 9.2 Alaska earthquake rupture and the northeastern edge of the Shumagin sector that recent Global Positioning System (GPS) observations indicate is currently creeping. Geological and geophysical features in the Semidi sector that are thought to be relevant to the potential for large magnitude, long-rupture-runout interplate thrust earthquakes are remarkably similar to those in northeastern Japan, where the destructive Mw 9.1 tsunamigenic earthquake of 11 March 2011 occurred. In this report we propose and justify the selection of a tsunami source seaward of the Alaska Peninsula for use in the Tsunami Scenario that is part of the U.S. Geological Survey (USGS) Science Application for Risk Reduction (SAFRR) Project. This tsunami source should have the potential to raise damaging tsunami waves on the California coast, especially at the ports of Los Angeles and Long Beach. Accordingly, we have summarized and abstracted slip distribution from the source literature on the 2011 event, the best characterized for any subduction earthquake, and applied this synoptic slip distribution to the similar megathrust geometry of the Semidi sector. The resulting slip model has an average slip of 18.6 m and a moment magnitude of Mw = 9.1. The 2011 Tohoku earthquake was not anticipated, despite Japan having the best seismic and geodetic networks in the world and the best historical record in the world over the past 1,500 years. 
What was lacking was adequate paleogeologic data on prehistoric earthquakes and tsunamis, a data gap that also presently applies to the Alaska Peninsula and the Aleutian Islands. Quantitative appraisal of potential tsunami sources in Alaska requires such investigations.
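The quoted average slip of 18.6 m and moment magnitude of Mw 9.1 can be cross-checked with the standard moment definition M0 = μAD. The rigidity and rupture dimensions below are assumed round numbers for illustration, not values taken from the report:

```python
import math

MU = 40e9        # assumed rigidity, Pa
LENGTH = 650e3   # assumed along-strike rupture length, m
WIDTH = 100e3    # assumed downdip rupture width, m
SLIP = 18.6      # average slip quoted in the report, m

m0 = MU * LENGTH * WIDTH * SLIP                 # seismic moment, N·m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)      # moment magnitude, ~9.1
```

With these assumed dimensions the computed magnitude lands at roughly Mw 9.1, consistent with the slip model described above.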
Seismological investigation of September 09 2016, North Korea underground nuclear test
NASA Astrophysics Data System (ADS)
Gaber, H.; Elkholy, S.; Abdelazim, M.; Hamama, I. H.; Othman, A. S.
2017-12-01
On Sep. 9, 2016, a seismic event of mb 5.3 took place in North Korea. This event was reported as a nuclear test. In this study, we applied a number of discrimination techniques that facilitate the ability to distinguish between explosions and earthquakes on the Korean Peninsula. The differences between explosions and earthquakes are due to variations in source dimension, source depth and source mechanism, or a combination of them. There are many seismological differences between nuclear explosions and earthquakes, but not all of them are detectable at large distances or applicable to every earthquake and explosion. The discrimination methods used in the current study include the seismic source location, source depth, differences in frequency content, complexity versus spectral ratio, and Ms:mb differences between earthquakes and explosions. The Sep. 9, 2016, event is located in the region of the North Korea nuclear test site at essentially zero depth, which suggests that it is a nuclear explosion. Comparison between the P wave spectra of the nuclear test and the Sep. 8, 2000, North Korea earthquake (mb 4.9) shows that the spectra of the two events are nearly the same. The results of applying the theoretical Brune model to the P wave spectra of both the explosion and the earthquake show that the explosion exhibits a larger corner frequency than the earthquake, reflecting the different nature of the sources. The complexity and spectral ratio were also calculated from waveform data recorded at a number of stations in order to investigate the relation between them. The observed classification percentage of this method is about 81%. Finally, the mb:Ms method is also investigated. We calculate mb and Ms for the Sep. 9, 2016, explosion and compare the result with the mb:Ms chart obtained from previous studies. This method performs well for this explosion.
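The mb:Ms discriminant mentioned above exploits the relative deficiency of surface-wave excitation by explosions for a given body-wave magnitude; a minimal sketch follows, where the screening offset and the example magnitude pairs are assumptions for illustration (operational screens calibrate the line regionally):

```python
def mb_ms_screen(mb, ms, offset=1.0):
    """Flag an event as explosion-like when Ms falls below mb - offset.

    The offset is an illustrative value; real screening lines are fit
    to regional populations of known earthquakes and explosions.
    """
    return ms < mb - offset

mb_ms_screen(5.3, 3.5)   # True  -> explosion-like (surface-wave deficit)
mb_ms_screen(4.9, 4.6)   # False -> earthquake-like
```

Events near the line remain ambiguous, which is why the study combines this screen with depth, spectral and complexity criteria.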
NASA Astrophysics Data System (ADS)
Lui, S. K. Y.; Huang, Y.
2017-12-01
A clear understanding of the source physics of induced seismicity is key to effective seismic hazard mitigation. In particular, resolving the rupture processes of induced earthquakes can shed light on the stress state prior to the main shock, as well as on the ground motion response. Recent numerical models suggest that, compared to their tectonic counterparts, induced earthquake ruptures are more prone to propagate unilaterally toward the injection well, where fluid pressure is high. However, this also depends on the location of the injection relative to the fault and has yet to be compared with field data. In this study, we utilize the rich pool of seismic data in the central US to constrain the rupture processes of major induced earthquakes. By implementing a forward-modeling method, we take smaller earthquake recordings as empirical Green's functions (eGf) to simulate the rupture direction of the beginning motion generated by large events. One advantage of the empirical approach is that it bypasses the fundamental difficulty of resolving path and site effects. We select eGf events that are close to the target events both in space and time. For example, we use a Mw 3.6 aftershock approximately 3 km from the 2011 Mw 5.7 earthquake in Prague, OK as its eGf event. Preliminary results indicate a southwest rupture for the Prague main shock, which possibly implies a higher fluid pressure concentration on the northeast end of the fault prior to the rupture. We will present further results on other Mw > 4.5 earthquakes in the states of Oklahoma and Kansas. With additional seismic stations installed in the past few years, events such as the 2014 Mw 4.9 Milan earthquake and the 2016 Mw 5.8 Pawnee earthquake are potential candidates with useful eGfs, as they both have good data coverage and a substantial number of nearby aftershocks. We will discuss the implications of our findings for the causative relationship between injection operations and the induced rupture process.
NASA Astrophysics Data System (ADS)
Mendoza, Carlos
1993-05-01
The distributions and depths of coseismic slip are derived for the October 25, 1981 Playa Azul and September 21, 1985 Zihuatanejo earthquakes in western Mexico by inverting the recorded teleseismic body waves. Rupture during the Playa Azul earthquake appears to have occurred in two separate zones both updip and downdip of the point of initial nucleation, with most of the slip concentrated in a circular region of 15-km radius downdip from the hypocenter. Coseismic slip occurred entirely within the area of reduced slip between the two primary shallow sources of the Michoacan earthquake that occurred on September 19, 1985, almost 4 years later. The slip of the Zihuatanejo earthquake was concentrated in an area adjacent to one of the main sources of the Michoacan earthquake and appears to be the southeastern continuation of rupture along the Cocos-North America plate boundary. The zones of maximum slip for the Playa Azul, Zihuatanejo, and Michoacan earthquakes may be considered asperity regions that control the occurrence of large earthquakes along the Michoacan segment of the plate boundary.
NASA Astrophysics Data System (ADS)
Verdecchia, A.; Harrington, R. M.; Kirkpatrick, J. D.
2017-12-01
Many observations suggest that duration and size scale in a self-similar way for most earthquakes. Deviations from the expected scaling would suggest that some physical feature on the fault surface influences the speed of rupture differently at different length scales. Determining whether differences in scaling exist between small and large earthquakes is complicated by the fact that duration estimates of small earthquakes are often distorted by travel-path and site effects. However, when carefully estimated, scaling relationships between earthquakes may provide important clues about fault geometry and the spatial scales over which it affects fault rupture speed. The Mw 6.9, 20 August 1999, Quepos earthquake occurred on the plate boundary thrust fault along the southern Costa Rica margin, where the subducting seafloor is cut by numerous normal faults. The mainshock and aftershock sequence were recorded by land and, partially, by ocean bottom seismometer (OBS) arrays deployed as part of the CRSEIZE experiment. Here we investigate the size-duration scaling of the mainshock and relocated aftershocks on the plate boundary to determine if a change in scaling exists that is consistent with a change in fault surface geometry at a specific length scale. We use waveforms from 5 short-period land stations and 12 broadband OBS stations to estimate corner frequencies (the inverse of duration) and seismic moment for several aftershocks on the plate interface. We first use spectral amplitudes of single events to estimate corner frequencies and seismic moments. We then adopt a spectral ratio method to correct for non-source-related effects and refine the corner frequency estimation. For the spectral ratio approach, we use pairs of earthquakes with similar waveforms (correlation coefficient > 0.7), with waveform similarity implying event co-location.
Preliminary results from single spectra show similar corner frequency values among events of 0.5 ≤ M ≤ 3.6, suggesting that static stress drop decreases with decreasing magnitude. Our next step is to refine the corner frequency estimates using spectral ratios to see if the trend in corner frequency persists for small events, and to extend the magnitude range of the estimates using land-based recordings of the mainshock and the two largest aftershocks, which occurred prior to the Osa array deployment.
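The link between a flat corner-frequency trend and stress drop can be made concrete with the Brune relations Δσ = 7 M0 / (16 r³) and r = 2.34 β / (2π fc): at fixed corner frequency the implied stress drop scales linearly with seismic moment, i.e. the population is not self-similar. The shear-wave speed and the example moments are assumed values:

```python
import math

def implied_stress_drop(m0, fc, beta=3500.0):
    """Stress drop (Pa) implied by a Brune source of moment m0 (N·m)
    and corner frequency fc (Hz); beta is an assumed shear-wave speed."""
    r = 2.34 * beta / (2 * math.pi * fc)   # source radius, m
    return 7 * m0 / (16 * r ** 3)

# a fixed 5 Hz corner while the moment grows 1000x implies a
# 1000x larger stress drop, i.e. a departure from constant stress drop
low = implied_stress_drop(1e12, 5.0)
high = implied_stress_drop(1e15, 5.0)
```

Under self-similarity one would instead expect fc to fall as M0^(-1/3), keeping the implied stress drop constant across the magnitude range.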
Tremor evidence for dynamically triggered creep events on the deep San Andreas Fault
NASA Astrophysics Data System (ADS)
Peng, Z.; Shelly, D. R.; Hill, D. P.; Aiken, C.
2010-12-01
Deep tectonic tremor has been observed along major subduction zones and the San Andreas fault (SAF) in central and southern California. It appears to reflect deep fault slip, and it is often seen to be triggered by small stresses, including passing seismic waves from large regional and teleseismic earthquakes. Here we examine tremor activity along the Parkfield-Cholame section of the SAF from mid-2001 to early 2010, scrutinizing its relationship with regional and teleseismic earthquakes. Based on similarities in the shape and timing of seismic waveforms, we conclude that triggered and ambient tremor share common sources and a common physical mechanism. Utilizing this similarity in waveforms, we detect tremor triggered by numerous large events, including previously unreported triggering from the recent 2009 Mw7.3 Honduras, 2009 Mw8.1 Samoa, and 2010 Mw8.8 Chile earthquakes at teleseismic distances, and the relatively small 2007 Mw5.4 Alum Rock and 2008 Mw5.4 Chino Hills earthquakes at regional distances. We also find multiple examples of systematic migration in triggered tremor, similar to ambient tremor migration episodes observed at other times. Because these episodes propagate much more slowly than the triggering waves, the migration likely reflects a small, triggered creep event. As with ambient tremor bursts, triggered tremor at times persists for multiple days, probably indicating a somewhat larger creep event. This activity provides a clear example of delayed dynamic triggering, with a mechanism perhaps also relevant for triggering of regular earthquakes.
An integrated investigation of the induced seismicity near Crooked Lake, Alberta, Canada in 2016
NASA Astrophysics Data System (ADS)
Wang, R.; Gu, Y. J.; Shen, J.; Schultz, R.
2016-12-01
In the past three years, the Crooked Lake (or Fox Creek) region has become one of the most seismically active areas in the Western Canada Sedimentary Basin (WCSB), mostly attributable to hydraulic-fracturing operations for shale gas. Among the human-related earthquakes, the January 12, 2016 event (M = 4.1) not only triggered the "red light" provincial protocol, leading to the temporary suspension of a nearby injection well, but also set a new magnitude record for earthquakes in Alberta in the last decade. In this study, we determine the source parameters (e.g., magnitude, hypocenter location) of this earthquake and its aftershocks using full moment tensor inversions. Our findings are consistent with the anthropogenic origin of this earthquake, and the source solution of the main shock shows a strike-slip mechanism with limited non-double-couple components (~22%). The candidate fault orientations, which are predominantly N-S and E-W trending, are consistent with those of earlier events in this region but different from induced events in other parts of the WCSB. The inferred compressional axis is supported by crustal stress orientations extracted from borehole breakouts, and the right-lateral fault is preferred by both seismic and aeromagnetic data. A further analysis of the waveforms from the near-source stations (<10 km) detected nearly 100 pre-/aftershocks within a week of this earthquake. Systematic differences in the waveforms between earthquake multiplets before and after the master event suggest moderate changes of seismic velocity structure at the injection depth around the source area, possibly a reflection of fluid migration and/or changes in the stress field. In short, our integrated study of the January 2016 earthquake cluster offers critical insights into the nature of induced earthquakes in the Crooked Lake region and other parts of the WCSB.
NASA Astrophysics Data System (ADS)
OpršAl, Ivo; FäH, Donat; Mai, P. Martin; Giardini, Domenico
2005-04-01
The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines finite-extent pseudodynamic and kinematic source models with the complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD is a linear explicit displacement formulation using an irregular rectangular grid including topography. The finite-extent rupture model is adjacent to the free surface because surface rupture has been recognized through trenching on the Reinach fault. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquakes) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events occurring approximately south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we also considered the same sources placed east and west of the city. The local structural model is determined from the area's recently established P- and S-wave velocity structure. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, which is in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
NASA Astrophysics Data System (ADS)
Amertha Sanjiwani, I. D. M.; En, C. K.; Anjasmara, I. M.
2017-12-01
A seismic gap on the interface along the Sunda subduction zone has been proposed among the 2000, 2004, 2005 and 2007 great earthquakes. This seismic gap therefore plays an important role in the earthquake risk on the Sunda trench. The Mw 7.6 Padang earthquake, an intraslab event, occurred on September 30, 2009, located ± 250 km east of the Sunda trench, close to the seismic gap on the interface. To understand the interaction between the seismic gap and the Padang earthquake, twelve continuous GPS stations from SUGAR are adopted in this study to estimate the source model of this event. The daily GPS coordinates one month before and after the earthquake were calculated with the GAMIT software. The coseismic displacements were evaluated from the analysis of coordinate time series in the Padang region. This geodetic network provides rather good spatial coverage for examining the seismic source along the Padang region in detail. The general pattern of coseismic horizontal displacement is motion toward the epicenter and the trench, and the coseismic vertical displacements show uplift. The largest coseismic displacement, derived from the MSAI station, is 35.0 mm for the horizontal component toward S32.1°W and 21.7 mm for the vertical component. The second largest, derived from the LNNG station, is 26.6 mm for the horizontal component toward N68.6°W and 3.4 mm for the vertical component. Next, we will use a uniform stress drop inversion to invert the coseismic displacement field for the source model, and the relationship between the seismic gap on the interface and the intraslab Padang earthquake will then be discussed. Keywords: seismic gap, Padang earthquake, coseismic displacement.
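The quoted horizontal magnitudes and azimuths follow directly from the north and east coordinate offsets; a minimal sketch (the component values below are back-calculated from the quoted MSAI numbers for illustration, not the study's actual offsets):

```python
import math

def horizontal_offset(d_north_mm, d_east_mm):
    """Horizontal displacement magnitude (mm) and azimuth (degrees
    clockwise from north) from coseismic N/E coordinate offsets."""
    mag = math.hypot(d_north_mm, d_east_mm)
    az = math.degrees(math.atan2(d_east_mm, d_north_mm)) % 360.0
    return mag, az

# Offsets consistent with 35.0 mm toward S32.1°W (azimuth 212.1 deg):
d_n = -35.0 * math.cos(math.radians(32.1))
d_e = -35.0 * math.sin(math.radians(32.1))
mag, az = horizontal_offset(d_n, d_e)
print(round(mag, 1), round(az, 1))  # 35.0 212.1
```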
Scaling Relations of Earthquakes on Inland Active Mega-Fault Systems
NASA Astrophysics Data System (ADS)
Murotani, S.; Matsushima, S.; Azuma, T.; Irikura, K.; Kitagawa, S.
2010-12-01
Since 2005, the Headquarters for Earthquake Research Promotion (HERP) has been publishing the 'National Seismic Hazard Maps for Japan' to provide useful information for the disaster prevention countermeasures of national and local public agencies, as well as to promote public awareness of earthquake disaster prevention. In the course of making the 2009 version of the map, which commemorates the tenth anniversary of the establishment of the Comprehensive Basic Policy, the methods used to evaluate earthquake magnitude, predict strong ground motion, and construct underground structure models were investigated by the Earthquake Research Committee and its subcommittees. In order to predict the magnitude of earthquakes occurring on mega-fault systems, we examined the scaling relations for such systems using 11 earthquakes whose source processes were analyzed by waveform inversion and whose surface ruptures were investigated. We found that the data fall between the scaling relations of seismic moment and rupture area of Somerville et al. (1999) and Irikura and Miyake (2001). We also found that the maximum displacement of surface rupture is two to three times larger than the average slip on the seismic fault, and that surface fault length equals the length of the source fault. Furthermore, the compiled source-fault data show that displacement saturates at 10 m when fault length (L) exceeds 100 km. Assuming an average fault width (W) of 18 km for inland earthquakes in Japan, and displacement saturation at 10 m for lengths beyond 100 km, we derived a new scaling relation between rupture area and seismic moment, S [km^2] = 1.0 x 10^-17 M0 [Nm], for mega-fault systems whose seismic moment (M0) exceeds 1.8 x 10^20 Nm.
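The derived scaling relation can be evaluated directly; a minimal sketch (the relation and its validity range are taken from the abstract, and the magnitude conversion is the standard Hanks-Kanamori formula):

```python
import math

def rupture_area_megafault(m0_nm):
    """Rupture area S [km^2] from seismic moment M0 [Nm] using the
    relation quoted in the abstract, S = 1.0e-17 * M0, valid for
    mega-fault systems with M0 > 1.8e20 Nm."""
    if m0_nm < 1.8e20:
        raise ValueError("relation applies only for M0 > 1.8e20 Nm")
    return 1.0e-17 * m0_nm

def moment_magnitude(m0_nm):
    """Standard conversion Mw = (log10 M0 - 9.1) / 1.5, M0 in Nm."""
    return (math.log10(m0_nm) - 9.1) / 1.5

# Example: an event at the crossover moment of the relation
m0 = 1.8e20
print(rupture_area_megafault(m0))         # 1800.0 km^2
print(round(moment_magnitude(m0), 2))     # Mw ~ 7.44
```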
Theory of time-dependent rupture in the Earth
NASA Technical Reports Server (NTRS)
Das, S.; Scholz, C. H.
1980-01-01
Fracture mechanics is used to develop a theory of earthquake mechanism which includes the phenomenon of subcritical crack growth. The following phenomena are predicted: slow earthquakes, multiple events, delayed multiple events (doublets), postseismic rupture growth and afterslip, foreshocks, and aftershocks. The theory predicts a nucleation stage prior to an earthquake, and suggests a physical mechanism by which one earthquake may 'trigger' another.
NASA Astrophysics Data System (ADS)
McCloskey, John
2008-03-01
The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges: how might we use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region? In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allows us to gauge the most likely consequences of a future earthquake by modelling the tsunamis generated by many possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region; it had many expected characteristics but was enigmatic in other ways. This has spawned further theories which will contribute to our understanding of this extremely complex problem.
Real-time earthquake monitoring using a search engine method
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-01-01
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find the waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
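The search-engine idea, matching an incoming record against a pre-computed seismogram database, can be illustrated with a brute-force nearest-neighbour lookup over normalized waveforms (the actual system uses an indexed fast search several thousand times faster than this exact scan; all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic database: each row stands in for a normalized long-period
# waveform pre-computed for one (location, magnitude, mechanism) tuple.
db = rng.standard_normal((1000, 256))
db /= np.linalg.norm(db, axis=1, keepdims=True)

def best_match(record, database):
    """Return the index of the database waveform with the highest
    correlation to the (normalized) input record -- an exact search."""
    record = record / np.linalg.norm(record)
    return int(np.argmax(database @ record))

# A noisy copy of entry 42 should still retrieve entry 42.
query = db[42] + 0.05 * rng.standard_normal(256)
print(best_match(query, db))  # 42
```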
Seismic hazard analysis for Jayapura city, Papua
NASA Astrophysics Data System (ADS)
Robiana, R.; Cipta, A.
2015-04-01
Jayapura city experienced a destructive earthquake on June 25, 1976 with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source model are used: a subduction model for the New Guinea Trench subduction zone (North Papuan Thrust); fault models for the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors estimated by a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification factors between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
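A hazard map's exceedance probability over a time window maps to a return period under the usual Poisson assumption; a minimal sketch (the 10%/500-year figures are from the abstract, and the 10%/50-year pair is shown because it is the common design case):

```python
import math

def return_period(p_exceed, window_years):
    """Return period T implied by a probability p of at least one
    occurrence in the given window, assuming Poisson occurrence:
    p = 1 - exp(-window / T)  =>  T = -window / ln(1 - p)."""
    return -window_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))   # ~475 years (common design case)
print(round(return_period(0.10, 500)))  # ~4746 years
```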
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
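A typical regression in this family has the form M = a + b·log10(SRL); a minimal sketch using the widely cited Wells and Coppersmith (1994) all-slip-type coefficients as one example (the WGUEP evaluated 19 such regressions, and the choice of coefficients is exactly the epistemic uncertainty described above):

```python
import math

# Widely cited Wells & Coppersmith (1994) all-slip-type coefficients
# for surface-rupture length; other regressions differ, which is the
# source of the ~0.3-0.4 magnitude-unit spread noted in the text.
A, B = 5.08, 1.16

def magnitude_from_srl(srl_km):
    """Moment magnitude from surface-rupture length (km):
    M = a + b * log10(SRL)."""
    return A + B * math.log10(srl_km)

print(round(magnitude_from_srl(50), 2))   # 7.05
print(round(magnitude_from_srl(100), 2))  # 7.40
```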
NASA Astrophysics Data System (ADS)
Lapusta, N.
2011-12-01
Studying earthquake source processes is a multidisciplinary endeavor involving a number of subjects, from geophysics to engineering. As a solid mechanician interested in understanding earthquakes through physics-based computational modeling and comparison with observations, I need to educate and attract students from diverse areas. My CAREER award has provided the crucial support for the initiation of this effort. Applying for the award made me go through careful initial planning in consultation with my colleagues and administration from two divisions, an important component of the eventual success of my path to tenure. Then, the long-term support directed at my program as a whole - and not a specific year-long task or subject area - allowed for the flexibility required for a start-up of a multidisciplinary undertaking. My research is directed towards formulating realistic fault models that incorporate state-of-the-art experimental studies, field observations, and analytical models. The goal is to compare the model response - in terms of long-term fault behavior that includes both sequences of simulated earthquakes and aseismic phenomena - with observations, to identify appropriate constitutive laws and parameter ranges. CAREER funding has enabled my group to develop a sophisticated 3D modeling approach that we have used to understand patterns of seismic and aseismic fault slip on the Sunda megathrust in Sumatra, investigate the effect of variable hydraulic properties on fault behavior, with application to the Chi-Chi and Tohoku earthquakes, create a model of the Parkfield segment of the San Andreas fault that reproduces both long-term and short-term features of the M6 earthquake sequence there, and design experiments with laboratory earthquakes, among several other studies.
A critical ingredient in this research program has been the fully integrated educational component that allowed me, on the one hand, to expose students from different backgrounds to the multidisciplinary knowledge required for research in my group and, on the other hand, to communicate the field's insights to a broader community. A newly developed course on Dynamic Fracture and Frictional Faulting combines geophysical and engineering knowledge at the forefront of current research relevant to earthquake studies and involves students in these activities through team-based course projects. The course attracts students from more than ten disciplines and received a student rating of 4.8/5 this past academic year. In addition, the course on Continuum Mechanics was enriched with geophysical references and examples. My group has also been visiting physics classrooms in a neighboring public school that serves mostly underrepresented minorities. The visits were beneficial not only to the high school students but also to the graduate students and postdocs in my group, who gained experience in presenting their field in a way accessible to the general public. Overall, the NSF CAREER award program through the Geosciences Directorate (NSF official Eva E. Zanzerkia) has significantly facilitated my development as a researcher and educator and should be maintained or expanded.
NASA Astrophysics Data System (ADS)
Somei, K.; Asano, K.; Iwata, T.; Miyakoshi, K.
2012-12-01
After the 1995 Kobe earthquake, many M7-class inland earthquakes occurred in Japan. Some of those events (e.g., the 2004 Chuetsu earthquake) occurred in a tectonic zone characterized as a high-strain-rate zone by GPS observation (Sagiya et al., 2000) or by a dense distribution of active faults. That belt-like zone along the Japan Sea coast of the Tohoku and Chubu districts and the north of the Kinki district is called the Niigata-Kobe tectonic zone (NKTZ; Sagiya et al., 2000). We investigate the seismic scaling relationship for recent inland crustal earthquake sequences in Japan and compare source characteristics between events occurring inside and outside the NKTZ. We used the S-wave coda for estimating source spectra. The source spectral ratio is obtained from the S-wave coda spectral ratio between the records of large and small events occurring close to each other, using the nation-wide strong motion networks (K-NET and KiK-net) and the broad-band seismic network (F-net), to remove propagation-path and site effects. We carefully examined the commonality of the decay of coda envelopes between event-pair records and modeled the observed spectral ratios with a source spectral ratio function, assuming an omega-square source model for the large and small events. We estimated the corner frequencies and seismic moment ratios from those modeled spectral ratio functions. We determined Brune's stress drops of 356 events (Mw 3.1-6.9) in ten earthquake sequences occurring in the NKTZ and six sequences occurring outside it. Most source spectra obey the omega-square model. There is no obvious systematic difference between the stress drops of events in the NKTZ and those outside. We conclude that no systematic difference in seismic source scaling exists between events inside and outside the NKTZ, and that an average source scaling relationship can be effective for inland crustal earthquakes.
Acknowledgements: Waveform data were provided from K-NET, KiK-net and F-net operated by National Research Institute for Earth Science and Disaster Prevention Japan. This study is supported by Multidisciplinary research project for Niigata-Kobe tectonic zone promoted by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
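The Brune stress drops estimated in the study follow from seismic moment and corner frequency; a minimal sketch of the standard Brune (1970) relation (the event parameters below are illustrative, not values from the study):

```python
import math

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Brune (1970) stress drop from moment M0 [Nm] and S-wave corner
    frequency fc [Hz]: source radius r = 2.34 * beta / (2*pi*fc),
    delta_sigma = 7 * M0 / (16 * r^3), returned in Pa."""
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)
    return 7.0 * m0_nm / (16.0 * r**3)

# Illustrative Mw 4 event (M0 = 10**(1.5*4 + 9.1) Nm) with fc = 2 Hz:
m0 = 10 ** (1.5 * 4.0 + 9.1)
print(round(brune_stress_drop(m0, 2.0) / 1e6, 1))  # ~2.0 MPa
```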
Precursory changes in seismic velocity for the spectrum of earthquake failure modes
Scuderi, M.M.; Marone, C.; Tinti, E.; Di Stefano, G.; Collettini, C.
2016-01-01
Temporal changes in seismic velocity during the earthquake cycle have the potential to illuminate physical processes associated with fault weakening and connections between the range of fault slip behaviors, including slow earthquakes, tremor and low-frequency earthquakes [1]. Laboratory and theoretical studies predict changes in seismic velocity prior to earthquake failure [2]; however, tectonic faults fail in a spectrum of modes, and little is known about precursors for those modes [3]. Here we show that precursory changes of wave speed occur in laboratory faults for the complete spectrum of failure modes observed for tectonic faults. We systematically altered the stiffness of the loading system to reproduce the transition from slow to fast stick-slip and monitored ultrasonic wave speed during frictional sliding. We find systematic variations of elastic properties during the seismic cycle for both slow and fast earthquakes, indicating similar physical mechanisms during rupture nucleation. Our data show that accelerated fault creep causes a reduction of seismic velocity and elastic moduli during the preparatory phase preceding failure, which suggests that real-time monitoring of active faults may be a means to detect earthquake precursors. PMID:27597879
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However, paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that the data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as a 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, which show variations in earthquake frequency over timescales of tens of thousands of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area-based source models and smoothed seismicity models) are integrated with paleo-earthquake data through the inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty.
Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
... Single-Source Grant to Support Services for Haitian Medical Evacuees to the Florida Department of...: Notice to award a single-source grant to support medical evacuees from the Haiti earthquake of 2010. CFDA... supportive social services to Haitian medical evacuees affected by the earthquake in 2010. The Haitian...
Numerical simulation analysis on Wenchuan seismic strong motion in Hanyuan region
NASA Astrophysics Data System (ADS)
Chen, X.; Gao, M.; Guo, J.; Li, Z.; Li, T.
2015-12-01
The Wenchuan Ms 8.0 earthquake of May 12, 2008 in Sichuan Province caused 69,227 deaths, 374,643 injuries, 17,923 missing, direct economic losses of 845.1 billion yuan, and the collapse of a large number of houses; how to reproduce the characteristics of its strong ground motion and predict its intensity distribution therefore plays an important role in mitigating the disaster of similar giant earthquakes in the future. Taking Yunnan and Sichuan Provinces, Wenchuan town, Chengdu city, the Chengdu basin and their vicinity as the research area, and on the basis of the available three-dimensional velocity structure model and new topography data from ChinaArray of the Institute of Geophysics, China Earthquake Administration, two complex source rupture models with global and local source parameters are established. We simulated the seismic wave propagation of the Wenchuan Ms 8.0 earthquake throughout the three-dimensional region using the GMS discrete-grid finite-difference technique with Cerjan absorbing boundary conditions, and obtained the seismic intensity distribution in this region by analyzing data from 50 × 50 simulated ground-motion output stations. The simulated results indicate that: (1) the simulated Wenchuan earthquake ground motion (PGA) and the main characteristics of the response spectrum are very similar to those of the real Wenchuan earthquake records; (2) the ground motion (PGA) and response spectra in the Plain are much greater than those in the adjacent mountain area because of the low velocity of the shallow surface media and the basin effect of the Chengdu basin structure.
(3) When the source rupture process inverted from far-field P-wave, GPS, and InSAR data and the Longmenshan Front Fault are taken into account in the GMS numerical simulation, significantly different waveforms and frequency content of the ground motion are obtained; the strong-motion waveform is distinctly asymmetric, which should be more realistic. This indicates that the Longmenshan Front Fault may also have been involved in seismic activity during the long (several-minute) Wenchuan rupture process. (4) Simulated earthquake records in the Hanyuan region are indeed very strong, which suggests that the source mechanism is one reason for the Hanyuan intensity anomaly.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi
2014-01-01
We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activities in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area involves the entire Taiwan Island and the offshore region, covering 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible.
The RMT has operated offline (2010-2011) and online (since January 2012) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
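The grid-search scheme at the heart of such a system amounts to evaluating a waveform misfit at every grid node and keeping the best-fitting source; a toy least-squares sketch with synthetic data (the real RMT inverts full moment tensors against a 1-D Green's function database; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Green's function database": one synthetic waveform per grid node.
n_nodes, n_samp = 500, 128
greens = rng.standard_normal((n_nodes, n_samp))

def grid_search_source(obs, greens_db):
    """Find the grid node (and scalar amplitude, a stand-in for moment)
    whose scaled synthetic best fits the observation in least squares."""
    # Optimal amplitude per node: a = <g, d> / <g, g>
    num = greens_db @ obs
    den = np.sum(greens_db**2, axis=1)
    amp = num / den
    misfit = np.sum((amp[:, None] * greens_db - obs) ** 2, axis=1)
    k = int(np.argmin(misfit))
    return k, float(amp[k])

# Observation generated from node 123 with amplitude 2.5 plus noise:
obs = 2.5 * greens[123] + 0.2 * rng.standard_normal(n_samp)
node, amp = grid_search_source(obs, greens)
print(node, round(amp, 1))  # 123 2.5
```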
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
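For a single scalar such as magnitude with Gaussian errors, the core of such a Bayesian combination is a precision-weighted average; a deliberately minimal sketch (the numbers are invented, and the real CDM also weighs location, observed shaking, and report plausibility):

```python
def fuse_gaussian(estimates):
    """Combine independent Gaussian estimates (mean, sigma) into a
    posterior mean and sigma by precision weighting."""
    weights = [1.0 / s**2 for _, s in estimates]
    mean = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# Hypothetical magnitude reports from two independent algorithms:
reports = [(6.0, 0.3), (6.4, 0.4)]
mean, sigma = fuse_gaussian(reports)
print(round(mean, 2), round(sigma, 2))  # 6.14 0.24
```

Note that the fused uncertainty is smaller than either input, which is what makes combining independent algorithms attractive.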
SEISMIC SOURCE SCALING AND DISCRIMINATION IN DIVERSE TECTONIC ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, R E; Mayeda, K; Walter, W R
2008-07-08
The objectives of this study are to improve low-magnitude (concentrating on M 2.5-5) regional seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic regions. Local-to-regional high-frequency discrimination requires an estimate of how earthquakes scale with size. Walter and Taylor (2002) developed the MDAC (Magnitude and Distance Amplitude Corrections) method to empirically account for these effects through regional calibration. The accuracy of these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity. Unfortunately, our knowledge at small magnitudes (i.e., mb < ~4.0) is poorly resolved, and source scaling remains a subject of ongoing debate in the earthquake seismology community. Recently there have been a number of empirical studies suggesting that the scaling of micro-earthquakes is non-self-similar, yet there are an equal number of compelling studies that suggest otherwise. It is not clear whether different studies obtain different results because they analyze different earthquakes or because they use different methods. Even in regions that are well studied, such as test sites or areas of high seismicity, we still rely on empirical scaling relations derived from studies of interplate regions halfway around the world. We investigate earthquake sources and scaling in different tectonic settings, comparing direct-wave and coda-wave analysis methods that both make use of empirical Green's function (EGF) earthquakes to remove path effects. Analysis of locally recorded direct waves is intuitively the simplest way of obtaining accurate source parameters, as these waves have been least affected by travel through the earth.
But finding well-recorded earthquakes with 'perfect' EGF events for direct wave analysis is difficult, which limits the number of earthquakes that can be studied. We begin with closely located, well-correlated earthquakes. We use a multi-taper method to obtain time-domain source time functions by frequency division. We only accept an earthquake and EGF pair if they produce a clear, time-domain source pulse. We fit the spectral ratios and perform a grid search about the preferred parameters to ensure the fits are well constrained. We then model the spectral (amplitude) ratio to determine source parameters from both direct P and S waves. We analyze three clusters of aftershocks from the well-recorded sequence following the M5 Au Sable Forks, NY, earthquake to obtain some of the first accurate source parameters for small earthquakes in eastern North America. Each cluster contains an M~2 event, and two contain M~3 events, as well as smaller aftershocks. We find that the corner frequencies and stress drops are high (averaging 100 MPa), confirming previous work suggesting that intraplate continental earthquakes have higher stress drops than events at plate boundaries. We also demonstrate that a scaling breakdown suggested by earlier work is simply an artifact of their more band-limited data. We calculate radiated energy and find that the ratio of energy to seismic moment is also high, around 10^-4. We estimate source parameters for the M5 mainshock using similar methods, but our results are more doubtful because we do not have an EGF event that meets our preferred criteria. The stress drop and energy/moment ratio for the mainshock are slightly higher than for the aftershocks. Our improved and simplified coda wave analysis method uses spectral ratios (as for the direct waves) but relies on the averaging nature of the coda waves to use EGF events that do not meet the strict similarity criteria required for the direct wave analysis.
We have applied the coda wave spectral ratio method to the 1999 Hector Mine mainshock (Mw 7.0, Mojave Desert) and its larger aftershocks, and also to several sequences in Italy with M~6 mainshocks. The Italian earthquakes have higher stress drops than the Hector Mine sequence, but lower than Au Sable Forks. These results show a departure from self-similarity, consistent with previous studies using similar regional datasets: the larger earthquakes have higher stress drops and energy/moment ratios. We perform a preliminary comparison of the two methods using the M5 Au Sable Forks earthquake. Both methods give very consistent results, and we are applying the comparison to further events.
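The spectral-ratio and grid-search procedure described above can be sketched numerically. The following is a minimal illustration, not the authors' code: it fits a Brune omega-squared spectral ratio between a target event and an EGF event to synthetic data, with all values (corner frequencies, moment ratio, noise level, frequency band) assumed for the example.

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc1, fc2):
    """Spectral ratio of two Brune omega-squared sources (target event / EGF)."""
    return moment_ratio * (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)

# Synthetic "observed" ratio with assumed parameters (illustration only)
f = np.logspace(-0.5, 2, 200)                 # 0.3-100 Hz
rng = np.random.default_rng(0)
obs = brune_ratio(f, 90.0, 4.0, 30.0) * rng.lognormal(0.0, 0.1, f.size)

# Moment ratio from the low-frequency plateau, then grid search over corners
mr = float(np.mean(obs[f < 1.0]))
best = (np.inf, None, None)
for fc1 in np.logspace(0.0, 1.3, 60):         # 1-20 Hz, larger event
    for fc2 in np.logspace(0.7, 2.0, 60):     # 5-100 Hz, EGF
        misfit = np.sum((np.log(obs) - np.log(brune_ratio(f, mr, fc1, fc2))) ** 2)
        if misfit < best[0]:
            best = (misfit, fc1, fc2)

_, fc1_est, fc2_est = best
```

The corner frequency of the larger event then gives the stress drop through the usual circular-crack relations; the grid over both corners is what lets one check that the fit is well constrained rather than trading the two parameters off against each other.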
NASA Astrophysics Data System (ADS)
Mourhatch, Ramses
This thesis examines the collapse risk of tall steel braced frame buildings subjected to a suite of San Andreas earthquakes, using rupture-to-rafters simulations. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis. As a first step, kinematic source inversions of past earthquakes in the magnitude range of 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California. Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s-2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from the kinematic source models using the spectral element method. Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse.
The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
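The hybrid broadband construction described above, long-period deterministic synthetics joined to short-period EGF synthetics at a 2.0 s crossover, can be sketched with complementary spectral weights. The two input traces below are stand-ins, not the simulated seismograms from the thesis, and the smooth weight shape is an assumption for illustration:

```python
import numpy as np

dt = 0.01                                   # 100 samples/s (assumed)
t = np.arange(0.0, 60.0, dt)
rng = np.random.default_rng(1)

# Stand-ins for the two synthetics (assumed shapes, for illustration only)
lowf_syn = np.sin(2 * np.pi * 0.2 * t) * np.exp(-0.05 * t)   # long-period, spectral-element-like
highf_syn = rng.standard_normal(t.size) * np.exp(-0.05 * t)  # short-period, EGF-summation-like

# Complementary low/high-pass weights at the 2.0 s (0.5 Hz) crossover
freqs = np.fft.rfftfreq(t.size, dt)
w_low = 1.0 / (1.0 + (freqs / 0.5) ** 8)    # smooth low-pass weight
w_high = 1.0 - w_low                        # complementary high-pass weight

broadband = (np.fft.irfft(np.fft.rfft(lowf_syn) * w_low, t.size)
             + np.fft.irfft(np.fft.rfft(highf_syn) * w_high, t.size))
```

Because the two weights sum to one at every frequency, the crossover introduces no spectral hole or bump, which is the sense in which such a combination avoids artificial corrections at the matching frequency.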
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces changes with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
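The test of whether a source spectrum needs a falloff steeper than omega-squared can be illustrated with a small fit of the generalized model S(f) = Ω0 / (1 + (f/fc)^γ). This is a hedged sketch on synthetic data, not the authors' sensitivity-driven inversion; the plateau level, corner frequency, and γ below are assumed values:

```python
import numpy as np

def log_model(f, log_omega0, fc, gamma):
    """log10 of a generalized omega-gamma source spectrum: Omega0 / (1 + (f/fc)^gamma)."""
    return log_omega0 - np.log10(1.0 + (f / fc) ** gamma)

f = np.logspace(-1, 2, 120)
rng = np.random.default_rng(2)
obs = log_model(f, 3.0, 5.0, 2.6) + rng.normal(0.0, 0.05, f.size)  # assumed parameters

# Plateau fixes Omega0; grid search over corner frequency and spectral falloff
log_omega0 = float(np.mean(obs[f < 0.5]))
best = (np.inf, None, None)
for fc in np.logspace(0.0, 1.5, 80):            # 1-31.6 Hz
    for gamma in np.arange(1.5, 3.5, 0.02):
        misfit = np.sum((obs - log_model(f, log_omega0, fc, gamma)) ** 2)
        if misfit < best[0]:
            best = (misfit, fc, gamma)

_, fc_est, gamma_est = best
```

A recovered γ significantly above 2, as here, is the kind of evidence the abstract cites for high-frequency depletion relative to the standard omega-square model.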
NASA Astrophysics Data System (ADS)
Ouzounov, D.; Pulinets, S. A.; Tramutoli, V.; Lee, L.; Liu, J. G.; Hattori, K.; Kafatos, M.
2013-12-01
We are conducting an integrated study involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely gas discharge, thermal infrared radiation, ionospheric electron concentration, and atmospheric temperature and humidity, which we suppose are associated with the earthquake preparation phase. We intend to test, in prospective mode, this set of geophysical measurements over different regions of active earthquakes and volcanoes. In 2012-13 we established a collaborative framework with the leading projects PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) for coordinated measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Turkey, Greece, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a 'stress test' opportunity to validate the physics-based approach in real time over regions of high seismicity. Our initial results are: (1) prospective tests have shown the presence, in real time, of anomalies in the atmosphere before most of the significant (M>5.5) earthquakes in all regions; (2) the false-alarm rate differs for each region, varying between 50% (Italy, Kamchatka and California) and 25% (Taiwan and Japan), with a significant reduction of false positives when at least two parameters are used together; (3) one of the most complex problems, which is still open, is the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that physics-based short-term forecasting is feasible, and more tests are needed. We discuss the physical concept we used, the future integration of data observations, and related developments.
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen
2017-04-01
In a cross-discipline study, an extensive literature review has been conducted to increase the understanding of vulnerability indicators used in both earthquake and flood vulnerability assessments, and to provide insights into potential improvements of both. It identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators have been categorized into physical and social categories, and further subdivided (where possible) into measurable and comparable indicators. Physical vulnerability indicators have been differentiated by exposed asset, such as buildings and infrastructure. Social indicators are grouped into subcategories such as demographics, economics and awareness. Next, two different vulnerability model types that use these indicators are described: index-based and curve-based vulnerability models. A selection of these models (e.g. HAZUS) is described and compared on several characteristics, such as temporal and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly developed towards physical attributes at an object scale and used in vulnerability curve models, whereas flood vulnerability studies focus more on indicators applied at aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators and developing object-based vulnerability curve assessments of physical vulnerability, for example by defining building-material-based flood vulnerability curves. Related to this is the incorporation of time-of-day-based building occupation patterns (at 2 a.m. most people will be at home, while at 2 p.m. most people will be in the office). Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators.
Based on the lessons obtained in this study, we recommend that future studies further explore cross-hazard approaches.
Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake
NASA Astrophysics Data System (ADS)
Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten
2014-05-01
In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on a rapid estimate of P-wave magnitude, which generally carries large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw 9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test for the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful for reducing false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.
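The delay-and-stack idea underlying automatic source imaging can be shown in a toy one-dimensional example. This is only a sketch of back-projection stacking, not the IDS method itself (which is an iterative deconvolution); the propagation speed, station positions, and pulse shape are all assumed:

```python
import numpy as np

c = 3.5                                          # km/s, assumed propagation speed
stations = np.array([0.0, 40.0, 80.0, 120.0])    # km, assumed station positions
src_true, t0, dt = 55.0, 10.0, 0.1               # true source, origin time, sampling

t = np.arange(0.0, 60.0, dt)
records = []
for xs in stations:
    arrival = t0 + abs(xs - src_true) / c
    records.append(np.exp(-0.5 * ((t - arrival) / 0.5) ** 2))  # Gaussian pulse

# Shift each record back by the trial travel time and stack; the trial
# position whose move-out best aligns the pulses wins
grid = np.arange(0.0, 130.0, 1.0)
stack = np.zeros(grid.size)
for i, xg in enumerate(grid):
    shifted = [np.interp(t, t - abs(xs - xg) / c, rec)
               for xs, rec in zip(stations, records)]
    stack[i] = np.max(np.sum(shifted, axis=0))

best = grid[np.argmax(stack)]
```

Because the stack needs only the early part of the wavefield at each station, an image of where energy is being released can be updated while rupture is still in progress, which is the property the abstract exploits for warning.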
A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities
Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.
1999-01-01
A physically motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point-process model can be described by the steady rise of a state variable from the ground state to a failure threshold, as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
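The hazard behavior quoted above can be checked directly, since the BPT distribution is the inverse Gaussian with mean μ and shape μ/α², whose asymptotic hazard is 1/(2μα²), i.e., 2/μ for α = 0.5. The sketch below assumes μ = 100 years purely for illustration:

```python
import math
import numpy as np

def bpt_hazard(t, mu, alpha):
    """Hazard rate of the Brownian passage time (inverse Gaussian) distribution
    with mean mu and aperiodicity (coefficient of variation) alpha."""
    lam = mu / alpha**2                       # inverse-Gaussian shape parameter
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    pdf = (np.sqrt(lam / (2.0 * np.pi * t**3))
           * np.exp(-lam * (t - mu)**2 / (2.0 * mu**2 * t)))
    cdf = np.array([phi(math.sqrt(lam / ti) * (ti / mu - 1.0))
                    + math.exp(2.0 * lam / mu) * phi(-math.sqrt(lam / ti) * (ti / mu + 1.0))
                    for ti in t])
    return pdf / (1.0 - cdf)

mu, alpha = 100.0, 0.5                  # alpha = 0.5 is the abstract's generic value
t = np.linspace(1.0, 500.0, 500)
h = bpt_hazard(t, mu, alpha)

mean_rate = 1.0 / mu                    # 0.01 per year
asymptote = 1.0 / (2.0 * mu * alpha**2) # = 2/mu = 0.02 per year for alpha = 0.5
```

The computed hazard is near zero shortly after an event, climbs past the mean rate before t = μ, and then levels off near 2/μ, matching the statements in the abstract.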
Sensing network for electromagnetic fields generated by seismic activities
NASA Astrophysics Data System (ADS)
Gershenzon, Naum I.; Bambakidis, Gust; Ternovskiy, Igor V.
2014-06-01
Sensor networks are becoming prolific and now play an increasingly important role in acquiring and processing information. Cyber-physical systems research focuses on integrated systems that include sensing, networking, and computation. The physics of seismic and electromagnetic field measurement requires special consideration of how to design electromagnetic field measurement networks, alongside seismic measurement networks, for both research and the detection of earthquakes and explosions. In addition, although the electromagnetic sensor network itself could be designed and deployed as a research tool with a great deal of flexibility, the placement of the measuring nodes must be designed based on a systematic analysis of the seismic-electromagnetic interaction. In this article, we review the observations of the co-seismic electromagnetic field generated by earthquakes and man-made sources such as vibrations and explosions. The theoretical investigation allows the distribution of sensor nodes to be optimized and could be used to support existing geological networks. The placement of sensor nodes has to be determined based on the physics of the electromagnetic field distribution above ground level. The results of theoretical investigations of seismo-electromagnetic phenomena are considered in Section I. First, we compare the relative contributions of various types of mechano-electromagnetic mechanisms and then analyze in detail the calculation of electromagnetic fields generated by piezomagnetic and electrokinetic effects.
A Bayesian approach to earthquake source studies
NASA Astrophysics Data System (ADS)
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. 
The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
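The advantage of sampling over optimization that the abstract describes can be seen even in a one-parameter toy problem. The sketch below is a plain Metropolis sampler for a linear slip-estimation problem with a physical positivity constraint and no regularization; CATMIP adds tempering, resampling, and parallel chains on top of this basic idea, and every numeric value here (design matrix, noise level, chain length) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: estimate slip s from noisy displacements d = G*s + e
G = np.array([0.8, 1.2, 0.5, 1.0])        # assumed Green's function coefficients
s_true, sigma = 2.0, 0.1
d = G * s_true + rng.normal(0.0, sigma, G.size)

def log_posterior(s):
    if s < 0:                              # physical constraint instead of regularization
        return -np.inf
    return -0.5 * np.sum((d - G * s) ** 2) / sigma**2

samples, s = [], 1.0
lp = log_posterior(s)
for _ in range(20000):
    prop = s + rng.normal(0.0, 0.1)        # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        s, lp = prop, lp_prop
    samples.append(s)

post = np.array(samples[5000:])            # discard burn-in
```

The retained samples map the full posterior, so `post.mean()` estimates the slip while `post.std()` is its uncertainty; an optimizer would return only the former.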
Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment
NASA Astrophysics Data System (ADS)
Melgar Moctezuma, Diego
This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high-rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. The algorithm is demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah and the 2011 Mw 9.0 Tohoku-oki events. This dissertation will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental in quickly determining basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and, most importantly, kinematic slip inversions. Throughout the dissertation special emphasis is placed on how to compute these source models with minimal interaction from a network operator. Finally, we will show that the incorporation of offshore data such as ocean-bottom pressure and RTK-GPS buoys can better constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors are detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity, and directivity, are currently inferred from source time functions obtained by deconvolving propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on it. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov chain Monte Carlo sampling technique, allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to Bayesian (probabilistic) theory. Next, the Markov chain Monte Carlo sampling technique is used to sample this function, so the statistical estimator of a posteriori errors can be obtained with minimal additional computational effort compared with modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of a mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green's function technique to approximate Green's functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release.
However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
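The deconvolution step that the paper improves upon can be sketched in its simplest stabilized form, spectral division with a water level. This is a common baseline regularization, not the paper's constrained pseudo-spectral method, and the toy EGF and boxcar source below are assumed shapes:

```python
import numpy as np

def waterlevel_deconvolution(mainshock, egf, water=0.01):
    """Estimate a source time function by spectral division with a water level.

    The EGF spectrum is clipped at `water` times its peak amplitude to stabilize
    the division. (The paper itself instead uses a constrained pseudo-spectral
    parameterization of the source time function, sampled with MCMC.)
    """
    n = len(mainshock)
    M = np.fft.rfft(mainshock, n)
    G = np.fft.rfft(egf, n)
    floor = water * np.abs(G).max()
    Gs = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    return np.fft.irfft(M / Gs, n)

# Synthetic check: "mainshock" = toy EGF convolved with a 40-sample boxcar STF
egf = np.zeros(1024); egf[100] = 1.0; egf[130] = -0.4
true_stf = np.zeros(1024); true_stf[:40] = 1.0
mainshock = np.convolve(egf, true_stf)[:1024]
stf = waterlevel_deconvolution(mainshock, egf)
```

On noise-free synthetics the boxcar is recovered exactly; with real, noisy records the division becomes unstable, which is exactly the situation where the paper's physically constrained parameterization and Bayesian error analysis pay off.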
Review Article: A comparison of flood and earthquake vulnerability assessment indicators
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen C.; Ward, Philip J.; Daniell, James E.; Aerts, Jeroen C. J. H.
2017-07-01
In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of the vulnerability indicators used in the disciplines of earthquake and flood vulnerability assessment. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of index- and curve-based vulnerability models that use these indicators is described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies explore risk assessment methodologies across different hazard types.
NASA Astrophysics Data System (ADS)
Liu, B.; Shi, B.
2010-12-01
An earthquake with ML 4.1 occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9 ≤ ML ≤ 4.0 (Chen et al., 2005). According to Zúñiga (1993), for the 1995 ML 4.1 Shacheng earthquake sequence the main shock corresponds to undershoot, while the aftershocks match overshoot. This suggests that the dynamic rupture processes of the overshoot aftershocks could be related to crack (sub-fault) extension inside the main fault. After the main shock, local stress concentration inside the fault may play a dominant role in sustaining the crack extension. Therefore, the main energy dissipation mechanism should be the aftershock fracturing process associated with crack extension. We derived a minimum radiation energy criterion (MREC) following the variational principle (Kanamori and Rivera, 2004): (ES/M0')min ≥ [3M0/(ɛπμR³)](v/β)³, where ES and M0' are the radiated energy and seismic moment obtained from observation, μ is the rigidity of the fault, ɛ = M0'/M0, M0 is the seismic moment, R is the rupture size on the fault, and v and β are the rupture speed and S-wave speed. From mode II and mode III crack extension models, we derive a uniform expression for calculating the seismic radiation efficiency ηG, which can be used to bound the efficiency from above and avoid the unphysical result of a radiation efficiency larger than 1. In the ML 4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of the S-wave speed β according to the MREC, close to the Rayleigh wave speed, while the rupture speeds of the remaining 28 aftershocks ranged from 0.05β to 0.55β. Using the mode II and III crack extension models, the main shock rupture speed was 0.9β, and most of the aftershocks were no more than 0.35β.
In addition, the seismic radiation efficiencies for this earthquake sequence were as follows: for most aftershocks, the radiation efficiencies were less than 10%, indicating low seismic efficiency, whereas the radiation efficiency was 78% for the main shock. This essential difference in earthquake energy partitioning for the aftershock source dynamics indicates that fracture energy dissipation cannot be ignored in source parameter estimation for earthquake faulting, especially for small earthquakes. Otherwise, the radiated seismic energy may be overestimated or underestimated.
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
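The spectral-modelling idea behind such time-series models can be illustrated with the simplest member of the family, a pure autoregressive AR(2) model fit by the Yule-Walker equations (an ARMAX model adds moving-average and exogenous-input terms on top of this). The "accelerogram" below is synthetic, with assumed coefficients, since the paper's recorded motions are not reproduced here:

```python
import numpy as np

def yule_walker_ar2(x):
    """Fit an AR(2) model x[n] = a1*x[n-1] + a2*x[n-2] + e[n] via Yule-Walker."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)])
    R = np.array([[r[0], r[1]], [r[1], r[0]]])
    return np.linalg.solve(R, r[1:])          # [a1, a2]

# Synthetic "accelerogram": a damped AR(2) resonance driven by white noise
rng = np.random.default_rng(3)
a1_true, a2_true = 1.6, -0.8                  # assumed, gives a stable resonance
x = np.zeros(20000)
e = rng.standard_normal(20000)
for n in range(2, 20000):
    x[n] = a1_true * x[n - 1] + a2_true * x[n - 2] + e[n]

a1_est, a2_est = yule_walker_ar2(x)
```

The fitted coefficients define a rational spectral shape for the record; fitting two records and taking the ratio of their model spectra is, in spirit, how the site-amplification and source-scaling applications work.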
Dense Array Studies of Volcano-Tectonic and Long-Period Earthquakes Beneath Mount St. Helens
NASA Astrophysics Data System (ADS)
Glasgow, M. E.; Hansen, S. M.; Schmandt, B.; Thomas, A.
2017-12-01
A 904-station, single-component, 10 Hz geophone array deployed within 15 km of Mount St. Helens (MSH) in 2014 recorded continuously for two weeks. Automated reverse-time imaging (RTI) was used to generate a catalog of 212 earthquakes. Among these, two distinct types of upper crustal (<8 km) earthquakes were classified. Volcano-tectonic (VT) and long-period (LP) earthquakes were identified using analysis of array spectrograms, envelope functions, and velocity waveforms. To remove analyst subjectivity, quantitative classification criteria were developed based on the ratio of power in high and low frequency bands and on coda duration. Prior to the 2014 experiment, upper crustal LP earthquakes had only been reported at MSH during volcanic activity. Subarray beamforming was used to distinguish between LP earthquakes and surface-generated LP signals, such as rockfall. This method confirmed 16 LP signals with horizontal velocities exceeding upper crustal P-wave velocities, which requires a subsurface hypocenter. LP and VT locations overlap in a cluster slightly east of the summit crater from 0-5 km below sea level. LP displacement spectra are similar to simple theoretical predictions for shear failure, except that they have lower corner frequencies than VT earthquakes of similar magnitude. The results indicate a distinct non-resonant source for LP earthquakes, which are located in the same source volume as some VT earthquakes (within hypocenter uncertainty of 1 km or less). To further investigate MSH microseismicity mechanisms, a 142-station, three-component (3-C), 5 Hz geophone array will record continuously for one month at MSH in Fall 2017, providing a unique dataset for a volcano earthquake source study. This array will help determine whether LP occurrence in 2014 was transient or is still ongoing. Unlike the 2014 array, approximately 50 geophones will be deployed in the MSH summit crater, directly over the majority of seismicity.
RTI will be used to detect and locate earthquakes by back-projecting 3-C data with a local 3-D P- and S-velocity model. Earthquakes will be classified using the techniques described above, and we will seek to use the dense array of 3-C waveforms to invert for focal mechanisms and, ideally, moment tensor sources down to M 0.
A global earthquake discrimination scheme to optimize ground-motion prediction equation selection
Garcia, Daniel; Wald, David J.; Hearne, Michael
2012-01-01
We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.
NASA Astrophysics Data System (ADS)
Gulen, L.; EMME WP2 Team*
2011-12-01
The Earthquake Model of the Middle East (EMME) Project is a regional project of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project covers Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE projects overlap, and Turkey forms a bridge connecting the two. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project consists of three main modules: hazard, risk, and socio-economic modules. The EMME project uses a PSHA approach for earthquake hazard, and the existing source models have been revised or modified by the incorporation of newly acquired data. The most distinguishing aspect of the EMME project from previous ones is its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that permits continuous update, refinement, and analysis. An up-to-date earthquake catalog of the Middle East region has been prepared and declustered by the WP1 team. The EMME WP2 team has prepared a digital active fault map of the Middle East region in ArcGIS format. We have constructed a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw ≥ 5.5. The EMME project database includes information on the geometry and rates of movement of faults in a "Fault Section Database", which contains 36 entries for each fault section. The "Fault Section" concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far 6,991 fault sections have been defined and 83,402 km of faults are fully parameterized in the Middle East region.
A separate "Paleo-Sites Database" includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library, which includes the PDF files of relevant papers, reports and maps, has also been prepared. A logic tree approach is utilized to encompass different interpretations for areas where there is no consensus. Finally, seismic source zones in the Middle East region have been delineated using all available data. *EMME Project WP2 Team: Levent Gülen, Murat Utkucu, M. Dinçer Köksal, Hilal Yalçin, Yigit Ince, Mine Demircioglu, Shota Adamia, Nino Sadradze, Aleksandre Gvencadze, Arkadi Karakhanyan, Mher Avanesyan, Tahir Mammadli, Gurban Yetirmishli, Arif Axundov, Khaled Hessami, M. Asif Khan, M. Sayab.
Stress Wave Source Characterization: Impact, Fracture, and Sliding Friction
NASA Astrophysics Data System (ADS)
McLaskey, Gregory Christofer
Rapidly varying forces, such as those associated with impact, rapid crack propagation, and fault rupture, are sources of stress waves which propagate through a solid body. This dissertation investigates how properties of a stress wave source can be identified or constrained using measurements recorded at an array of sensor sites located far from the source. This methodology is often called the method of acoustic emission and is useful for structural health monitoring and the noninvasive study of material behavior such as friction and fracture. In this dissertation, laboratory measurements of 1-300 mm wavelength stress waves are obtained by means of piezoelectric sensors which detect high-frequency (10 kHz-3 MHz) motions of a specimen's surface, picometers to nanometers in amplitude. Then, stress wave source characterization techniques are used to study ball impact, drying shrinkage cracking in concrete, and the micromechanics of stick-slip friction of poly(methyl methacrylate) (PMMA) and rock/rock interfaces. In order to quantitatively relate recorded signals obtained with an array of sensors to a particular stress wave source, wave propagation effects and sensor distortions must be accounted for. This is achieved by modeling the physics of wave propagation and transduction as linear transfer functions. Wave propagation effects are precisely modeled by an elastodynamic Green's function, sensor distortion is characterized by an instrument response function, and the stress wave source is represented with a force moment tensor. These transfer function models are verified through calibration experiments which employ two different mechanical calibration sources: ball impact and glass capillary fracture. The suitability of the ball impact source model, based on Hertzian contact theory, is experimentally validated for small (~1 mm) balls impacting massive plates composed of four different materials: aluminum, steel, glass, and PMMA.
Using this transfer function approach and the two mechanical calibration sources, four types of piezoelectric sensors were calibrated: three commercially available sensors and the Glaser-type conical piezoelectric sensor, which was developed in the Glaser laboratory. The distorting effects of each sensor are modeled using autoregressive-moving average (ARMA) models, and because vital phase information is robustly incorporated into these models, they are useful for simulating or removing sensor-induced distortions, so that a displacement time history can be retrieved from recorded signals. The Glaser-type sensor was found to be very well modeled as a unidirectional displacement sensor which detects stress wave disturbances down to about 1 picometer in amplitude. Finally, the merits of a fully calibrated experimental system are demonstrated in a study of stress wave sources arising from sliding friction, and the relationship between those sources and earthquakes. A laboratory friction apparatus was built for this work which allows the micro-mechanisms of friction to be studied with stress wave analysis. Using an array of 14 Glaser-type sensors, and precise models of wave propagation effects and the sensor distortions, the physical origins of the stress wave sources are explored. Force-time functions and focal mechanisms are determined for discrete events found amid the "noise" of friction. These localized events are interpreted to be the rupture of micrometer-sized contacts, known as asperities. By comparing stress wave sources from stick-slip experiments on plastic/plastic and rock/rock interfaces, systematic differences were found. The rock interface produces very rapid (<1 microsecond) implosive forces indicative of brittle asperity failure and fault gouge formation, while rupture on the plastic interface releases only shear force and produces a source more similar to earthquakes commonly recorded in the field. 
The difference between the mechanisms is attributed to the vast differences in the hardness and melting temperatures of the two materials, which affect the distribution of asperities as well as their failure behavior. With proper scaling, the strong link between material properties and laboratory earthquakes will aid in our understanding of fault mechanics and the generation of earthquakes and seismic tremor.
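The transfer-function chain described above (source, Green's function, instrument response) implies that a displacement time history can be recovered by spectral division with the sensor's impulse response. A minimal sketch of that deconvolution step follows; the water-level regularization, parameter values, and toy instrument response are illustrative assumptions, not the dissertation's calibrated models:

```python
import numpy as np

def remove_instrument_response(record, instr_ir, water_level=1e-3):
    """Recover ground displacement from a recorded trace by spectral
    division with the sensor's impulse response.  instr_ir is a
    hypothetical instrument impulse response sampled like the record."""
    n = len(record)
    R = np.fft.rfft(record, n)
    I = np.fft.rfft(instr_ir, n)
    # Water level keeps the division stable where |I| is tiny.
    floor = water_level * np.max(np.abs(I))
    I_reg = np.where(np.abs(I) < floor, floor * np.exp(1j * np.angle(I)), I)
    return np.fft.irfft(R / I_reg, n)

# Sanity check: distorting a pulse with the response and deconvolving
# should approximately recover the pulse.
true_disp = np.zeros(256)
true_disp[40:60] = np.hanning(20)
instr = np.exp(-np.arange(256) / 5.0)            # toy instrument response
recorded = np.fft.irfft(np.fft.rfft(true_disp) * np.fft.rfft(instr), 256)
recovered = remove_instrument_response(recorded, instr)
err = np.max(np.abs(recovered - true_disp)) / np.max(np.abs(true_disp))
```

The same machinery runs in reverse to simulate sensor-induced distortions, which is why the ARMA models' preservation of phase information matters.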
NASA Astrophysics Data System (ADS)
Zheng, Y.
2015-12-01
On August 3, 2014, an Ms 6.5 earthquake struck Ludian county, Zhaotong city, in Yunnan province, China. Although this earthquake was not very large, it caused abnormally severe damage. Thus, studying the causes of the serious damage from this moderately strong earthquake may help us to evaluate seismic hazards for similar earthquakes. Besides the factors that relate directly to the damage, such as site effects and the quality of buildings, the seismogenic structures and the characteristics of the mainshock and the aftershocks may also be responsible for the seismic hazard. Since the focal mechanism solution (FMS) and centroid depth provide key information on earthquake source properties and the tectonic stress field, and the focal depth is one of the most important parameters controlling the damage of earthquakes, obtaining precise FMSs and focal depths of the Ludian earthquake sequence may help us to determine the detailed geometric features of the rupture fault and the seismogenic environment. In this work we obtained the FMSs and centroid depths of the Ludian earthquake and its Ms > 3.0 aftershocks by the revised CAP method, and further verified some focal depths using the depth-phase method. Combining the FMSs of the mainshock and the strong aftershocks, their spatial distributions, and the seismogenic environment of the source region, we draw the following conclusions about the Ludian earthquake sequence and its seismogenic structure: (1) The Ludian earthquake is a left-lateral strike-slip earthquake, with a magnitude of about Mw 6.1. The FMS of nodal plane I is 75°/56°/180° for strike, dip and rake angles, and 165°/90°/34° for the other nodal plane. (2) The Ludian earthquake is very shallow, with an optimum centroid depth of ~3 km, which is consistent with the strong ground shaking and the surface rupture observed by field survey and strengthens the damage of the Ludian earthquake. (3) The Ludian earthquake most likely occurred on the NNW-trending BXF.
Because two later aftershocks occurred close to the fault zone of the ZLF, and their FMSs are similar to the characteristics of the ZLF, the shallower part of the ZLF may also have ruptured during the aftershock period of the Ludian earthquake. Since the ZLF is much longer than the BXF, the seismic risk of the ZLF may be high and warrants more attention.
Sequential Data Assimilation for Seismicity: a Proof of Concept
NASA Astrophysics Data System (ADS)
van Dinther, Ylona; Kuensch, Hans Rudolf; Fichtner, Andreas
2017-04-01
Integrating geological and geophysical observations, laboratory results and physics-based numerical modeling is crucial to improve our understanding of the occurrence of large subduction earthquakes. How to do this integration is less obvious, especially in light of the scarcity and uncertainty of natural and laboratory data and the difficulty of modeling the physics governing earthquakes. One way to efficiently combine information from these sources in order to estimate states and/or parameters is data assimilation, a mathematically sound framework extensively developed for weather forecasting purposes. We demonstrate the potential of using data assimilation by applying an Ensemble Kalman Filter to recover the current and forecast the future state of stress and strength on the megathrust based on data from a single borehole. Data and their errors are assimilated for the first time to update a partial-differential-equation-driven seismic cycle model, using the least-squares solution of Bayes' theorem. This visco-elasto-plastic continuum forward model solves the Navier-Stokes equations with a rate-dependent friction coefficient. To prove this concept we perform a perfect model test in an analogue subduction zone setting. Synthetic numerical data from a single analogue borehole are assimilated into 150 ensemble models. Since we know the true state of the numerical data model, a quantitative and qualitative evaluation shows that meaningful information on the stress and strength is available, even when only data from a single borehole are assimilated over only a part of a seismic cycle. This is possible, since the sampled error covariance matrix contains prior information on the physics that relates velocities, stresses, and pressures at the surface to those at the fault. During the analysis step, stress and strength distributions are thus reconstructed in such a way that fault coupling can be updated to either inhibit or trigger events.
In the subsequent forward propagation step the physical equations are solved to propagate the updated states forward in time and thus provide probabilistic information on the occurrence of the next analogue earthquake. At the next assimilation step(s), the system's forecasting ability turns out to be distinctly better than using a periodic model to forecast this simple, quasi-periodic sequence. Combining our knowledge of physical laws with observations thus seems to be a useful tool that could be used to improve probabilistic seismic hazard assessment and increase our physical understanding of the spatiotemporal occurrence of earthquakes, subduction zones, and other Solid Earth systems.
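The analysis step described above, in which surface observations update hidden fault stress through the sampled error covariance, can be sketched with a stochastic Ensemble Kalman Filter update. The two-component toy state, observation operator, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic Ensemble Kalman Filter analysis step (sketch).
    ensemble: (n_members, n_state); obs: (n_obs,) borehole-style data;
    obs_operator H maps state to observations; obs_var: obs error variance."""
    n, _ = ensemble.shape
    H = obs_operator
    X = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = X.T @ X / (n - 1)                           # sampled covariance
    R = obs_var * np.eye(len(obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturb observations so the analysis ensemble keeps correct spread.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
# Toy state: [surface velocity (observed), fault stress (hidden)], correlated
# so that the covariance carries surface information down to the fault.
truth = np.array([1.0, 2.0])
ens = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.9], [0.9, 1.0]], size=200)
H = np.array([[1.0, 0.0]])                          # observe component 0 only
analysis = enkf_update(ens, np.array([truth[0]]), H, 0.01, rng)
```

After the update, the unobserved "fault" component is pulled toward the truth purely through the prior covariance, which is the mechanism the abstract exploits with a single borehole.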
W phase source inversion for moderate to large earthquakes (1990-2010)
Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.
2012-01-01
Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. 
within 10 hr) preceded by another large earthquake, which disturbs the waveforms of the target event. To deal with such difficult situations, we remove the perturbation caused by earlier disturbing events by subtracting the corresponding synthetics from the data. The CMT parameters for the disturbed event can then be retrieved using the residual seismograms. We also explore the feasibility of obtaining source parameters of smaller earthquakes, and find that reliable W phase solutions can generally be obtained for events of Mw = 6 or larger.
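At its core, a CMT-style inversion like the one described above is a linear least-squares problem: the filtered waveforms are modeled as Green's-function kernels times the six independent moment-tensor elements. The sketch below uses random synthetic kernels as stand-ins for actual W phase Green's functions, purely to illustrate the algebra:

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Linear least-squares source inversion, the algebraic core of a
    CMT/W-phase style algorithm: data d modeled as kernel matrix G times
    the moment-tensor element vector m."""
    m, _, _, _ = np.linalg.lstsq(G, d, rcond=None)
    misfit = np.linalg.norm(G @ m - d) / np.linalg.norm(d)
    return m, misfit

rng = np.random.default_rng(2)
n_samples, n_mt = 500, 6          # 6 independent moment-tensor elements
G = rng.normal(size=(n_samples, n_mt))              # synthetic kernels
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2]) # illustrative source
d = G @ m_true + 0.01 * rng.normal(size=n_samples)  # data with noise
m_est, misfit = invert_moment_tensor(G, d)
```

In the real algorithm this linear solve is wrapped in an outer search over centroid time and location, which is the nonlinear part of the full CMT inversion.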
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with the instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records reaching back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using a maximum likelihood algorithm that allows for the incompleteness of the catalog. To compute the hazard, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years assuming bedrock conditions. The resulting PGA and SA maps show a significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of the historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
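The Gutenberg-Richter b value mentioned above is usually estimated with the Aki/Utsu maximum-likelihood formula; a minimal sketch follows. The synthetic catalog, completeness magnitude, and bin width are illustrative assumptions, not the study's data:

```python
import math
import random

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b value, with the
    standard half-bin correction dm/2 for binned magnitudes.  Only events
    at or above the completeness magnitude m_c are used."""
    m = [x for x in magnitudes if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

# Synthetic catalog drawn from a G-R distribution with b = 1.0 above
# completeness m_c = 4.0 (continuous magnitudes, so dm = 0 here).
random.seed(3)
b_true, m_c = 1.0, 4.0
beta = b_true * math.log(10.0)
catalog = [m_c + random.expovariate(beta) for _ in range(20000)]
b_est = b_value_mle(catalog, m_c, dm=0.0)
```

Handling catalog incompleteness, as the study does, amounts to estimating m_c per time period and applying such a formula only to the complete portion of each sub-catalog.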
Earthquake-driven erosion of organic carbon at the eastern margin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Li, G.; West, A. J.; Hara, E. K.; Hammond, D. E.; Hilton, R. G.
2016-12-01
Large earthquakes can trigger massive landsliding that erodes particulate organic carbon (POC) from vegetation, soil and bedrock, potentially linking seismotectonics to the global carbon cycle. Recent work (Wang et al., 2016, Geology) has highlighted a dramatic increase in riverine export of biospheric POC following the 2008 Mw7.9 Wenchuan earthquake, in the steep Longmen Shan mountain range at the eastern margin of the Tibetan Plateau. However, a complete, source-to-sink picture of POC erosion after the earthquake is still missing. Here we track POC transfer across the Longmen Shan range from high mountains to the downstream Zipingpu reservoir where riverine-exported POC has been trapped. Building on the work of Wang et al. (2016), who measured the compositions and fluxes of riverine POC, this study is focused on constraining the source and fate of the eroded POC after the earthquake. We have sampled landslide deposits and river sediment, and we have cored the Zipingpu reservoir, following a source-to-sink sampling strategy. We measured POC compositions and grain size of the sediment samples, mapped landslide-mobilized POC using maps of landslide inventory and biomass, and tracked POC loading from landslides to the reservoir sediment to constrain the fate of the eroded POC. Constraints on carbon sources, fluxes and fate provide the foundation for constructing a post-earthquake POC budget. This work highlights the role of earthquakes in the mobilization and burial of POC, providing new insight into mechanisms linking tectonics and the carbon cycle and building understanding needed to interpret past seismicity from sedimentary archives.
Spatial Evaluation and Verification of Earthquake Simulators
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
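The power-law smoothing idea above can be sketched directly: each simulated, on-fault earthquake contributes a unit of rate spread over the whole test region with an ETAS-style kernel. The kernel parameters d and q, the grid, and the synthetic "fault" below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def smoothed_rate_map(event_xy, grid_xy, d=5.0, q=1.5):
    """Distribute each simulated earthquake's rate over the test region
    with a power-law kernel proportional to (r + d)^(-q), so off-fault
    cells receive a nonzero, distance-decaying rate."""
    rates = np.zeros(len(grid_xy))
    for ex, ey in event_xy:
        r = np.hypot(grid_xy[:, 0] - ex, grid_xy[:, 1] - ey)
        kernel = (r + d) ** (-q)
        rates += kernel / kernel.sum()      # each event contributes unit rate
    return rates

# Toy test region (km): simulated events confined to a "fault" at x = 50.
xs, ys = np.meshgrid(np.arange(0, 100, 2.0), np.arange(0, 100, 2.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
sim_events = np.array([(50.0, y) for y in range(5, 100, 10)], dtype=float)
rates = smoothed_rate_map(sim_events, grid)
# An observed epicenter 20 km off the fault still lands in a nonzero-rate
# cell, which is what makes ROC-style scoring possible.
i_off = np.argmin(np.hypot(grid[:, 0] - 70.0, grid[:, 1] - 50.0))
i_on = np.argmin(np.hypot(grid[:, 0] - 50.0, grid[:, 1] - 50.0))
```

A receiver operating characteristic curve is then built by thresholding such a rate map and scoring hits and false alarms against the observed epicenters.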
Validation of Atmospheric Signals Associated with some of the Major Earthquakes in Asia (2003-2009)
NASA Technical Reports Server (NTRS)
Ouzounov, D. P.; Pulinets, S.; Liu, J. Y.; Hattori, K.; Parrot, M.; Taylor, P. T.
2010-01-01
The recent catastrophic earthquake in Haiti (January 2010) has renewed interest in the important question of the existence of precursory signals related to strong earthquakes. Recent studies (VESTO workshop in Japan, 2009) have shown that there were precursory atmospheric signals observed on the ground and in space associated with several recent earthquakes. The major question, still widely debated in the scientific community, is whether such signals systematically precede major earthquakes. To address this problem we have started to validate the anomalous atmospheric signals during the occurrence of large earthquakes. Our approach is based on an integrated analysis of several physical and environmental parameters (thermal infrared radiation, electron concentration in the ionosphere, radon/ion activities, air temperature and seismicity) that have been found to be associated with earthquakes. We performed hind-cast detection over three different regions of high seismicity (Taiwan, Japan and Kamchatka) for the period 2003-2009. We used existing thermal satellite data (Aqua and POES), in situ atmospheric data (NOAA/NCEP), and ionospheric variability data (GPS/TEC and DEMETER). The first part of this validation included 42 major earthquakes (M greater than 5.9): 10 events in Taiwan, 15 events in Japan, 15 events in Kamchatka, and two recent events, the M8.0 Wenchuan earthquake (May 2008) in China and the M7.9 Samoa earthquake (Sep 2009). Our initial results suggest a systematic appearance of atmospheric anomalies near the epicentral area, 1 to 5 days prior to the largest earthquakes, that could be explained by a coupling process between the observed physical parameters and the earthquake preparation processes.
A phase coherence approach to identifying co-located earthquakes and tremor
NASA Astrophysics Data System (ADS)
Hawthorne, J. C.; Ampuero, J.-P.
2018-05-01
We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources is small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy (potentially foreshocks or groups of foreshocks) within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
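The core of the method, cross-correlating two sources at each station and checking whether the residual cross-spectral phases line up across the array, can be sketched as follows. The station count, record length, and random "path" filters are illustrative assumptions, not the paper's data:

```python
import numpy as np

def interstation_phase_coherence(rec_a, rec_b):
    """Phase coherence between two sources across a station array (sketch).
    rec_a, rec_b: (n_stations, n_samples) records of sources A and B.
    For co-located sources the single-station cross-correlation cancels
    the shared Green's function phase, so the residual phases are the
    same at every station and the coherence approaches 1."""
    cross = np.fft.rfft(rec_a, axis=1) * np.conj(np.fft.rfft(rec_b, axis=1))
    phasors = cross / (np.abs(cross) + 1e-12)      # unit phasors
    return np.abs(phasors.mean(axis=0))            # coherence per frequency

rng = np.random.default_rng(4)
n_sta, n = 8, 256
src_a, src_b = rng.normal(size=n), rng.normal(size=n)
paths = rng.normal(size=(n_sta, n))                # one path filter per station

def record(src):
    """Record a source through the SAME path filters (co-located case)."""
    return np.fft.irfft(np.fft.rfft(paths, axis=1) * np.fft.rfft(src), n, axis=1)

coh_coloc = interstation_phase_coherence(record(src_a), record(src_b))
# Different path filters for source B simulate a non-co-located source.
other_paths = rng.normal(size=(n_sta, n))
rec_b_far = np.fft.irfft(np.fft.rfft(other_paths, axis=1) * np.fft.rfft(src_b),
                         n, axis=1)
coh_far = interstation_phase_coherence(record(src_a), rec_b_far)
```

For sources that do not share path effects, the cross-correlation phases are incoherent across stations and the averaged phasors largely cancel, which is the discriminant the method relies on.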
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster due to the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system of earthquake and tsunami observation (S-NET) at the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) which are separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates the cable network systems of seismometers and pressure sensors (DONET and DONET2). Those systems are the densest observation networks on top of source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has depended on estimation of earthquake parameters, such as the epicenter, depth, and magnitude of earthquakes. Recently, a tsunami forecast method has been developed using the estimation of the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when we have many pressure sensors separated by 30 km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation from those dense tsunami observations. Observed tsunami height differences over a time interval at the ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from that estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012) from observed tsunami waveforms and coseismic deformation observed by GPS and ocean bottom sensors is used in this study.
The ocean surface deformation is computed from the source model and used as an initial condition of the tsunami simulation. By assuming that this computed tsunami is a real tsunami observed at the ocean bottom sensors, a new tsunami simulation is carried out using the above method. Stations in the assumed distribution (each separated by 15 arc minutes, about 30 km) observe tsunami waveforms which were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The Near-field Tsunami Inundation forecast method (Gusman et al. 2014) was used to estimate the tsunami inundation along the Sanriku coast. The result shows that the observed tsunami inundation is well explained by the estimated inundation. It takes about 10 minutes from the origin time of the earthquake to estimate the tsunami inundation. This new method developed in this paper is very effective for real-time tsunami forecasting.
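Once a tsunami height distribution has been estimated from the dense pressure sensors, the forecast reduces to stepping a long-wave model forward from that state rather than from an earthquake source. A minimal 1-D linear long-wave solver illustrates the idea; the constant depth, grid spacing, Gaussian initial hump, and reflective boundaries are illustrative assumptions, not the operational model:

```python
import numpy as np

def simulate_tsunami_1d(eta0, depth, dx, dt, n_steps, g=9.81):
    """1-D linear long-wave solver on a staggered grid, initialized
    directly from an estimated sea-surface height eta0 (m) instead of an
    earthquake source model.  Constant depth (m), reflective ends."""
    eta = eta0.copy()
    u = np.zeros(len(eta) + 1)             # velocities at cell faces
    for _ in range(n_steps):
        # Momentum: du/dt = -g d(eta)/dx  (interior faces only)
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
        # Continuity: d(eta)/dt = -depth * du/dx
        eta -= depth * dt / dx * (u[1:] - u[:-1])
    return eta

# Long-wave speed c = sqrt(g*depth); the time step must satisfy c*dt/dx < 1.
depth, dx = 4000.0, 10000.0                # 4 km deep ocean, 10 km grid
c = np.sqrt(9.81 * depth)                  # roughly 200 m/s
dt = 0.5 * dx / c
x = np.arange(200) * dx
eta0 = np.exp(-((x - 500e3) / 50e3) ** 2)  # Gaussian hump near 500 km
eta = simulate_tsunami_1d(eta0, depth, dx, dt, n_steps=40)
```

The staggered, conservative form keeps total water volume constant, and the initial hump splits into two waves traveling in opposite directions, as expected for the linear long-wave equations.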
Structure-specific scalar intensity measures for near-source and ordinary earthquake ground motions
Luco, N.; Cornell, C.A.
2007-01-01
Introduced in this paper are several alternative ground-motion intensity measures (IMs) that are intended for use in assessing the seismic performance of a structure at a site susceptible to near-source and/or ordinary ground motions. A comparison of such IMs is facilitated by defining the "efficiency" and "sufficiency" of an IM, both of which are criteria necessary for ensuring the accuracy of the structural performance assessment. The efficiency and sufficiency of each alternative IM, which are quantified via (i) nonlinear dynamic analyses of the structure under a suite of earthquake records and (ii) linear regression analysis, are demonstrated for the drift response of three different moderate- to long-period buildings subjected to suites of ordinary and of near-source earthquake records. One of the alternative IMs in particular is found to be relatively efficient and sufficient for the range of buildings considered and for both the near-source and ordinary ground motions. © 2007, Earthquake Engineering Research Institute.
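The "efficiency" criterion above is commonly quantified as the dispersion of residuals from a linear regression of log drift on log IM: the smaller the dispersion, the fewer records are needed for a given confidence in the demand estimate. The sketch below compares two synthetic IMs; the data, parameter values, and lognormal noise are invented for illustration, not the paper's analyses:

```python
import numpy as np

def im_efficiency(im_values, drift_values):
    """Dispersion (standard deviation of residuals) of a linear regression
    of ln(drift) on ln(IM).  A smaller value indicates a more 'efficient'
    intensity measure for predicting the structural demand."""
    x, y = np.log(im_values), np.log(drift_values)
    coef = np.polyfit(x, y, 1)                 # [slope, intercept]
    resid = y - np.polyval(coef, x)
    return np.std(resid, ddof=2)

rng = np.random.default_rng(5)
n = 200
sa = rng.lognormal(0.0, 0.6, n)                # IM well correlated with drift
drift = 0.01 * sa * rng.lognormal(0.0, 0.2, n) # synthetic drift demands
pga = sa * rng.lognormal(0.0, 0.5, n)          # noisier surrogate IM
sigma_sa = im_efficiency(sa, drift)
sigma_pga = im_efficiency(pga, drift)
```

Sufficiency is checked separately, by regressing the same residuals on magnitude and distance and testing whether the trend is statistically insignificant.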
Database of potential sources for earthquakes larger than magnitude 6 in Northern California
1996-01-01
The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rates, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.
Source processes of strong earthquakes in the North Tien-Shan region
NASA Astrophysics Data System (ADS)
Kulikova, G.; Krueger, F.
2013-12-01
The Tien-Shan region attracts the attention of scientists worldwide due to its complexity and tectonic uniqueness. A series of very strong, destructive earthquakes occurred in Tien-Shan at the turn of the 19th and 20th centuries. Such large intraplate earthquakes are rare in seismology, which increases the interest in the Tien-Shan region. The presented study focuses on the source processes of large earthquakes in Tien-Shan. The amount of seismic data is limited for those early times. In 1889, when a major earthquake occurred in Tien-Shan, seismic instruments were installed in very few locations in the world, and these analog records have not survived to the present day. Although around a hundred seismic stations were operating worldwide at the beginning of the 20th century, it is not always possible to obtain high-quality analog seismograms. Digitizing seismograms is a very important step in the work with analog seismic records. While working with historical seismic records, one has to take into account all the aspects and uncertainties of manual digitizing, and the lack of accurate timing and instrument characteristics. In this study, we develop an easy-to-handle and fast digitization program on the basis of already existing software, which allows us to speed up the digitizing process and to account for all the recording-system uncertainties. Owing to the lack of absolute timing for the historical earthquakes (due to the absence of a universal clock at that time), we used time differences between P and S phases to relocate the earthquakes in North Tien-Shan, and the body-wave amplitudes to estimate their magnitudes. Combining our results with geological data, five earthquakes in North Tien-Shan were precisely relocated. The digitizing of records can introduce steps into the seismograms, which makes restitution (removal of instrument response) undesirable.
To avoid the restitution, we simulated historic seismograph recordings with given values for the damping and free period of the respective instrument and compared the amplitude ratios (between P, PP, S and SS) of the real data and the simulated seismograms. At first, the depth and the focal mechanism of the earthquakes were determined based on the amplitude ratios for a point source. Further, on the basis of the ISOLA software, we developed an application which calculates kinematic source parameters for historical earthquakes without restitution. Based on a sub-event approach, kinematic source parameters could be determined for a subset of the events. We present the results for five major instrumentally recorded earthquakes in North Tien-Shan. The strongest one was the Chon-Kemin earthquake of 3 January 1911. Its relocated epicenter is 42.98N and 77.33E, 80 km south of the catalog location. The depth is determined to be 28 km. The obtained focal mechanism shows strike, dip, and slip angles of 44°, 82°, and 56°, respectively. The moment magnitude is calculated to be Mw 8.1. The source time duration is 45 s, which gives about a 120 km rupture length.
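Relocating events from S-P time differences, as done above when absolute timing is unavailable, rests on a simple relation: for straight-ray travel at constant velocities, distance d = dt * vp * vs / (vp - vs). A one-function sketch follows; the velocities are typical crustal values, not a calibrated Tien-Shan model:

```python
def sp_distance_km(ts_minus_tp, vp=6.0, vs=3.5):
    """Source-station distance (km) from the S-P arrival-time difference
    (s), assuming constant P and S velocities (km/s) along the path:
    d = dt * vp * vs / (vp - vs).  With vp = 6.0 and vs = 3.5 the factor
    is 8.4 km per second of S-P time."""
    return ts_minus_tp * vp * vs / (vp - vs)

d = sp_distance_km(20.0)   # a 20 s S-P time implies a distance of 168 km
```

Distances from several stations then constrain the epicenter by intersection, even when the station clocks cannot be compared with one another.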
NASA Astrophysics Data System (ADS)
Wilson, B.; Paradise, T. R.
2016-12-01
The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian Fault zones. In contrast to other countries in the Middle East where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into cities, towns, and villages, placing stress on urban settings and increasing potential exposure to strong shaking. Yet displaced populations are not traditionally captured in the data sources used in earthquake risk analysis or loss estimation. Accordingly, we present a district-level analysis assessing the spatial overlap of earthquake hazards and refugee locations in southeastern Turkey to determine how migration patterns are altering seismic risk in the region. Using migration estimates from the U.S. Humanitarian Information Unit, we create three district-level population scenarios that combine official population statistics, refugee camp populations, and low, median, and high bounds for integrated refugee populations. We perform probabilistic seismic hazard analysis alongside these population scenarios to map spatial variations in seismic risk between 2011 and late 2015. Our results show a significant relative southward increase of seismic risk for this period due to refugee migration. Additionally, we calculate earthquake fatalities for simulated earthquakes using a semi-empirical loss estimation technique to determine the degree of underestimation that results from omitting migration data in loss modeling. We find that including refugee populations increased casualties by 11-12% using median population estimates, and upwards of 20% using high population estimates. These results communicate the ongoing importance of placing environmental hazards in their appropriate regional and temporal context, which unites physical, political, cultural, and socio-economic landscapes.
Keywords: Earthquakes, Hazards, Loss-Estimation, Syrian Crisis, Migration, Refugees
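The scenario comparison above can be sketched with the simplest form of a semi-empirical loss model: fatalities are the exposed population times an empirical fatality rate for the simulated shaking level. The district populations and the fatality rate below are invented for illustration, not the study's data:

```python
def expected_fatalities(populations, fatality_rate):
    """Semi-empirical loss sketch: fatalities = exposed population times an
    empirical fatality rate for the simulated shaking level.  In a full
    model the rate would vary by shaking intensity and building type."""
    return {name: pop * fatality_rate for name, pop in populations.items()}

# Hypothetical district population under three refugee-integration scenarios.
scenarios = {
    "official_only":   100_000,
    "median_refugees": 112_000,   # official + median integrated estimate
    "high_refugees":   120_000,   # official + high integrated estimate
}
losses = expected_fatalities(scenarios, fatality_rate=0.001)
increase_median = losses["median_refugees"] / losses["official_only"] - 1.0
```

Because the loss scales with exposed population, omitting a 12% undercounted population undercounts casualties by the same 12%, which is the mechanism behind the underestimation the study quantifies.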
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.
2015-12-01
Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group. The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP).
We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being adapted to ground motion prediction experiments.
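As an illustration of the kind of consistency testing CSEP performs, a minimal sketch of a Poisson N-test (comparing a forecast's expected event count with the observed count) might look like the following; the function name and the quantities tested are illustrative, not CSEP's actual implementation:

```python
import math

def n_test(forecast_rate, observed_count):
    """Poisson N-test sketch: return the probabilities of observing
    at least (delta1) and at most (delta2) the observed number of
    events, given the forecast's expected count."""
    def cdf(k, lam):
        # cumulative Poisson probability P(N <= k)
        return sum(math.exp(-lam) * lam**i / math.factorial(i)
                   for i in range(k + 1))
    delta1 = 1.0 - cdf(observed_count - 1, forecast_rate)  # P(N >= obs)
    delta2 = cdf(observed_count, forecast_rate)            # P(N <= obs)
    return delta1, delta2

# a forecast expecting 10 events is consistent with observing 10 ...
print(n_test(10.0, 10))
# ... but would be rejected if 25 events occurred
print(n_test(10.0, 25))
```

A forecast is typically judged inconsistent when either probability falls below a chosen significance level (e.g. 0.025 for a two-sided test).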
Uncertainties in evaluation of hazard and seismic risk
NASA Astrophysics Data System (ADS)
Marmureanu, Gheorghe; Marmureanu, Alexandru; Ortanza Cioflan, Carmen; Manea, Elena-Florinela
2015-04-01
Two methods are commonly used for seismic hazard assessment: probabilistic (PSHA) and deterministic (DSHA) seismic hazard analysis. Selection of a ground motion for engineering design requires a clear understanding of seismic hazard and risk among stakeholders, seismologists and engineers. What is wrong with traditional PSHA or DSHA? PSHA as commonly used in engineering rests on four assumptions developed by Cornell in 1968: (1) a constant-in-time average occurrence rate of earthquakes; (2) a single point source; (3) independent variability of ground motion at a site; (4) Poisson (or "memoryless") behavior of earthquake occurrences. It is a probabilistic method, and "when causality dies, its place is taken by probability, a prestigious term meant to define our inability to predict the course of nature" (Niels Bohr). The DSHA method was used for the original design of Fukushima Daiichi, but the Japanese authorities moved to probabilistic assessment methods, and the probability of exceeding the design-basis acceleration was expected to be 10^-4-10^-6. It was exceeded, in violation of the principles of deterministic hazard analysis (ignoring historical events) (Klügel, J.-U., EGU, 2014). PSHA was developed from mathematical statistics and is not based on earthquake science (invalid physical models - point source and Poisson distribution; invalid mathematics; misinterpretation of the annual probability of exceedance or return period, etc.) and has become a pure numerical "creation" (Wang, PAGEOPH, 168 (2011), 11-25). A key component of seismic hazard assessment, for both PSHA and DSHA, and a major source of uncertainty, is the ground motion attenuation relationship, the so-called ground motion prediction equation (GMPE), which relates a ground motion parameter (e.g., PGA, MMI) to earthquake magnitude M, source-to-site distance R, and an uncertainty term. So far, no one takes into consideration the strongly nonlinear behavior of soils during strong earthquakes.
But how many cities, villages and metropolitan areas in seismic regions are built on rock? Most of them are located on soil deposits. A soil is of basic type sand or gravel (termed coarse soils), silt or clay (termed fine soils), etc. The effect of nonlinearity is very large. For example, if we maintain the same spectral amplification factor (SAF = 5.8942) as for the relatively strong earthquake of May 3, 1990 (Mw = 6.4), then at the Bacău seismic station the peak acceleration for the Vrancea earthquake of May 30, 1990 (Mw = 6.9) should have been a*max = 0.154 g, whereas the actual recorded value was only amax = 0.135 g (-14.16%). Likewise, for the Vrancea earthquake of August 30, 1986 (Mw = 7.1), the peak acceleration should have been a*max = 0.107 g instead of the recorded value of 0.0736 g (-45.57%). Similar data exist for more than 60 seismic stations. There is a strong nonlinear dependence of the SAF on earthquake magnitude at each site. The authors propose an alternative approach, called "real spectral amplification factors", instead of GMPEs for the entire extra-Carpathian area, where all cities and villages are located on soil deposits. Key words: Probabilistic Seismic Hazard; Uncertainties; Nonlinear seismology; Spectral amplification factors (SAF).
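The quoted percentages can be reproduced with a one-line check. Assuming, as the numbers suggest, that the misfit is expressed relative to the recorded peak acceleration, a small sketch:

```python
def overprediction_pct(predicted_g, recorded_g):
    """Error of a constant-SAF (linear) prediction of peak ground
    acceleration, expressed relative to the recorded value.
    A negative result means the linear model over-predicts."""
    return (recorded_g - predicted_g) / recorded_g * 100.0

# May 30, 1990 (Mw 6.9): predicted 0.154 g vs recorded 0.135 g
print(round(overprediction_pct(0.154, 0.135), 1))   # about -14 %
# August 30, 1986 (Mw 7.1): predicted 0.107 g vs recorded 0.0736 g
print(round(overprediction_pct(0.107, 0.0736), 1))  # about -45 %
```

The small differences from the quoted -14.16% and -45.57% reflect rounding of the underlying amplitudes.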
NASA Astrophysics Data System (ADS)
Bossu, R.; Mazet-Roux, G.; Roussel, F.; Frobert, L.
2011-12-01
Rapid characterisation of earthquake effects is essential for a timely and appropriate response in favour of victims and/or eyewitnesses. In the case of damaging earthquakes, any field observations that can fill the information gap characterising their immediate aftermath can contribute to more efficient rescue operations. This paper presents the latest developments of a method called "flash-sourcing" that addresses these issues. It relies on eyewitnesses, the first informed and the first concerned by an earthquake occurrence. More precisely, their use of the EMSC earthquake information website (www.emsc-csem.org) is analysed in real time to map the area where the earthquake was felt and to identify, at least under certain circumstances, zones of widespread damage. The approach is based on the natural and immediate convergence of eyewitnesses on the website: they rush to the Internet to investigate the cause of the shaking they just felt, causing our traffic to surge. The area where an earthquake was felt is mapped simply by locating the Internet Protocol (IP) addresses active during these traffic surges. In addition, the presence of eyewitnesses browsing our website within minutes of an earthquake occurrence excludes the possibility of widespread damage in the localities they originate from: in case of severe damage, the networks would be down. The validity of the information derived from this clickstream analysis is confirmed by comparisons with EMS98 macroseismic maps obtained from online questionnaires. The name of this approach, "flash-sourcing", is a combination of "flash crowd" and "crowdsourcing", intended to reflect the rapidity of the data collation from the public. For computer scientists, a flash crowd is a traffic surge on a website. Crowdsourcing means work being done by a "crowd" of people; it also characterises Internet and mobile applications collecting information from the public, such as online macroseismic questionnaires.
Like other crowdsourcing techniques, flash-sourcing is a crowd-to-agency system, but unlike them it is based not on declarative information (e.g. answers to a questionnaire) but on implicit data: the clickstream observed on our website. We first present the main improvements of the method: improved detection of traffic surges and a way to instantly map areas affected by severe damage or network disruptions. The second part describes how the derived information improves and speeds up public earthquake information and, beyond seismology, what it can teach us about public behaviour when facing an earthquake. Finally, the discussion focuses on future evolutions and how flash-sourcing could ultimately improve earthquake response.
Zhang, R.R.; Ma, S.; Hartzell, S.
2003-01-01
In this article we use empirical mode decomposition (EMD) to characterize the 1994 Northridge, California, earthquake records and investigate the signatures carried over from the source rupture process. Comparison of the current results with existing source inverse solutions that use traditional data processing suggests that the EMD-based characterization contains information that sheds light on aspects of the earthquake rupture process. We first summarize the fundamentals of the EMD and illustrate its features through the analysis of a hypothetical and a real record. Typically, the Northridge strong-motion records are decomposed into eight or nine intrinsic mode functions (IMFs), each of which emphasizes a different oscillation mode with different amplitude and frequency content. The first IMF has the highest-frequency content; frequency content decreases with increasing IMF index. With the aid of a finite-fault inversion method, we then examine aspects of the source of the 1994 Northridge earthquake that are reflected in the second to fifth IMF components. This study shows that the second IMF is predominantly wave motion generated near the hypocenter, with high-frequency content that might be related to a large stress drop associated with the initiation of the earthquake. As one progresses from the second to the fifth IMF component, there is a general migration of the source region away from the hypocenter with associated longer-period signals as the rupture propagates. This study suggests that the different IMF components carry information on the earthquake rupture process that is expressed in their different frequency bands.
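The sifting step at the heart of EMD can be sketched in a few lines. This is a simplified illustration (fixed iteration count, no standard-deviation stopping criterion), not the exact algorithm used in the study:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, max_iter=10):
    """Extract one intrinsic mode function (IMF) with the basic sifting
    loop: repeatedly subtract the mean of the upper and lower envelopes
    interpolated through the local extrema."""
    h = x.copy()
    for _ in range(max_iter):
        # interior local maxima and minima
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - 0.5 * (upper + lower)   # remove the local envelope mean
    return h

# synthetic "record": a 40 Hz oscillation riding on a stronger 4 Hz one
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 40 * t) + 2.0 * np.sin(2 * np.pi * 4 * t)
imf1 = sift(x, t)      # first IMF: the highest-frequency mode
residual = x - imf1    # lower-frequency remainder, sifted again in full EMD
```

Repeating the sift on the residual yields the successive IMFs, which is why frequency content decreases with IMF index.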
Statistical physics approach to earthquake occurrence and forecasting
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio
2016-04-01
There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence, which represent the frame of reference for a variety of statistical mechanical models, ranging from spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, which account for universality and provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering.
In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different levels of prediction. In this review we also briefly discuss how the statistical mechanics approach can be applied to non-tectonic earthquakes and to other natural stochastic processes, such as volcanic eruptions and solar flares.
Physics of Earthquake Disaster: From Crustal Rupture to Building Collapse
NASA Astrophysics Data System (ADS)
Uenishi, Koji
2018-05-01
Earthquakes of relatively large magnitude may cause serious, sometimes unexpected failures of natural and human-made structures, whether on the surface, underground, or even at sea. In this review, by treating several examples of extraordinary earthquake-related failures, ranging from the collapse of every second building in a commune to the initiation of spontaneous crustal rupture at depth, we consider the physical background behind apparently abnormal earthquake disasters. Simple but rigorous dynamic analyses reveal that such seemingly unusual failures actually occurred for obvious reasons, which may remain unrecognized partly because conventional seismic analyses normally consider only the kinematic aspects of the effects of lower-frequency seismic waves below 1 Hz. Instead of kinematics, a dynamic approach that takes into account the influence of higher-frequency wave components above 1 Hz will be needed to anticipate and explain such extraordinary phenomena and to mitigate the impact of earthquake disasters in the future.
NASA Astrophysics Data System (ADS)
Wang, Ruijia; Gu, Yu Jeffrey; Schultz, Ryan; Zhang, Miao; Kim, Ahyi
2017-08-01
On 2016 January 12, an intraplate earthquake with an initial reported local magnitude (ML) of 4.8 shook the town of Fox Creek, Alberta. While there was no reported damage, this earthquake was widely felt by local residents and suspected to be induced by nearby hydraulic-fracturing (HF) operations. In this study, we determine the earthquake source parameters using moment tensor inversions, and then detect and locate the associated swarm using a waveform cross-correlation based method. The broad-band seismic recordings from regional arrays suggest a moment magnitude (M) of 4.1 for this event, the largest in Alberta in the past decade. Similar to other recent M ∼ 3 earthquakes near Fox Creek, the 2016 January 12 earthquake exhibits a dominant strike-slip (strike = 184°) mechanism with limited non-double-couple components (∼22 per cent). This resolved focal mechanism, which is also supported by forward modelling and P-wave first-motion analysis, indicates a NE-SW oriented compressional axis consistent with the maximum compressive horizontal stress orientations delineated from borehole breakouts. Further detection analysis of industry-contributed recordings unveils 1108 smaller events within a 3 km radius of the epicentre of the main event, showing a close spatio-temporal relation to a nearby HF well. The majority of the detected events are located above the basement, at depths comparable to the injection depth (3.5 km) in the Duvernay Formation shale. The spatial distribution of this earthquake cluster further suggests that (1) the source of the sequence is an N-S-striking fault system and (2) these earthquakes were induced by an HF well close to, but different from, the well that triggered a previous (January 2015) earthquake swarm.
Reactivation of pre-existing, N-S oriented faults analogous to the Pine Creek fault zone, which was reported by earlier studies of active-source seismic and aeromagnetic data, is likely responsible for the occurrence of the January 2016 earthquake swarm and other recent events in the Crooked Lake area.
Hough, S.E.; Kanamori, H.
2002-01-01
We analyze the source properties of a sequence of triggered earthquakes that occurred near the Salton Sea in southern California in the immediate aftermath of the M 7.1 Hector Mine earthquake of 16 October 1999. The sequence produced a number of early events that were not initially located by the regional network, including two moderate earthquakes: the first within 30 sec of the P-wave arrival and a second approximately 10 minutes after the mainshock. We use available amplitude and waveform data from these events to estimate magnitudes to be approximately 4.7 and 4.4, respectively, and to obtain crude estimates of their locations. The sequence of small events following the initial M 4.7 earthquake is clustered and suggestive of a local aftershock sequence. Using both broadband TriNet data and analog data from the Southern California Seismic Network (SCSN), we also investigate the spectral characteristics of the M 4.4 event and other triggered earthquakes using empirical Green's function (EGF) analysis. We find that the source spectra of the events are consistent with expectations for tectonic (brittle shear failure) earthquakes, and infer stress drop values of 0.1 to 6 MPa for six M 2.1 to M 4.4 events. The estimated stress drop values are within the range observed for tectonic earthquakes elsewhere. They are relatively low compared to typically observed stress drop values, which is consistent with expectations for faulting in an extensional, high heat flow regime. The results therefore suggest that, at least in this case, triggered earthquakes are associated with a brittle shear failure mechanism. This further suggests that triggered earthquakes may tend to occur in geothermal-volcanic regions because shear failure occurs at, and can be triggered by, relatively low stresses in extensional regimes.
Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting
NASA Technical Reports Server (NTRS)
Bergman, Eric A.; Solomon, Sean C.
1987-01-01
The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.
Earthquake-induced ground failures in Italy from a reviewed database
NASA Astrophysics Data System (ADS)
Martino, S.; Prestininzi, A.; Romeo, R. W.
2014-04-01
A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground changes triggered by earthquakes of Mercalli epicentral intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (http://www.ceri.uniroma1.it/cn/gis.jsp ) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the Sapienza University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard from long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan, Mexico, earthquake and the 2003 Tokachi-oki, Japan, earthquake). Long-period ground motions are amplified particularly in basins. Because the major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing the strong ground motions; the parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering different choices of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is based on 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the system called GMS (Ground Motion Simulator), which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999).
The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not adequately constrain the short-period components, the reliable period range of the simulation should be interpreted with caution; we therefore consider periods longer than five seconds instead of two seconds in the further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulations show a large variation of the response spectra at a given site, implying that the ground motion is very sensitive to the choice of scenario; studying this variation is required to understand the seismic hazard. Our further study will obtain hazard curves for Nankai Trough earthquakes (M 8~9) by applying probabilistic seismic hazard analysis to the simulation results.
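The two-second validity limit follows from the usual rule of thumb that a finite-difference grid only resolves wavelengths spanning several grid points. The minimum S-wave velocity (500 m/s) and the five points per wavelength used below are illustrative assumptions, since the abstract does not state them:

```python
def min_resolvable_period_s(dx_m, vs_min_ms, points_per_wavelength=5):
    """Shortest reliably resolved period for an FDM grid: the shortest
    trustworthy wavelength is points_per_wavelength * dx, and dividing
    by the slowest S-wave speed converts it to a period."""
    return points_per_wavelength * dx_m / vs_min_ms

# 200 m horizontal spacing with an assumed 500 m/s minimum S velocity
print(min_resolvable_period_s(200.0, 500.0))  # 2.0 (seconds)
```

Halving the minimum velocity (softer sediments) would double the minimum usable period, which is why basin models constrain the practical bandwidth of such simulations.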
Seismic design and engineering research at the U.S. Geological Survey
1988-01-01
The Engineering Seismology Element of the USGS Earthquake Hazards Reduction Program is responsible for the coordination and operation of the National Strong Motion Network to collect, process, and disseminate earthquake strong-motion data, and for the development of improved methodologies to estimate and predict earthquake ground motion. Instrumental observations of strong ground shaking induced by damaging earthquakes and the corresponding response of man-made structures provide the basis for estimating the severity of shaking from future earthquakes, for earthquake-resistant design, and for understanding the physics of seismologic failure in the Earth's crust.
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanism, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered by sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain the source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrap statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be substantially improved.
NASA Astrophysics Data System (ADS)
Pyle, M. L.; Walter, W. R.
2017-12-01
Discrimination between underground explosions and naturally occurring earthquakes is an important endeavor for global security and test-ban treaty monitoring, and ratios of seismic P- to S-wave amplitudes at regional distances have proven to be an effective discriminant. The use of the P/S ratio is rooted in the idea that explosive sources should theoretically generate only compressional energy. While shear energy is observed from explosions in practice, P/S ratios from explosions are generally higher than those from surrounding earthquakes once corrections are made for magnitude and distance. At local distances (< 200 km), which might be needed to detect smaller events, however, this discriminant becomes less reliable. While ratios at some stations still show separation between the earthquake and explosion populations, at other stations the populations are indistinguishable. There is no clear distance or azimuthal trend in which stations discriminate and which do not. A number of factors may play a role in the differences we see between regional and local discrimination, including source effects such as depth and radiation pattern, and path effects such as laterally varying attenuation and focusing/defocusing from layering and scattering. We use data from the Source Physics Experiment (SPE) to investigate some of these effects. SPE is a series of chemical explosions at the Nevada National Security Site (NNSS) designed to improve our understanding and modeling capabilities of the shear waves generated by explosions. Phase I consisted of 5 explosions in granite, and Phase II will move to a contrasting dry alluvium geology. We apply a high-resolution 2D attenuation model to events near the NNSS to examine what role path plays in local P/S ratios, and how well an earthquake-derived model can account for shallower explosion paths.
The model incorporates both intrinsic attenuation and scattering effects and extends to 16 Hz, allowing us to make lateral path corrections and consider high-frequency ratios. Preliminary work suggests that while 2D path corrections modestly improve earthquake amplitude predictions, explosion amplitudes are not well matched, and so P/S ratios do not necessarily improve. Further work is needed to better understand the uses and limitations of 2D path corrections for local P/S ratios.
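The core of the P/S discriminant reduces to thresholding a log amplitude ratio after magnitude and distance corrections; the threshold below is a purely hypothetical placeholder, not a calibrated value from this study:

```python
import math

def classify_event(p_amp, s_amp, threshold=-0.2):
    """Classify an event from its log10 P/S amplitude ratio, assuming
    magnitude and distance corrections have already been applied.
    The -0.2 threshold is hypothetical; real thresholds are calibrated
    per station and frequency band."""
    log_ratio = math.log10(p_amp / s_amp)
    return "explosion-like" if log_ratio > threshold else "earthquake-like"

print(classify_event(1.0, 0.8))  # P-rich signal -> explosion-like
print(classify_event(0.2, 1.0))  # S-rich signal -> earthquake-like
```

The study's point is precisely that at local distances the two populations' log ratios overlap at some stations, so no single threshold separates them reliably.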
NASA Astrophysics Data System (ADS)
Cruz, H.; Furumura, T.; Chavez-Garcia, F. J.
2002-12-01
The estimation of scenarios of the strong ground motions caused by future great earthquakes is an important problem in strong motion seismology. This was highlighted by the great 1985 Michoacan earthquake, which caused extensive damage in Mexico City, 300 km away from the epicenter. Since the seismic wavefield is shaped by source, path and site effects, the pattern of strong motion damage from different types of earthquakes should differ significantly. In this study, scenarios for intermediate-depth normal-faulting, shallow interplate thrust-faulting, and crustal earthquakes have been estimated using a hybrid simulation technique. The character of the seismic wavefield propagating from the source to Mexico City for each earthquake was first calculated using the pseudospectral method for 2D SH waves. The site amplifications in the shallow structure of Mexico City were then calculated using multiple SH-wave reverberation theory. The scenarios of maximum ground motion for both inslab and interplate earthquakes obtained by the simulation show good agreement with the observations, indicating the effectiveness of the hybrid simulation approach for investigating strong motion damage from future earthquakes.
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
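The Poisson assumption mentioned above is what makes aggregation simple: the rates of independent sources add, and the probability of at least one event over an exposure window follows directly. The source rates below are hypothetical, for illustration only:

```python
import math

def prob_at_least_one(rates_per_year, exposure_years):
    """Probability of at least one tsunami-generating event during the
    exposure window, assuming independent Poissonian sources whose
    occurrence rates simply add."""
    total_rate = sum(rates_per_year)
    return 1.0 - math.exp(-total_rate * exposure_years)

# three hypothetical sources with 500, 1000 and 2000 yr mean recurrence
p50 = prob_at_least_one([1 / 500, 1 / 1000, 1 / 2000], 50.0)
print(round(p50, 3))  # 0.161
```

This additivity is why the Poisson simplification is so convenient when combining earthquake and landslide tsunami sources into a single hazard estimate.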
Gomberg, J.; Wolf, L.
1999-01-01
Circumstantial and physical evidence indicates that the 1997 MW 4.9 earthquake in southern Alabama may have been related to hydrocarbon recovery. Epicenters of this earthquake and its aftershocks were located within a few kilometers of active oil and gas extraction wells and two pressurized injection wells. Main shock and aftershock focal depths (2-6 km) are within a few kilometers of the injection and withdrawal depths. Strain accumulation at geologic rates sufficient to cause rupture at these shallow focal depths is not likely. A paucity of prior seismicity is difficult to reconcile with the occurrence of an earthquake of MW 4.9 and a magnitude-frequency relationship usually assumed for natural earthquakes. The normal-fault main-shock mechanism is consistent with reactivation of preexisting faults in the regional tectonic stress field. If the earthquake were purely tectonic, however, the question arises as to why it occurred on only the small fraction of a large, regional fault system coinciding with active hydrocarbon recovery. No obvious temporal correlation is apparent between the earthquakes and recovery activities. Although thus far little can be said quantitatively about the physical processes that may have caused the 1997 sequence, a plausible explanation involves the poroelastic response of the crust to extraction of hydrocarbons.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses centroid moment tensor solutions of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min of the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model was developed for the whole of Taiwan in this study. We have improved the SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve the topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service that makes online earthquake simulation feasible: users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS operates online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.
NASA Astrophysics Data System (ADS)
Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati
2018-05-01
The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), seismic moment (M0 = 1.12×10^17 Nm, Mw 5.3), focal mechanism (φ = 280°, δ = 14°, λ = 84°), source radius (a = 1.3 km), and static stress drop (Δσs ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500f^0.9, κ = 0.04 s, and fmax = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drops of the six events range from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
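The spectral model described in this abstract can be sketched numerically. Below is a minimal, illustrative implementation of the stochastic-method Fourier amplitude spectrum using the attenuation parameters quoted above (Q(f) = 500f^0.9, κ = 0.04 s, no fmax cutoff) with an ω^-2 source; the corner frequency, source-site distance, shear-wave velocity, and the omission of site and radiation-pattern constants are assumptions for illustration, not values from the paper.

```python
import numpy as np

def source_spectrum(f, m0, fc):
    """Omega-squared (Brune) source displacement spectrum."""
    return m0 / (1.0 + (f / fc) ** 2)

def path_terms(f, r_km, beta_kms=3.5, kappa=0.04):
    """Attenuation: Q(f) = 500 f^0.9 anelastic decay, 1/R geometric
    spreading, and kappa high-frequency diminution (fmax taken as infinite)."""
    q = 500.0 * f ** 0.9
    return np.exp(-np.pi * f * r_km / (q * beta_kms)) * \
           np.exp(-np.pi * kappa * f) / r_km

f = np.logspace(-1, 1.5, 200)     # 0.1 to ~31.6 Hz
m0 = 1.12e17                      # Nm, from the abstract
fc = 1.0                          # Hz, illustrative corner frequency
amp = source_spectrum(f, m0, fc) * path_terms(f, r_km=100.0)
```

In a full stochastic simulation this spectral shape would be combined with windowed Gaussian noise and the duration model to produce time histories.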
NASA Astrophysics Data System (ADS)
Kubota, T.; Hino, R.; Inazu, D.; Saito, T.; Iinuma, T.; Suzuki, S.; Ito, Y.; Ohta, Y.; Suzuki, K.
2012-12-01
We estimated source models of small-amplitude tsunamis associated with M-7 class earthquakes in the rupture area of the 2011 Tohoku-Oki Earthquake using near-field records of tsunami recorded by ocean bottom pressure gauges (OBPs). The largest foreshock (Mw=7.3) of the Tohoku-Oki earthquake occurred on 9 March, two days before the mainshock. The tsunami associated with the foreshock was clearly recorded by seven OBPs, as was coseismic vertical deformation of the seafloor. Assuming a planar fault along the plate boundary as a source, the OBP records were inverted for slip distribution. As a result, most of the coseismic slip was found to be concentrated in an area of about 40 x 40 km located to the north-west of the epicenter, suggesting downdip rupture propagation. The seismic moment from our tsunami waveform inversion is 1.4 x 10^20 Nm, equivalent to Mw 7.3. On 10 July 2011, an earthquake of Mw 7.0 occurred near the hypocenter of the mainshock. Its relatively deep focus and strike-slip focal mechanism indicate that this earthquake was an intraslab earthquake. The earthquake was associated with a small-amplitude tsunami. Using the OBP records, we estimated a model of the initial sea-surface height distribution. Our tsunami inversion showed that a pair of uplift and subsidence lobes was required to explain the observed tsunami waveform. The spatial pattern of the seafloor deformation is consistent with the oblique strike-slip solution obtained by the seismic data analyses. The location and strike of the hinge line separating the uplift and subsidence zones correspond well to the linear distribution of the aftershocks determined using local OBS data (Obana et al., 2012).
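The planar-fault slip inversion described above is, at heart, a linear least-squares problem: observed waveforms are modeled as a weighted sum of per-subfault tsunami Green's functions. The sketch below uses hypothetical Green's functions and synthetic data; a real inversion would compute tsunami Green's functions by forward modeling and typically add smoothing and positivity constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: column j of G is the tsunami waveform
# at the pressure gauges produced by unit slip on subfault j.
n_obs, n_sub = 400, 6
G = rng.normal(size=(n_obs, n_sub))

true_slip = np.array([0.0, 0.2, 1.5, 2.0, 0.4, 0.0])   # metres, illustrative
d = G @ true_slip + 0.01 * rng.normal(size=n_obs)      # noisy "observations"

# Least-squares slip estimate; real inversions add smoothing and
# positivity constraints (e.g. non-negative least squares).
slip_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The seismic moment then follows from rigidity × subfault area × summed slip, which is how an Mw is attached to the inverted model.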
NASA Astrophysics Data System (ADS)
Oda, Takuma; Nakamura, Mamoru
2017-09-01
We estimated the location and magnitude of earthquakes constituting the 1858 earthquake swarm in the central Ryukyu Islands using the felt earthquakes recorded by Father Louis Furet who lived in Naha, Okinawa Island, in the middle of the nineteenth century. First, we estimated the JMA seismic intensity of the earthquakes by interpreting the words used to describe the shaking. Next, using the seismic intensity and shaking duration of the felt earthquakes, we estimated the epicentral distance and magnitude range of three earthquakes in the swarm. The results showed that the epicentral distances of the earthquakes were 20-250 km and that magnitudes ranged between 4.5 and 6.5, with a strong correlation between epicentral distance and magnitude. Since the rumblings accompanying some earthquakes in the swarm were heard from a northward direction, the swarm probably occurred to the north of Naha. The most likely source area for the 1858 swarm is the central Okinawa Trough, where a similar swarm event occurred in 1980. If the 1858 swarm occurred in the central Okinawa Trough, the estimated maximum magnitude would have reached 6-7. In contrast, if the 1858 swarm occurred in the vicinity of Amami Island, which is the second most likely candidate area, it would have produced a cluster of magnitude 7-8 earthquakes.
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop and seismic efficiency, and of how source parameters scale with earthquake size, is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fitted, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surfaces change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in fluid pressure diffusion.
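Fitting an ω^γ spectral model with γ as a free parameter, as described above, can be illustrated with a simple grid search; the paper's sensitivity-driven, genetic-algorithm inversion is more sophisticated, and all values here (plateau level, corner frequency, γ, frequency band) are synthetic stand-ins.

```python
import numpy as np

def omega_gamma(f, omega0, fc, gamma):
    """Generalized omega-gamma source displacement spectrum."""
    return omega0 / (1.0 + (f / fc) ** gamma)

f = np.logspace(-0.5, 1.5, 100)
obs = omega_gamma(f, omega0=1.0, fc=3.0, gamma=2.8)   # synthetic "data"

# Coarse grid search over (fc, gamma) with the plateau level fixed,
# minimizing log-spectral misfit.
best, best_err = None, np.inf
for fc_try in np.arange(1.0, 6.01, 0.1):
    for g_try in np.arange(1.5, 4.01, 0.05):
        pred = omega_gamma(f, 1.0, fc_try, g_try)
        err = float(np.sum((np.log(obs) - np.log(pred)) ** 2))
        if err < best_err:
            best, best_err = (fc_try, g_try), err
```

A recovered γ significantly above 2 on real, attenuation-corrected spectra is the kind of evidence the authors cite for non-self-similar behavior.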
Low-frequency source parameters of twelve large earthquakes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Harabaglia, Paolo
1993-01-01
A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with short-term precursors. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of interest. Two other intermediate-depth events have slower rupture processes, characterized by continuous energy release lasting about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, first recognized as a precursive event by Jordan. We model it with a smooth rupture process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.
A New Network-Based Approach for the Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.
2017-12-01
Here we propose a new method that allows an early warning to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimates of these parameters at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and the spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system through retrospective analysis of three earthquakes: the 2016 Mw 6.5 central Italy, 2008 Mw 6.9 Iwate-Miyagi and 2011 Mw 9.0 Tohoku events. Source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of the events.
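The early-warning parameters τc and Pd named above are commonly defined (in Kanamori-style early-warning practice) from the P-wave displacement window: τc = 2π·sqrt(∫u² dt / ∫u̇² dt), and Pd is the peak absolute displacement. A minimal sketch, assuming a clean, noise-free displacement record, is:

```python
import numpy as np

def tau_c_and_pd(u, dt):
    """Characteristic period tau_c = 2*pi*sqrt(int u^2 / int udot^2) and
    peak displacement Pd over a P-wave displacement window u."""
    v = np.gradient(u, dt)                 # velocity by differentiation
    r = np.sum(v ** 2) / np.sum(u ** 2)    # dt cancels in the ratio
    return 2.0 * np.pi / np.sqrt(r), float(np.max(np.abs(u)))

# Check on a pure sinusoid: tau_c should recover the signal period.
dt = 0.01
t = np.arange(0.0, 3.0, dt)                # 3 s P-wave window
u = 0.002 * np.sin(2.0 * np.pi * t / 0.5)  # 2 mm amplitude, 0.5 s period
tau_c, pd = tau_c_and_pd(u, dt)
```

In operation these quantities are recomputed as the P-window expands, which is what makes the PDZ estimate evolutionary.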
Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario
2018-02-01
Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
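The inverse power law rate trend predicted by material failure theory lends itself to a simple forecasting sketch. The paper uses a Bayesian Markov chain Monte Carlo gamma point process; the version below is the much simpler deterministic special case (exponent p = 1), where the inverse event rate decays linearly and its extrapolated zero crossing forecasts the failure time. All numbers are synthetic.

```python
import numpy as np

# Synthetic LP event rate accelerating toward failure at t_f = 24 h,
# following the inverse power law of material failure theory with p = 1.
t_f_true, k = 24.0, 10.0
t = np.linspace(0.0, 20.0, 50)        # observation times (hours)
rate = k / (t_f_true - t)             # events per hour

# For p = 1 the inverse rate decays linearly with time, so its
# extrapolated zero crossing forecasts the failure (eruption) time.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_f_forecast = -intercept / slope
```

With real, noisy counts the rate must be estimated from event times, and the uncertainty on the forecast time is what the Bayesian treatment quantifies.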
Seismotectonic Map of Afghanistan and Adjacent Areas
Wheeler, Russell L.; Rukstales, Kenneth S.
2007-01-01
Introduction This map is part of an assessment of Afghanistan's geology, natural resources, and natural hazards. One of the natural hazards is from earthquake shaking. One of the tools required to address the shaking hazard is a probabilistic seismic-hazard map, which was made separately. The information on this seismotectonic map has been used in the design and computation of the hazard map. A seismotectonic map like this one shows geological, seismological, and other information that previously had been scattered among many sources. The compilation can show spatial relations that might not have been seen by comparing the original sources, and it can suggest hypotheses that might not have occurred to persons who studied those scattered sources. The main map shows faults and earthquakes of Afghanistan. Plate convergence drives the deformations that cause the earthquakes. Accordingly, smaller maps and text explain the modern plate-tectonic setting of Afghanistan and its evolution, and relate both to patterns of faults and earthquakes.
Keefer, D.K.
2000-01-01
The 1989 Loma Prieta, California earthquake (moment magnitude, M=6.9) generated landslides throughout an area of about 15,000 km2 in central California. Most of these landslides occurred in an area of about 2000 km2 in the mountainous terrain around the epicenter, where they were mapped during field investigations immediately following the earthquake. The distribution of these landslides is investigated statistically, using regression and one-way analysis of variance (ANOVA) techniques to determine how the occurrence of landslides correlates with distance from the earthquake source, slope steepness, and rock type. The landslide concentration (defined as the number of landslide sources per unit area) has a strong inverse correlation with distance from the earthquake source and a strong positive correlation with slope steepness. The landslide concentration differs substantially among the various geologic units in the area. The differences correlate to some degree with differences in lithology and degree of induration, but this correlation is less clear, suggesting a more complex relationship between landslide occurrence and rock properties. © 2000 Elsevier Science B.V. All rights reserved.
The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry
NASA Astrophysics Data System (ADS)
Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang
2018-05-01
Over the last decade, satellite gravimetry, as a new class of geodetic sensors, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one modelling deficiency resulting from the upward continuation of gravity change considering the Earth's oblateness, which is ignored in contemporary studies. For the low degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and about 6 per cent for the 2010 Maule earthquake. For high degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, resulting in the seismic moment to be systematically underestimated by 30 per cent.
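The degree-dependent bias described above can be checked with a rough back-of-the-envelope calculation. Degree-n gravity terms continue upward approximately as (r/a)^(n+2), so evaluating the field on a sphere of radius a instead of at the smaller true geocentric radius r of an oblate Earth overestimates them. The sketch below uses standard WGS84 constants and a first-order ellipsoid radius; the simple per-degree scaling is an approximation for illustration, not the paper's full formulation.

```python
import numpy as np

a = 6378.137                      # WGS84 equatorial radius, km
f_ellip = 1.0 / 298.257223563     # WGS84 flattening
lat = 38.3                        # approx. latitude of the Tohoku epicentre

# Geocentric radius of the ellipsoid at this latitude (first-order form)
r = a * (1.0 - f_ellip * np.sin(np.radians(lat)) ** 2)

# Degree-n gravity terms continue upward roughly as (r/a)^(n+2), so a
# spherical model that evaluates the field at radius a overestimates them:
def overestimation(n):
    return (a / r) ** (n + 2) - 1.0
```

This crude estimate gives single-digit-percent bias at degree 60 and tens of percent at degree 240, the same order as the 9 and 30 per cent figures quoted in the abstract.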
Landry, Michel D; Sheppard, Phillip S; Leung, Kit; Retis, Chiara; Salvador, Edwin C; Raman, Sudha R
2016-11-01
The frequency of natural disasters appears to be mounting at an alarming rate, and the degree to which people are surviving such traumatic events also is increasing. Postdisaster survival often triggers increases in population and individual disability-related outcomes in the form of impairments, activity limitations, and participation restrictions, all of which have an important impact on the individual, his or her family, and their community. The increase in postdisaster disability-related outcomes has provided a rationale for the increased role of the disability and rehabilitation sector's involvement in emergency response, including physical therapists. A recent major earthquake that has drawn the world's attention occurred in the spring of 2015 in Nepal. The response of the local and international communities was large and significant, and although the collection of complex health and disability issues have yet to be fully resolved, there has been a series of important lessons learned from the 2015 Nepal earthquake(s). This perspective article outlines lessons learned from Nepal that can be applied to future disasters to reduce overall disability-related outcomes and more fully integrate rehabilitation in preparation and planning. First, information is presented on disasters in general, and then information is presented that focuses on the earthquake(s) in Nepal. Next, field experience in Nepal before, during, and after the earthquake is described, and actions that can and should be adopted prior to disasters as part of disability preparedness planning are examined. Then, the emerging roles of rehabilitation providers such as physical therapists during the immediate and postdisaster recovery phases are discussed. Finally, approaches are suggested that can be adopted to "build back better" for, and with, people with disabilities in postdisaster settings such as Nepal. © 2016 American Physical Therapy Association.
Seismic Source Scaling and Discrimination in Diverse Tectonic Environments
2009-09-30
3349-3352. Imanishi, K., W. L. Ellsworth, and S. G. Prejean (2004). Earthquake source parameters determined by the SAFOD Pilot Hole seismic array ... seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic ... these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity
The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Vassiliou, M. S.
1983-01-01
Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.
The underground seismic array of Gran Sasso (UNDERSEIS), central Italy
NASA Astrophysics Data System (ADS)
Scarpa, R.; Muscente, R.; Tronca, F.; Fischione, C.; Rotella, P.; Abril, M.; Alguacil, G.; Martini, M.; de Cesare, W.
2003-04-01
Since early May 2002, a small-aperture seismic array has been installed in the underground Physics Laboratories of Gran Sasso, located near seismically active faults of the central Apennines, Italy. This array is presently composed of 21 three-component short-period seismic stations (Mark L4C-3D), with an average station spacing of 90 m and a semi-circular aperture of 400 m x 600 m. It intersects a main seismogenic fault where the presence of slow earthquakes has recently been detected by two wide-band geodetic laser interferometers. The underground Laboratories are shielded by a limestone rock layer 1400 m thick. Each seismometer is linked, through a 24-bit A/D board, to a set of 6 industrial PCs via serial RS-485. The six PCs transmit data to a server through an ethernet network. Time synchronization is provided by a Master Oscillator controlled by an atomic clock. The Earthworm package is used for data selection and transmission. High-quality data have been recorded since May 2002, including local and regional earthquakes. In particular, the 31 October 2002 Molise earthquake (Mw=5.8) and its aftershocks have been recorded at this array. Array techniques such as polarisation and frequency-slowness analyses with the MUSIC algorithm indicate the high performance of this array, compared to the national seismic network, for identifying the basic source parameters of earthquakes located at distances of a few hundred km.
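The frequency-slowness analysis mentioned above can be illustrated with its simplest relative, delay-and-sum beamforming: for each trial slowness vector the station records are time-shifted to remove the plane-wave moveout and stacked, and the slowness maximizing stack power is kept (MUSIC is a higher-resolution method built on the same array response). The geometry, pulse, and slowness below are all synthetic assumptions, loosely scaled to a small-aperture array.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 4.0, dt)

# Hypothetical small-aperture geometry (metres), UNDERSEIS-like scale
xy = rng.uniform(-300.0, 300.0, size=(21, 2))

# A plane wave crosses the array with slowness s_true (s/m): each station
# records the same Ricker pulse delayed by s_true . x
s_true = np.array([2.0e-4, -1.0e-4])          # apparent velocity ~4.5 km/s
def ricker(tau):
    arg = (2.0 * np.pi * (tau - 2.0)) ** 2
    return (1.0 - 2.0 * arg) * np.exp(-arg)
waves = np.array([ricker(t - xy[i] @ s_true) for i in range(len(xy))])

# Delay-and-sum grid search: remove the trial moveout and keep the
# slowness vector that maximizes stacked beam power.
s_grid = np.linspace(-4e-4, 4e-4, 41)
best, best_pow = None, -1.0
for sx in s_grid:
    for sy in s_grid:
        shifts = -np.rint(xy @ np.array([sx, sy]) / dt).astype(int)
        beam = np.mean([np.roll(w, n) for w, n in zip(waves, shifts)], axis=0)
        p = float(np.sum(beam ** 2))
        if p > best_pow:
            best_pow, best = p, (sx, sy)
```

The recovered slowness vector gives back-azimuth and apparent velocity, which is how an array locates distant sources without a dense regional network.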
Seismic hazard analysis for Jayapura city, Papua
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robiana, R., E-mail: robiana-geo104@yahoo.com; Cipta, A.
Jayapura city experienced a destructive earthquake on June 25, 1976, with a maximum intensity of VII on the MMI scale. Probabilistic methods are used to determine the earthquake hazard by considering all possible earthquakes that can occur in this region. Three types of earthquake source models are used: a subduction model from the New Guinea Trench subduction zone (North Papuan Thrust); fault models for the Yapen, Tarera-Aiduna, Wamena, Memberamo, Waipago, Jayapura, and Jayawijaya faults; and 7 background models to accommodate unknown earthquakes. Amplification factors based on a geomorphological approach are corrected with measurement data related to rock type and depth of soft soil. Site classes in Jayapura city can be grouped into classes B, C, D and E, with amplification between 0.5 and 6. Hazard maps are presented for a 10% probability of earthquake occurrence within a period of 500 years for the dominant periods of 0.0, 0.2, and 1.0 seconds.
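Probabilistic hazard levels like the one quoted above are conventionally tied, under an assumed Poisson occurrence model, to a mean return period. A minimal sketch of that standard conversion (an assumption about the map's framework, not a detail taken from this abstract):

```python
import math

# Poisson occurrence model: probability of at least one exceedance in
# t_years for a mean return period T_R is P = 1 - exp(-t_years / T_R).
def return_period(p, t_years):
    return -t_years / math.log(1.0 - p)

# The conventional "10% in 50 years" design level maps to a mean return
# period of about 475 years, commonly rounded to 500 years.
tr = return_period(0.10, 50.0)
```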
Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform
NASA Astrophysics Data System (ADS)
Wang, Y.; Ni, S.; Chen, W.
2012-12-01
Determining earthquake source parameters is an essential problem in seismology. Accurate and timely determination of parameters such as moment, depth, and the strike, dip and rake of the fault planes is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists as more detailed kinematic analyses are performed. Some events, however, behave unusually and intrigue seismologists. These earthquakes consist of two similar-sized sub-events occurring within a very short time interval, such as the mb 4.5 earthquake of December 9, 2003, in Virginia. Studying these special events, including determining the source parameters of each sub-event, helps in understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which complicates the inversion. For ordinary events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU-CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia, USA events of 9 December 2003, we re-invert the source parameters, and detailed analysis of the regional waveforms indicates that the Virginia earthquake included two sub-events, Mw 4.05 and Mw 4.25, at the same depth of 10 km with focal mechanism strike 65°/dip 32°/rake 135°, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, requiring no human intervention.
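The grid-search idea behind CAP-style mechanism inversion can be illustrated with a deliberately simplified toy: instead of full waveform misfit, it matches far-field P amplitudes, which for a double couple are proportional to γ·M·γ along each unit take-off direction γ. This is not the CAP method itself, and the mechanism used (65°/30°/135°) is the Virginia-like solution rounded to grid values purely so the synthetic test is exact (the study reports dip 32°).

```python
import numpy as np

def moment_tensor(strike, dip, rake):
    """Unit-moment double-couple tensor (x = north, y = east, z = down),
    Aki & Richards convention."""
    s, d, r = map(np.radians, (strike, dip, rake))
    m = np.empty((3, 3))
    m[0, 0] = -(np.sin(d) * np.cos(r) * np.sin(2 * s)
                + np.sin(2 * d) * np.sin(r) * np.sin(s) ** 2)
    m[1, 1] = (np.sin(d) * np.cos(r) * np.sin(2 * s)
               - np.sin(2 * d) * np.sin(r) * np.cos(s) ** 2)
    m[2, 2] = np.sin(2 * d) * np.sin(r)
    m[0, 1] = m[1, 0] = (np.sin(d) * np.cos(r) * np.cos(2 * s)
                         + 0.5 * np.sin(2 * d) * np.sin(r) * np.sin(2 * s))
    m[0, 2] = m[2, 0] = -(np.cos(d) * np.cos(r) * np.cos(s)
                          + np.cos(2 * d) * np.sin(r) * np.sin(s))
    m[1, 2] = m[2, 1] = -(np.cos(d) * np.cos(r) * np.sin(s)
                          - np.cos(2 * d) * np.sin(r) * np.cos(s))
    return m

rng = np.random.default_rng(2)
rays = rng.normal(size=(30, 3))
rays /= np.linalg.norm(rays, axis=1, keepdims=True)  # unit take-off vectors

# "Observed" far-field P amplitudes (proportional to gamma.M.gamma)
obs = np.einsum('ni,ij,nj->n', rays, moment_tensor(65, 30, 135), rays)

# Grid search over (strike, dip, rake) minimizing L2 amplitude misfit
best, best_err = None, np.inf
for s in range(0, 360, 5):
    for d in range(5, 91, 5):
        for r in range(-180, 180, 15):
            pred = np.einsum('ni,ij,nj->n', rays, moment_tensor(s, d, r), rays)
            err = float(np.sum((obs - pred) ** 2))
            if err < best_err:
                best, best_err = (s, d, r), err
```

CAP replaces the amplitude misfit with windowed body- and surface-wave waveform misfits, which is the expensive inner loop that HYBRID_CAP offloads to the GPU.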
New Field Observations About 19 August 1966 Varto earthquake, Eastern Turkey
NASA Astrophysics Data System (ADS)
Gurboga, S.
2013-12-01
Some destructive earthquakes, past and even recent, retain several mysteries: the magnitude, epicenter location, faulting type, or source fault has never been determined. One such event is the 19 August 1966 Varto earthquake in Turkey. The 19 August 1966 Varto earthquake (Ms = 6.8) was an extraordinary event located about 40 km east of the junction between the NAFS and the EAFS, the two seismogenic systems and active structures that shape the tectonics of Turkey. This earthquake was sourced from the Varto fault zone, approximately 4 km wide and 43 km long, which consists of parallel to sub-parallel, closely spaced, north- and south-dipping faults with dips of up to 85°-88°. Although the event's magnitude of 6.8 (Ms) is large enough to create a surface rupture, no clear surface deformation was detected, which makes the source fault and the mechanism of the earthquake controversial. According to Wallace (1968), the faulting is right-lateral. On the other hand, McKenzie (1972) proposed right-lateral movement with a thrust component based on the focal mechanism solution. Recent work by Sançar et al. (2011) claimed that the faulting is pure right-lateral strike-slip and that there was no surface rupture during the earthquake; furthermore, they suggested that the Varto segment of the Varto Fault Zone was most probably not broken in the 1966 earthquake. This study focuses on field geology and trenching surveys to investigate the 1966 Varto earthquake. Four fault segments have been mapped along the Varto fault zone: the Varto, Sazlica, Leylekdağ and Çayçati segments. Because of the thick volcanic cover in the area around Varto, surface rupture could only be detected by trenching. Two trenching surveys were carried out along the Yayikli and Ağaçalti faults in the Varto fault zone.
Consequently, the detailed geological fieldwork and trenching surveys indicate that: a) the source of the 1966 earthquake is the Varto segment of the Varto Fault Zone; b) many of the surface deformations observed just after the earthquake were lateral spreading and small landslides; c) the surface rupture produced 10 cm of displacement at the surface with a thrust component; because of the volcanic cover and the activation of many faults, the clear ground surface rupture expected after a magnitude 6.8 earthquake could not be observed; and d) the faulting is right-lateral with a thrust component. Keywords: 1966 Varto earthquake, paleoseismology, right-lateral fault with thrust component.
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Hirata, K.; Takahashi, H.; Geist, E.; Satake, K.; Tanioka, Y.; Sugioka, H.; Mikada, H.
2003-01-01
Micro-tsunami waves with a maximum amplitude of 4-6 mm were detected by ocean-bottom pressure gauges on a cabled deep-seafloor observatory south of Hokkaido, Japan, following the January 28, 2000 earthquake (Mw 6.8) in the southern Kuril subduction zone. We model the observed micro-tsunami and estimate the focal depth and other source parameters, such as fault length and amount of slip, using a grid search with the least-squares method. The source depth and stress drop for the January 2000 earthquake are estimated to be 50 km and 7 MPa, respectively, with possible ranges of 45-55 km and 4-13 MPa. The focal depth of typical inter-plate earthquakes in this region ranges from 10 to 20 km, and the stress drop of inter-plate earthquakes is generally around 3 MPa. The source depth and stress drop estimates suggest that the earthquake was an intra-slab event in the subducting Pacific plate, rather than an inter-plate event. In addition, for a prescribed fault width of 30 km, the fault length is estimated to be 15 km, with a possible range of 10-20 km, which is the same as the previously determined aftershock distribution. The corresponding estimate of seismic moment is 2.7×10^19 Nm, with a possible range of 2.3×10^19-3.2×10^19 Nm. Standard tide gauges along the nearby coast did not record any tsunami signal. High-precision ocean-bottom pressure measurements offshore thus make it possible to determine fault parameters of moderate-sized earthquakes in subduction zones using open-ocean tsunami waveforms. Published by Elsevier Science B.V.
Complex earthquake rupture and local tsunamis
Geist, E.L.
2002-01-01
In contrast to far-field tsunami amplitudes, which are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. In a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes, in the magnitude range 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns is generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized, and the vertical displacement fields from point-source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1).
Analysis of the results indicates that for earthquakes of fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field, arising from the inherent complexity of subduction zone earthquakes, than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
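The stochastic source construction described above can be sketched in one dimension: impose a k^-2 spectral amplitude falloff, draw uniformly random phases, and rescale every realization to the same mean slip (i.e., the same seismic moment). All numerical choices below (grid size, handling of k = 0, normalisation) are illustrative assumptions, not the paper's actual parameterisation:

```python
import numpy as np

def self_affine_slip(n=256, mean_slip=1.0, seed=0):
    """One 1-D realization of a self-affine slip distribution: k^-2 spectral
    falloff with random phases, shifted non-negative and rescaled so every
    realization carries the same mean slip (hence the same moment)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)                 # non-dimensional wavenumbers
    amp = np.zeros(k.size)
    amp[1:] = k[1:] ** -2.0                # self-affine k^-2 falloff
    spec = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k.size))
    slip = np.fft.irfft(spec, n)
    slip -= slip.min()                     # slip must be non-negative
    return slip * (mean_slip / slip.mean())

# Two realizations with identical moment but different slip patterns,
# mirroring the N = 100 equal-Mw ensemble used in the study.
a, b = self_affine_slip(seed=1), self_affine_slip(seed=2)
print(a.mean(), b.mean())                  # equal means, different patterns
```

Feeding each realization through an elastic dislocation and tsunami model, as the study does in 2-D, is what produces the factor-of-3 spread in peak nearshore amplitude.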
NASA Astrophysics Data System (ADS)
Toni, Mostafa; Barth, Andreas; Ali, Sherif M.; Wenzel, Friedemann
2016-09-01
On 22 January 2013 an earthquake with local magnitude ML 4.1 occurred in the central part of the Gulf of Suez. Six months later, on 1 June 2013, another earthquake with local magnitude ML 5.1 took place at the same epicenter but at a different depth. These two perceptible events were recorded and located by the Egyptian National Seismological Network (ENSN) and additional networks in the region. The purpose of this study is to determine the focal mechanisms and source parameters of both earthquakes in order to analyze their tectonic relation. We determine the focal mechanisms by applying moment tensor inversion and first-motion analysis of P- and S-waves. Both sources reveal oblique focal mechanisms with normal faulting and strike-slip components on differently oriented faults. The source mechanism of the larger event on 1 June, in combination with the location of the aftershock sequence, indicates left-lateral slip on a N-S striking fault structure at 21 km depth, in conformity with the NE-SW extensional Shmin (orientation of minimum horizontal compressional stress) and the local fault pattern. On the other hand, the smaller earthquake on 22 January, with a shallower hypocenter at 16 km depth, seems to have occurred on a NE-SW striking fault plane sub-parallel to Shmin. Thus, an energy release on a transfer fault connecting dominant rift-parallel structures might have resulted in a stress transfer, triggering the later ML 5.1 earthquake. Following Brune's model and using displacement spectra, we calculate the dynamic source parameters for the two events. The estimated source parameters for the 22 January 2013 and 1 June 2013 earthquakes are fault length (470 and 830 m), stress drop (1.40 and 2.13 MPa), and seismic moment (5.47 × 10^21 and 6.30 × 10^22 dyn cm), corresponding to moment magnitudes of MW 3.8 and 4.6, respectively.
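The conversion from the quoted seismic moments to moment magnitude follows the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N m. A minimal sketch; small differences from the quoted Mw values can arise from rounding and from the choice of constant convention:

```python
import math

def moment_magnitude(m0_dyn_cm):
    """Hanks-Kanamori moment magnitude; M0 given in dyn cm
    (1 dyn cm = 1e-7 N m)."""
    m0_nm = m0_dyn_cm * 1.0e-7
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

for m0 in (5.47e21, 6.30e22):       # seismic moments quoted in the text
    print(f"M0 = {m0:.2e} dyn cm -> Mw {moment_magnitude(m0):.2f}")
```

The first moment reproduces the quoted MW 3.8; the second comes out near 4.5, within rounding-convention distance of the quoted 4.6.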
NASA Astrophysics Data System (ADS)
Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.
2017-04-01
We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and for modelling tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of rupture parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, for which we present the earthquake mechanism and rupture model, from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems.
This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
Nonlinear waves in earth crust faults: application to regular and slow earthquakes
NASA Astrophysics Data System (ADS)
Gershenzon, Naum; Bambakidis, Gust
2015-04-01
The genesis, development and cessation of regular earthquakes continue to be major problems of modern geophysics. How are earthquakes initiated? What factors determine the rupture velocity, slip velocity, rise time and geometry of rupture? How do accumulated stresses relax after the main shock? These and other questions still need to be answered. In addition, slow slip events have attracted much attention as an additional source for monitoring fault dynamics. Recently discovered phenomena such as deep non-volcanic tremor (NVT), low frequency earthquakes (LFE), very low frequency earthquakes (VLF), and episodic tremor and slip (ETS) have enhanced and complemented our knowledge of fault dynamics. At the same time, these phenomena give rise to new questions about their genesis, properties and relation to regular earthquakes. We have developed a model of macroscopic dry friction which efficiently describes laboratory frictional experiments [1], basic properties of regular earthquakes including post-seismic stress relaxation [3], the occurrence of ambient and triggered NVT [4], and ETS events [5, 6]. Here we will discuss the basics of the model and its geophysical applications. References [1] Gershenzon N.I. & G. Bambakidis (2013) Tribology International, 61, 11-18, http://dx.doi.org/10.1016/j.triboint.2012.11.025 [2] Gershenzon, N.I., G. Bambakidis and T. Skinner (2014) Lubricants 2014, 2, 1-x manuscripts; doi:10.3390/lubricants20x000x; arXiv:1411.1030v2 [3] Gershenzon N.I., Bykov V. G. and Bambakidis G., (2009) Physical Review E 79, 056601 [4] Gershenzon, N. I, G. Bambakidis, (2014a), Bull. Seismol. Soc. Am., 104, 4, doi: 10.1785/0120130234 [5] Gershenzon, N. I., G. Bambakidis, E. Hauser, A. Ghosh, and K. C. Creager (2011), Geophys. Res. Lett., 38, L01309, doi:10.1029/2010GL045225. [6] Gershenzon, N.I. and G. Bambakidis (2014) Bull. Seismol. Soc. Am., (in press); arXiv:1411.1020
Discovery of non-volcanic tremor and contribution to earth science by NIED Hi-net
NASA Astrophysics Data System (ADS)
Obara, K.
2015-12-01
Progress in seismic observation networks has brought breakthroughs in earth science in each era. The high-sensitivity seismograph network (Hi-net) was constructed by the National Research Institute for Earth Science and Disaster Prevention (NIED) as a national project in order to improve microearthquake detection capability after the disastrous 1995 Kobe earthquake. Hi-net has contributed not only to the monitoring of seismicity but also to many research results, such as the discoveries of non-volcanic tremor and other slow earthquakes. More importantly, we have continued to make efforts to monitor all of the data visually and effectively. The discovery of tremor in southwest Japan stimulated PGC researchers to search for a similar seismic signature in Cascadia, because of features common to the tremor in Japan and the slow slip events (SSE) they had already discovered in Cascadia. Eventually, episodic tremor and slip (ETS) was discovered there, and the SSE associated with tremor was also detected in Japan using tilt data measured by the high-sensitivity accelerometers attached to Hi-net stations. These coupled phenomena strengthened the connection between seismology and geodesy. The widely separated spectra of tremor and SSE motivated us to search for intermediate phenomena, and we found very low frequency earthquakes during ETS episodes. These slow earthquakes obey a scaling law different from that of ordinary earthquakes; this difference is very important for resolving earthquake physics. Hi-net is quite useful not only for three-dimensional imaging of the underground structure beneath the Japanese Islands, but also for resolving the deep Earth interior using teleseismic events or ambient noise, and for imaging the source rupture process of large earthquakes using back-projection analysis as a remote array. Hi-net will continue to supply unexpected new discoveries.
I expect that the installation of similar dense seismic arrays around the world will give us great opportunities to make further important discoveries and to explore new regimes in earth science.
Sedimentary earthquake records in the İzmit Gulf, Sea of Marmara, Turkey
NASA Astrophysics Data System (ADS)
Çağatay, M. N.; Erel, L.; Bellucci, L. G.; Polonia, A.; Gasperini, L.; Eriş, K. K.; Sancar, Ü.; Biltekin, D.; Uçarkuş, G.; Ülgen, U. B.; Damcı, E.
2012-12-01
Sedimentary earthquake records of the last 2400 a, including that of the devastating 17 August 1999 İzmit earthquake (Mw = 7.4), were studied in cores from the 210 m-deep central Karamürsel Basin of the İzmit Gulf in the eastern Sea of Marmara, using laser grain-size, physical properties, stable O and C isotope and XRF Core Scanner analyses, and dated by radionuclide and radiocarbon methods. The earthquake records are represented by turbidite-homogenite mass-flow units (THU) that commonly contain a basal coarse layer, a middle laminated silt layer and an overlying homogeneous mud layer. The coarse basal part has a sharp and sometimes scoured lower boundary, and includes multiple coarse (sand/silt) layers or laminae showing normal size grading. Multiple coarse layers and occasional bi-directional cross-bedding suggest deposition from bed load during water-column oscillations, or a seiche effect. The grain-size characteristics of the overlying laminated silt and homogeneous mud units indicate deposition from weak oscillating currents and from homogeneous suspension, respectively. High Mn values just below the bases of the THUs suggest diagenetic enrichment at the oxic/anoxic redox boundary before the mass-flow event. A sharp decrease in Mn to very low values within the THUs suggests transient redox conditions following the mass flow. Variable geochemical compositions of the basal coarse layers indicate different sediment sources for different THUs. Eight sedimentary earthquake records observed over the last 2400 a in the İzmit Gulf can be confidently correlated with the historical earthquakes of 1999, 1509 AD (Ms = 7.2), 1296 AD (I = VII), 865 AD (I = VIII), 740 AD (I = VIII), 358 AD (I = IX), 268 AD (I = VIII), and 427 BC. This gives an earthquake recurrence time of ca. 300 a, with the interval between consecutive events ranging from 90 to 695 a.
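The quoted interval range follows directly from the event years listed above, taken youngest to oldest (a sketch; the negative year denotes BC, and the simple subtraction below ignores the absence of a year 0 in the calendar):

```python
# Correlated event years from the text, youngest to oldest (427 BC as -427).
dates = [1999, 1509, 1296, 865, 740, 358, 268, -427]

# Inter-event times in years between consecutive events.
intervals = [a - b for a, b in zip(dates, dates[1:])]
print(intervals)
print(min(intervals), max(intervals))  # -> 90 695, matching the quoted range
```

Eight events over roughly 2400 years gives the ca. 300 a average recurrence quoted above, while the individual intervals scatter widely around that mean.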
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During recent decades, a few space experiments have revealed anomalous bursts of charged particles, mainly electrons with energies larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emission, which can cause the precipitation of electrons from the lower boundary of the inner radiation belt. Studies of these bursts have also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 toolkit and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS provides the possibility of testing different models of interaction between electromagnetic waves and trapped particles, defining the mechanism of interaction as well as shaping the area in which it takes place, assessing the effects of magnetic field perturbations on particle paths, performing back-tracking analysis, and modelling the interaction with electric fields. SEPS is at an advanced development stage, so that it can already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake and checking whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking compatibility with the position of the associated earthquake).
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pitarka, A.; Pyle, M. L.; Pasyanos, M.; Mellors, R. J.; Dodge, D. A.
2017-12-01
The relative amplitudes of seismic P-waves to S-waves are effective at identifying underground explosions among a background of natural earthquakes. These P/S methods appear to work best at frequencies above 2 Hz and at regional distances (>200 km). We illustrate this with a variety of historic nuclear explosion data as well as with the recent DPRK nuclear tests. However, the physical basis for the generation of explosion S-waves, and therefore the predictability of this P/S technique as a function of path, frequency and event properties such as size, depth, and geology, remains incompletely understood. A goal of current research, such as the Source Physics Experiments (SPE), is to improve our physical understanding of the mechanisms of explosion S-wave generation and advance our ability to numerically model and predict them. The SPE conducted six chemical explosions between 2011 and 2016 in the same borehole in granite in southern Nevada. The explosions were at a variety of depths and sizes, ranging from 0.1 to 5 tons TNT equivalent yield. The largest were observed at near regional distances, with P/S ratios comparable to much larger historic nuclear tests. If we control for material property effects, the explosions have very similar P/S ratios independent of yield or magnitude. These results are consistent with explosion S-waves coming mainly from conversion of P- and surface waves, and are inconsistent with source-size based models. A dense sensor deployment for the largest SPE explosion allowed this conversion to be mapped in detail. This is good news for P/S explosion identification, which can work well for very small explosions and may be ultimately limited by S-wave detection thresholds. The SPE also showed explosion P-wave source models need to be updated for small and/or deeply buried cases. We are developing new P- and S-wave explosion models that better match all the empirical data.
Historic nuclear explosion seismic data show that the medium in which the explosion takes place is quite important. These material property effects can surprisingly degrade the seismic waveform correlation of even closely spaced explosions in different media. The next phase of the SPE will contrast chemical explosions in dry alluvium with the prior SPE explosions in granite and with historic nuclear tests in a variety of media.
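In its simplest form, the P/S discriminant described above reduces to a ratio of band-limited amplitudes measured in P and S time windows. A toy sketch on a synthetic trace; the window times, band edges (2-8 Hz), and the trace itself are illustrative assumptions, not SPE processing parameters:

```python
import numpy as np

def band_rms(x, dt, f_lo, f_hi):
    """RMS amplitude of x restricted to the [f_lo, f_hi] frequency band."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, dt)
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    return np.sqrt(np.mean(np.fft.irfft(spec, x.size) ** 2))

def p_over_s(trace, dt, p_win, s_win, f_lo=2.0, f_hi=8.0):
    """High-frequency P/S amplitude ratio; windows given in seconds."""
    p = trace[int(round(p_win[0] / dt)):int(round(p_win[1] / dt))]
    s = trace[int(round(s_win[0] / dt)):int(round(s_win[1] / dt))]
    return band_rms(p, dt, f_lo, f_hi) / band_rms(s, dt, f_lo, f_hi)

# Synthetic "explosion-like" trace: strong 4 Hz P arrival, weaker S arrival.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
trace = 2.0 * np.sin(2 * np.pi * 4 * t) * ((t > 10) & (t < 12)) \
      + 1.0 * np.sin(2 * np.pi * 4 * t) * ((t > 30) & (t < 32))
print(f"P/S ~ {p_over_s(trace, dt, (10, 12), (30, 32)):.2f}")  # > 1: explosion-like
```

An earthquake-like trace with the amplitudes reversed would give a ratio below 1, which is the essence of the discriminant.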
SENSITIVITY OF STRUCTURAL RESPONSE TO GROUND MOTION SOURCE AND SITE PARAMETERS.
Safak, Erdal; Brebbia, C.A.; Cakmak, A.S.; Abdel Ghaffar, A.M.
1985-01-01
Designing structures to withstand earthquakes requires an accurate estimation of the expected ground motion. While engineers use the peak ground acceleration (PGA) to model the strong ground motion, seismologists use physical characteristics of the source and the rupture mechanism, such as fault length, stress drop, shear wave velocity, seismic moment, distance, and attenuation. This study presents a method for calculating response spectra from seismological models using random vibration theory. It then investigates the effect of various source and site parameters on peak response. Calculations are based on a nonstationary stochastic ground motion model, which can incorporate all the parameters both in frequency and time domains. The estimation of the peak response accounts for the effects of the non-stationarity, bandwidth and peak correlations of the response.
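A single response-spectrum ordinate of the kind described above is the peak response of a damped single-degree-of-freedom oscillator, u'' + 2*xi*wn*u' + wn^2*u = -ag(t). A minimal frequency-domain sketch; the resonant sine input and all parameter values are illustrative, and the paper itself obtains the peak statistically via random vibration theory rather than from one time history:

```python
import numpy as np

def psa(ag, dt, period, damping=0.05):
    """Pseudo-spectral acceleration wn^2 * max|u| of a damped SDOF oscillator
    driven by ground acceleration ag, solved in the frequency domain with
    zero padding to let the damped transient decay (suppressing wrap-around)."""
    wn = 2.0 * np.pi / period
    n = ag.size
    npad = 2 * n
    w = 2.0 * np.pi * np.fft.rfftfreq(npad, dt)
    H = -1.0 / (wn**2 - w**2 + 2j * damping * wn * w)   # U(w) = H(w) * Ag(w)
    u = np.fft.irfft(H * np.fft.rfft(ag, npad), npad)[:n]
    return wn**2 * np.max(np.abs(u))

# Sanity check: a 1 Hz oscillator driven at resonance by a unit 1 Hz sine
# should amplify toward ~1/(2*damping) = 10 at 5% damping.
dt = 0.005
t = np.arange(0.0, 40.0, dt)
ag = np.sin(2.0 * np.pi * 1.0 * t)
print(f"PSA at T = 1 s: {psa(ag, dt, 1.0):.1f} (input peak 1.0)")
```

Repeating the calculation over a sweep of periods yields the full response spectrum; the random-vibration-theory route replaces the time history with a ground-motion power spectrum and a peak-factor estimate.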
Infrasound associated with the deep M 7.3 northeastern China earthquake of June 28, 2002
NASA Astrophysics Data System (ADS)
Che, Il-Young; Kim, Geunyoung; Le Pichon, Alexis
2013-02-01
On 28 June 2002, a deep-focus (566 km) earthquake with a moment magnitude of 7.3 occurred in the China-Russia-North Korea border region. Despite its deep focus, the earthquake produced an infrasound signal that was observed by the remote infrasound array (CHNAR), 682 km from the epicenter, in South Korea. Coherent infrasound signals were detected sequentially at the receiver, with different arrival times and azimuths indicating that the signals were generated both near the epicenter and elsewhere. On the basis of the azimuths, arrival time measurements, and atmospheric ray simulation results, the source areas of the infrasonic signals that arrived earlier were located along the eastern coastal areas of North Korea and Russia, whereas the later signals were sourced throughout Japan. The geographically constrained, discrete distribution of the identified sources is explained by infrasound propagation effects caused by a westward zonal wind that was active when the event occurred. The amplitude of the deep quake's signal was equivalent to that of a shallow earthquake with a magnitude of approximately 5. This study expands the breadth of seismically-associated infrasound to include deep earthquakes, and also supports the possibility that infrasound measurements could help determine the depth of earthquakes.
Modeling potential tsunami sources for deposits near Unalaska Island, Aleutian Islands
NASA Astrophysics Data System (ADS)
La Selle, S.; Gelfenbaum, G. R.
2013-12-01
In regions with little seismic data and short historical records of earthquakes, we can use preserved tsunami deposits and tsunami modeling to infer if, when and where tsunamigenic earthquakes have occurred. The Aleutian-Alaska subduction zone in the region offshore of Unalaska Island is one such region where the historical and paleo-seismicity is poorly understood. This section of the subduction zone is not thought to have ruptured historically in a large earthquake, leading some to designate the region as a seismic gap. By modeling various historical and synthetic earthquake sources, we investigate whether or not tsunamis that left deposits near Unalaska Island were generated by earthquakes rupturing through Unalaska Gap. Preliminary field investigations near the eastern end of Unalaska Island have identified paleotsunami deposits well above sea level, suggesting that multiple tsunamis in the last 5,000 years have flooded low-lying areas over 1 km inland. Other indicators of tsunami inundation, such as a breached cobble beach berm and driftwood logs stranded far inland, were tentatively attributed to the March 9, 1957 tsunami, which had reported runup of 13 to 22 meters on Umnak and Unimak Islands, to the west and east of Unalaska. In order to determine if tsunami inundation could have reached the runup markers observed on Unalaska, we modeled the 1957 tsunami using GeoCLAW, a numerical model that simulates tsunami generation, propagation, and inundation. The published rupture orientation and slip distribution for the MW 8.6, 1957 earthquake (Johnson et al., 1994) was used as the tsunami source, which delineates a 1200 km long rupture zone along the Aleutian trench from Delarof Island to Unimak Island. Model results indicate that runup and inundation from this particular source are too low to account for the runup markers observed in the field, because slip is concentrated in the western half of the rupture zone, far from Unalaska. 
To ascertain if any realistic, earthquake-generated tsunami could account for the observed runup, we modeled tsunami inundation from synthetic MW 9.2 earthquakes rupturing along the trench between Atka and Unimak Islands, which indicate that the deposit runup observed on Unalaska is possible from a source of this size and orientation. Further modeling efforts will examine the April 1, 1946 Aleutian tsunami, as well as other synthetic tsunamigenic earthquake sources of varying size and location, which may provide insight into the rupture history of the Aleutian-Alaska subduction zone, especially in combination with more data from paleotsunami deposits. Johnson, Jean M., Tanioka, Yuichiro, Ruff, Larry J., Satake, Kenji, Kanamori, Hiroo, Sykes, Lynn R. "The 1957 great Aleutian earthquake." Pure and Applied Geophysics 142.1 (1994): 3-28.
Seismo-Acoustic Generation by Earthquakes and Explosions and Near-Regional Propagation
2009-09-30
Earthquakes generate infrasound. Three infrasonic arrays in Utah (BGU, EPU, and NOQ), one in Nevada (NVIAR), and one in Wyoming (PDIAR) recorded infrasound from the Wells earthquake; most studies documenting earthquake-generated infrasound are based on such array observations (cf. "The F-detector Revisited: An Improved Strategy for Signal Detection at Seismic and Infrasound Arrays", Bull. Seism. Soc. Am., 2009).
NASA Astrophysics Data System (ADS)
Necmioglu, O.; Meral Ozel, N.
2014-12-01
Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. The complex tectonic setting makes accurate a priori assumptions of earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and the lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a "maximum earthquake" predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as for more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic tsunami hazard analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) in each 0.5° x 0.5° bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6 h duration with coarse (2 arc-min) and medium (1 arc-min) grid resolutions have been run at EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source defined, using the shallow-water finite-difference SWAN code (Mader, 2004), for the magnitude range from 6.5 to the Mwmax defined for that bin, with a Mw increment of 0.1.
Results show that not only earthquakes resembling well-known historical events such as AD 365 or AD 1303 in the Hellenic Arc, but also earthquakes with lower magnitudes, contribute to the tsunami hazard in the study area.
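The magnitude discretisation described above (Mw from 6.5 up to each bin's Mwmax in 0.1 increments) can be sketched as follows; the example Mwmax value is illustrative, not taken from any particular bin:

```python
def scenario_magnitudes(mw_max, mw_min=6.5, step=0.1):
    """Scenario magnitudes from mw_min to mw_max inclusive, in `step`
    increments (rounded to one decimal to avoid float drift)."""
    n = int(round((mw_max - mw_min) / step)) + 1
    return [round(mw_min + i * step, 1) for i in range(n)]

print(scenario_magnitudes(7.0))   # -> [6.5, 6.6, 6.7, 6.8, 6.9, 7.0]
```

Applied to all 402 source bins, this enumeration defines the full set of deterministic scenarios simulated in the study.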
Implications of ground water chemistry and flow patterns for earthquake studies.
Guangcai, Wang; Zuochen, Zhang; Min, Wang; Cravotta, Charles A; Chenglong, Liu
2005-01-01
Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for delta18O, deltaD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 degrees C to 160 degrees C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas.
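Reservoir temperatures like those quoted above are commonly estimated from dissolved silica, e.g. with the Fournier (1977) quartz geothermometer, T(°C) = 1309 / (5.19 - log10 C) - 273.15, where C is the SiO2 concentration in mg/kg. A minimal sketch; the silica concentration below is an invented illustrative value, not a measurement from this study:

```python
import math

def quartz_temperature_c(sio2_mg_per_kg):
    """Fournier (1977) quartz (no steam loss) geothermometer, valid for
    reservoir temperatures of roughly 25-250 C."""
    return 1309.0 / (5.19 - math.log10(sio2_mg_per_kg)) - 273.15

print(f"{quartz_temperature_c(100.0):.0f} C")  # illustrative 100 mg/kg sample
```

Combined with a local geothermal gradient, such a reservoir temperature yields the circulation-depth estimates of the kind reported above.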
NASA Technical Reports Server (NTRS)
Pulinets, S.; Ouzounov, D.
2010-01-01
The paper presents a complex multidisciplinary approach to clarifying the nature of short-term earthquake precursors observed in the atmosphere, in atmospheric electricity, and in the ionosphere and magnetosphere. Our approach is based on the most fundamental principles of tectonics: an earthquake is the ultimate result of the relative movement of tectonic plates and blocks of different sizes. Different kinds of gases (methane, helium, hydrogen, and carbon dioxide) leaking from the crust, including from underwater seismically active faults, can serve as carrier gases for radon. Radon's action on atmospheric gases is similar to the effect of cosmic rays in the upper atmosphere: air ionization and the formation of water condensation nuclei by ions. Condensation of water vapor is accompanied by latent heat release, which is the main cause of the observed atmospheric thermal anomalies. The formation of large ion clusters changes the conductivity of the atmospheric boundary layer and the parameters of the global electric circuit over active tectonic faults. Variations of atmospheric electricity are, in turn, the main source of ionospheric anomalies over seismically active areas. The Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model can explain most of these events as a synergy between different ground-surface, atmosphere and ionosphere processes and anomalous variations, usually termed short-term earthquake precursors. A newly developed approach, the Interdisciplinary Space-Terrestrial Framework (ISTF), can also provide verification of these precursory processes in seismically active regions. The main outcome of this paper is a unified concept for the systematic validation of different types of earthquake precursors, united on a physical basis in one common theory.
NASA Astrophysics Data System (ADS)
Kumar, A.; Mitra, S.; Suresh, G.
2014-12-01
The Eastern Himalayan System (east of 88°E) is distinct from the rest of the India-Eurasia continental collision, due to a wider zone of distributed deformation, oblique convergence across two orthogonal plate boundaries and the near absence of foreland basin sedimentary strata. To understand the seismotectonics of this region we study the spatial distribution and source mechanisms of earthquakes originating within the Eastern Himalaya, northeast India and the Indo-Burman Convergence Zone (IBCZ). We compute focal mechanisms of 32 moderate-to-large earthquakes (mb >= 5.4) by modeling teleseismic P- and SH-waveforms from GDSN stations using a least-squares inversion algorithm, and of 7 small-to-moderate earthquakes (3.5 <= mb < 5.4) by modeling local P- and S-waveforms from the NorthEast India Telemetered Network using a non-linear grid search algorithm. We also include source mechanisms from previous studies, either computed by waveform inversion or by first-motion polarity from analog data. The depth distribution of the modeled earthquakes reveals that the seismogenic layer beneath northeast India is ~45 km thick. From the source mechanisms we observe that moderate earthquakes in northeast India are spatially clustered in five zones with distinct mechanisms: (a) thrust earthquakes within the Eastern Himalayan wedge, on north-dipping low-angle faults; (b) thrust earthquakes along the northern edge of the Shillong Plateau, on a high-angle south-dipping fault; (c) dextral strike-slip earthquakes along the Kopili fault zone, between the Shillong Plateau and Mikir Hills, extending southeast beneath the Naga fold belts; (d) dextral strike-slip earthquakes within the Bengal Basin, immediately south of the Shillong Plateau; and (e) deep-focus (>50 km) thrust earthquakes within the IBCZ. Combining these with GPS geodetic observations, it is evident that the N20E convergence between India and Tibet is accommodated as elastic strain both within the eastern Himalaya and in the regions surrounding the Shillong Plateau.
We hypothesize that the strike-slip earthquakes south of the Plateau occur on re-activated continental rifts paralleling the Eocene hinge zone. Distribution of earthquake hypocenters across the IBCZ reveal active subduction of the Indian plate beneath Burma micro-plate.
NASA Astrophysics Data System (ADS)
Ross, S.; Jones, L. M.; Wilson, R. I.; Bahng, B.; Barberopoulou, A.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Johnson, L. A.; Hansen, R. A.; Kirby, S. H.; Knight, E.; Knight, W. R.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Nicolsky, D.; Oglesby, D. D.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Suleimani, E. N.; Thio, H. K.; Titov, V. V.; Wein, A. M.; Whitmore, P.; Wood, N. J.
2012-12-01
The U.S. Geological Survey's Science Application for Risk Reduction (SAFRR) project, in collaboration with the California Geological Survey, the California Emergency Management Agency, the National Oceanic and Atmospheric Administration, and other agencies and institutions, is developing a Tsunami Scenario to describe in detail the impacts of a tsunami generated by a hypothetical, but realistic, M9 earthquake near the Alaska Peninsula. The overarching objective of SAFRR and its predecessor, the Multi-Hazards Demonstration Project, is to help communities reduce losses from natural disasters. As requested by emergency managers and other community partners, a primary approach has been comprehensive, scientifically credible scenarios that start with a model of a geologic event and extend through estimates of damage, casualties, and societal consequences. The first product was the ShakeOut scenario, addressing a hypothetical earthquake on the southern San Andreas fault, which spawned the successful Great California ShakeOut, an annual event and the nation's largest emergency preparedness exercise. That was followed by the ARkStorm scenario, which addresses California winter storms that surpass hurricanes in their destructive potential. The Tsunami Scenario's goals include developing advanced models of currents and inundation for the event; spurring research related to Alaskan earthquake sources; engaging port and harbor decision makers; understanding the economic impacts on the local, regional and national economy in both the short and long term; understanding the ecological, environmental, and societal impacts of coastal inundation; and creating enhanced communication products for decision-making before, during, and after a tsunami event. The state of California, through CGS and Cal EMA, is using the Tsunami Scenario as an opportunity to evaluate policies regarding tsunami impact. 
The scenario will serve as a long-lasting resource to teach preparedness and inform decision makers. The SAFRR Tsunami Scenario is organized by a coordinating committee with several working groups, including Earthquake Source, Paleotsunami/Geology Field Work, Tsunami Modeling, Engineering and Physical Impacts, Ecological Impacts, Emergency Management and Education, Social Vulnerability, Economic and Business Impacts, and Policy. In addition, the tsunami scenario process is being assessed and evaluated by researchers from the Natural Hazards Center at the University of Colorado at Boulder. The source event, defined by the USGS' Tsunami Source Working Group, is an earthquake similar to the 2011 Tohoku event, but set in the Semidi subduction sector, between Kodiak Island and the Shumagin Islands off the Pacific coast of the Alaska Peninsula. The Semidi sector is probably late in its earthquake cycle and comparisons of the geology and tectonic settings between Tohoku and the Semidi sector suggest that this location is appropriate. Tsunami modeling and inundation results have been generated for many areas along the California coast and elsewhere, including current velocity modeling for the ports of Los Angeles, Long Beach, and San Diego, and Ventura Harbor. Work on impacts to Alaska and Hawaii will follow. Note: Costas Synolakis (USC) is also an author of this abstract.
Distribution of scientific raw data of human-caused earthquakes
NASA Astrophysics Data System (ADS)
Klose, C. D.
2012-12-01
The second edition of the catalog of earthquakes caused by humans, recently published in the Journal of Seismology, is presented. The earthquakes, with seismic magnitudes of up to M=7.9, have been documented and published since the first half of the 20th century. They were caused by geomechanical pollution due to artificial water reservoir impoundments, underground and open-pit mining, coastal management, hydrocarbon production and fluid injections/extractions. The overall data set contains physical properties that were collected and initially summarized in an unpublished earthquake catalog presented at a meeting of the Seismological Society of America in 2006. The catalog draws on more than 500 scientific research papers, conference proceedings and abstracts. The overall data set is made available to the public at www.cdklose.com, which includes actual properties and features, figures that explain physical relationships, and statistical correlation and regression tests. The data set can be used by students, educators, the general public or scientists from other disciplines who are interested in the environmental hazard of human-caused earthquakes.
1986-06-01
APPENDIX A: EARTHQUAKES AND GEOLOGY OF THE BARKLEY DAM AREA IN RELATION TO THE NEW MADRID EARTHQUAKE REGION. Barkley Dam is about 115 km from the source area of the New Madrid earthquakes of 1811-1812. Four major earthquakes are deduced to have occurred (Street and...), along with hundreds of aftershocks, a dozen of which were felt over much of the central United States. Other major earthquakes that have happened in the New Madrid
Overview of Historical Earthquake Document Database in Japan and Future Development
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Satake, K.
2014-12-01
In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, because they are contaminated by low-reliability historical records and are difficult to search by keyword or date. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its compilers investigated the source books or original texts of the historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast of Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed a text database and a seismic intensity database. These are now available on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to that of the previous ones. We also plan to combine this database with a liquefaction-traces database, to be constructed by another research program, by adding the location information described in the historical documents. 
The completed database can be used to estimate the distributions of seismic intensities and tsunami heights.
Earthquake sources near Uturuncu Volcano
NASA Astrophysics Data System (ADS)
Keyson, L.; West, M. E.
2013-12-01
Uturuncu, located in southern Bolivia near the Chile and Argentina border, is a dacitic volcano that was last active 270 ka. It is part of the Altiplano-Puna Volcanic Complex, which spans 50,000 km² and comprises a series of ignimbrite flare-ups since ~23 Ma. Two sets of evidence suggest that the region is underlain by a significant magma body. First, seismic velocities show a low-velocity layer consistent with a magmatic sill below depths of 15-20 km. This inference is corroborated by high electrical conductivity between 10 km and 30 km depth. This magma body, the so-called Altiplano-Puna Magma Body (APMB), is the likely source of volcanic activity in the region. InSAR studies show that during the 1990s, the volcano experienced an average uplift of about 1 to 2 cm per year. The deformation is consistent with an expanding source at depth. Though the Uturuncu region exhibits high rates of crustal seismicity, any connection between the inflation and the seismicity is unclear. We investigate the root causes of these earthquakes using a temporary network of 33 seismic stations - part of the PLUTONS project. Our primary approach is based on hypocenter locations and magnitudes paired with correlation-based relative relocation techniques. We find a strong tendency toward earthquake swarms that cluster in space and time. These swarms often last a few days and consist of numerous earthquakes with similar source mechanisms. Most seismicity occurs in the top 10 kilometers of the crust and is characterized by well-defined phase arrivals and significant high frequency content. The frequency-magnitude relationship of this seismicity demonstrates b-values consistent with tectonic sources. There is a strong clustering of earthquakes around the Uturuncu edifice. Earthquakes elsewhere in the region align in bands striking northwest-southeast, consistent with regional stresses.
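The frequency-magnitude (Gutenberg-Richter) b-value invoked in the abstract above is routinely estimated from a catalog with the maximum-likelihood formula of Aki (1965). A minimal sketch in Python, assuming the catalog is complete above a threshold magnitude Mc (the catalog values below are illustrative, not data from the study):

```python
import math

def b_value_mle(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965): b = log10(e) / (mean(M) - Mc),
    valid for a catalog complete above the magnitude of completeness Mc."""
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)

# illustrative catalog: mean magnitude 2.5 above Mc = 2.0 gives b ~ 0.87,
# in the range typically read as "tectonic" rather than volcanic swarm activity
catalog = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
b = b_value_mle(catalog, 2.0)
```

Tectonic seismicity typically shows b near 1, while some volcanic swarms show markedly higher values, which is the kind of discrimination the abstract refers to.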
Earthquake source nucleation process in the zone of a permanently creeping deep fault
NASA Astrophysics Data System (ADS)
Lykov, V. I.; Mostryukov, A. O.
2008-10-01
The worldwide practice of earthquake prediction, whose beginning relates to the 1970s, shows that spatial manifestations of various precursors under real seismotectonic conditions are very irregular. As noted in [Kurbanov et al., 1980], zones of bending, intersection, and branching of deep faults, where conditions are favorable for increasing tangential tectonic stresses, serve as “natural amplifiers” of precursory effects. The earthquake of September 28, 2004, occurred on the Parkfield segment of the San Andreas deep fault in the area of a local bending of its plane. The fault segment about 60 km long and its vicinities are the oldest prognostic area in California. Results of observations before and after the earthquake were promptly analyzed and published in a special issue of Seismological Research Letters (2005, Vol. 76, no. 1). We have an original method enabling the monitoring of the integral rigidity of seismically active rock massifs. The integral rigidity is determined from the relative numbers of brittle and viscous failure acts during the formation of source ruptures of background earthquakes in a given massif. Fracture mechanisms are diagnosed from the steepness of the first arrival of the direct P wave. Principles underlying our method are described in [Lykov and Mostryukov, 1996, 2001, 2003]. Results of monitoring have been directly displayed at the site of the Laboratory (
An Earthquake Rupture Forecast model for central Italy submitted to CSEP project
NASA Astrophysics Data System (ADS)
Pace, B.; Peruzza, L.
2009-04-01
We defined a seismogenic source model for central Italy and computed the relative forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three layers of sources: the first collects the individual faults liable to generate major earthquakes (M > 5.5); the second is given by the instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ~< 5.0). The third layer utilizes all the instrumental earthquakes and the historical events not correlated to known structures (4.5
Earthquakes: Predicting the unpredictable?
Hough, Susan E.
2005-01-01
The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are nonetheless fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.
NASA Astrophysics Data System (ADS)
Zeng, Hai-Rong; Song, Hui-Zhen
1999-05-01
Based on a three-dimensional joint finite element method, this paper discusses the theory and methodology of inversion of geodetic data. The FEM and inversion formulae are given in detail, and a related code is developed. Using Green's functions from the 3-D FEM, we invert geodetic measurements of coseismic deformation of the 1989 MS=7.1 Loma Prieta earthquake to determine its source mechanism. The result indicates that the slip on the fault plane is very heterogeneous. The maximum slip and shear stress are located about 10 km northwest of the earthquake source; the stress drop is somewhat more than 1 MPa.
Modelling the Time Dependence of Frequency Content of Long-period Volcanic Earthquakes
NASA Astrophysics Data System (ADS)
Jousset, P.; Neuberg, J. W.
2001-12-01
Broad-band seismic networks provide a powerful tool for the observation and analysis of volcanic earthquakes. The amplitude spectrogram allows us to follow the frequency content of these signals with time. Observed amplitude spectrograms of long-period volcanic earthquakes display distinct spectral lines, sometimes varying by several Hertz over time spans of minutes to hours. We first present several examples associated with various phases of volcanic activity at Soufrière Hills volcano, Montserrat. Then, we present and discuss two mechanisms to explain such frequency changes in the spectrograms: (i) change of physical properties within the magma and, (ii) change in the triggering frequency of repeated sources within the conduit. We use 2D and 3D finite-difference modelling methods to compute the propagation of seismic waves in simplified volcanic structures: (i) we model the gliding spectral lines by introducing continuously changing magma properties during the wavefield computation; (ii) we explore the resulting pressure distribution within the conduit and its potential role in triggering further events. We obtain constraints on both amplitude and time-scales for changes of magma properties that are required to model gliding lines in amplitude spectrograms.
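The gliding spectral lines described above are read off an amplitude spectrogram as the peak-frequency ridge in successive short-time windows. A minimal pure-Python sketch on a synthetic signal whose frequency glides upward (the plain DFT, window length, and hop size are generic choices for illustration, not parameters from the study):

```python
import math

def window_peak_freq(seg, fs):
    """Frequency of the largest-amplitude DFT bin of one windowed segment."""
    n = len(seg)
    best_k, best_amp = 1, -1.0
    for k in range(1, n // 2):
        re = sum(seg[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(seg[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        amp = math.hypot(re, im)
        if amp > best_amp:
            best_k, best_amp = k, amp
    return best_k * fs / n

def spectrogram_ridge(signal, fs, win=128, hop=64):
    """Peak frequency per window: a gliding spectral line shows up as a drifting ridge."""
    return [window_peak_freq(signal[s:s + win], fs)
            for s in range(0, len(signal) - win + 1, hop)]

fs = 50.0
t = [i / fs for i in range(1000)]  # 20 s of synthetic data
# chirp with instantaneous frequency 1 + 0.2*t Hz: glides from 1 Hz to 5 Hz
sig = [math.sin(2 * math.pi * (1.0 + 0.1 * x) * x) for x in t]
ridge = spectrogram_ridge(sig, fs)  # monotonically drifting peak frequency
```

In real data the same ridge-tracking idea is applied to band-limited volcanic events, and the drift of the ridge over minutes to hours is the "gliding line".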
Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan
Satake, Kenji
2018-01-01
Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionally large tsunamis relative to seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non-double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of Mzx, Mzy, and Mtensile of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved. PMID:29740604
Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan.
Fukao, Yoshio; Sandanbata, Osamu; Sugioka, Hiroko; Ito, Aki; Shiobara, Hajime; Watada, Shingo; Satake, Kenji
2018-04-01
Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionally large tsunamis relative to seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non-double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of Mzx, Mzy, and Mtensile of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved.
Research on Collection of Earthquake Disaster Information from the Crowd
NASA Astrophysics Data System (ADS)
Nian, Z.
2017-12-01
In China, the assessment of earthquake disaster information is mainly based on inversion of the seismic source mechanism and on pre-calculated population data models; the actual information about an earthquake disaster is usually collected through government departments, and both the accuracy and the speed of this process need to be improved. In a massive earthquake like the recent one in Mexico, the telecommunications infrastructure on the ground was damaged, and in bad weather the quake zone was difficult to observe by satellites and aircraft; only a little information was sent out through another country's maritime satellite, so the timely and effective organization of disaster relief was seriously affected. Now that Chinese communication satellites are in orbit, people no longer rely only on ground telecom base stations to communicate with the outside world, open web pages, log in to social networking sites, release information, and transmit images and videos. This paper establishes an earthquake information collection system in which the public can participate. Through popular social platforms and other information sources, the public can take part in the collection of earthquake information and supply information from the quake zone, including photos and videos, especially material acquired by unmanned aerial vehicles (UAVs) after an earthquake; the public can use computers, portable terminals, or mobile text messages to contribute. In the system, the information is divided into basic earthquake-zone information, earthquake disaster reduction information, earthquake site information, post-disaster reconstruction information, etc., which are processed and put into a database. 
The quality of the data is analyzed using multi-source information and is checked against local public review of it, to supplement the data collected by government departments in a timely way and to calibrate simulation results, which will better guide disaster relief scheduling and post-disaster reconstruction. In the future, we will work to raise public awareness, cultivate a consciousness of public participation, and improve the quality of the data the public supplies.
Long-Period Ground Motion due to Near-Shear Earthquake Ruptures
NASA Astrophysics Data System (ADS)
Koketsu, K.; Yokota, Y.; Hikima, K.
2010-12-01
Long-period ground motion has become an increasingly important consideration because of the recent rapid increase in the number of large-scale structures, such as high-rise buildings and large oil storage tanks. Large subduction-zone earthquakes and moderate to large crustal earthquakes can generate far-source long-period ground motions in distant sedimentary basins with the help of path effects. Near-fault long-period ground motions are generated, for the most part, by the source effects of forward rupture directivity (Koketsu and Miyake, 2008). This rupture directivity effect is maximized in the direction of fault rupture when the rupture velocity is nearly equal to the shear-wave velocity around the source fault (Dunham and Archuleta, 2005). Such near-shear rupture was found to have occurred during the 2008 Mw 7.9 Wenchuan earthquake at the eastern edge of the Tibetan plateau (Koketsu et al., 2010). The variance of waveform residuals in a joint inversion of teleseismic and strong motion data was minimized when we adopted a rupture velocity of 2.8 km/s, which is close to the shear wave velocity of 2.6 km/s around the hypocenter. We also found near-shear rupture during the 2010 Mw 6.9 Yushu earthquake (Yokota et al., 2010). The optimum rupture velocity for an inversion of teleseismic data is 3.5 km/s, which is almost equal to the shear wave velocity around the hypocenter. In addition, since supershear rupture was found during the 2001 Mw 7.8 Central Kunlun earthquake (Bouchon and Vallee, 2003), such fast earthquake rupture may be a characteristic of the eastern Tibetan plateau. The huge damage in Yingxiu and Beichuan from the 2008 Wenchuan earthquake, and damage heavier than expected in the county seat of Yushu from the medium-sized Yushu earthquake, can be attributed to the maximum rupture directivity effect in the rupture direction due to near-shear earthquake ruptures.
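The forward-directivity amplification described above grows rapidly as the rupture velocity approaches the shear-wave velocity. A common first-order textbook expression for unilateral rupture is 1/(1 - (vr/vs)·cos θ); this is a simplification for illustration, not a formula computed in the study, and the velocity ratio below is an assumed example (note the expression diverges or breaks down as vr reaches or exceeds vs):

```python
import math

def directivity_factor(vr, vs, theta_deg):
    """First-order unilateral-rupture directivity amplification:
    1 / (1 - (vr/vs) * cos(theta)), theta measured from the rupture direction.
    Valid only for vr < vs; near-shear rupture (vr -> vs) drives it very large."""
    return 1.0 / (1.0 - (vr / vs) * math.cos(math.radians(theta_deg)))

# assumed example with vr/vs = 0.875 (vr = 2.8 km/s, vs = 3.2 km/s):
forward = directivity_factor(2.8, 3.2, 0.0)     # strongly amplified ahead of rupture
backward = directivity_factor(2.8, 3.2, 180.0)  # de-amplified behind it
```

The strong forward/backward asymmetry of this factor is what concentrates damage in the rupture direction, as the abstract argues for Yingxiu, Beichuan, and Yushu.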
Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.
2014-05-01
The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analyses, however, have shown that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to its basic heuristic limitations, standard PSHA estimates are by far unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility to efficiently compute synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of ground motion scenarios can be defined at both national and local scales, the latter considering the 2D and 3D heterogeneities of the medium traversed by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. 
At the scenario scale, quick parametric studies can easily be performed to understand the influence of the model characteristics on the computed ground shaking scenarios. For massive parametric tests, or for the repeated generation of large-scale hazard maps, the methodology can take advantage of more advanced computational platforms, ranging from GRID computing infrastructures to dedicated HPC clusters and Cloud computing. In such a way, scientists can deal efficiently with the variety and complexity of the potential earthquake sources, and perform parametric studies to characterize the related uncertainties. NDSHA provides realistic time series of expected ground motion readily applicable for seismic engineering analysis and other mitigation actions. The methodology has been successfully applied to strategic buildings, lifelines and cultural heritage sites, and for the purpose of seismic microzoning in several urban areas worldwide. A web application is currently being developed that facilitates access to the NDSHA methodology and the related outputs by end-users, who are interested in reliable territorial planning and in the design and construction of buildings and infrastructures in seismic areas. At the same time, the web application is also shaping up as an advanced educational tool to explore interactively how seismic waves are generated at the source, propagate inside structural models, and build up ground shaking scenarios. We illustrate the preliminary results obtained from a multiscale application of the NDSHA approach to the territory of India, zooming from large-scale hazard maps of ground shaking at bedrock to the definition of local-scale earthquake scenarios for selected sites in the Gujarat state (NW India). The study aims to provide the community (e.g. authorities and engineers) with advanced information for earthquake risk mitigation, which is particularly relevant to Gujarat in view of the rapid development and urbanization of the region.
The Earthquake‐Source Inversion Validation (SIV) Project
Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf
2016-01-01
Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.
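Quantitative waveform comparison of the kind the SIV project applies can be illustrated with the simplest such metric, a zero-lag normalized cross-correlation between an observed and a synthetic trace. The SIV platform uses more elaborate misfit measures; this is only a sketch of the basic idea, with made-up traces:

```python
import math

def normalized_cc(a, b):
    """Zero-lag normalized cross-correlation: +1 for identical waveform shape
    (regardless of amplitude scale), -1 for an inverted trace."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

obs = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0]
syn = [0.0, 2.0, 1.0, -1.0, -2.0, 0.0]  # same shape as obs, twice the amplitude
score = normalized_cc(obs, syn)
```

Ranking competing source models by such waveform scores, aggregated over many stations and components, is the essence of the benchmark comparisons described above.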
Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Satake, Kenji
2017-10-01
A complicated earthquake (Mw 7.8), in terms of rupture mechanism, occurred off the NE coast of the South Island, New Zealand, on 13 November 2016 (UTC), in a complex tectonic setting comprising a transitional strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with a zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source must be involved, but that it needs to be smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
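The period-based separation of landslide and earthquake contributions above works because short-period tsunami waves are dispersive while long-period waves travel at the shallow-water speed sqrt(g·h). A sketch that solves the linear dispersion relation ω² = g·k·tanh(kh) for the wavenumber by bisection (the depth and periods here are illustrative, not values from the study):

```python
import math

def phase_speed(period_s, depth_m, g=9.81):
    """Tsunami phase speed from the linear dispersion relation
    omega^2 = g * k * tanh(k * h), solved for wavenumber k by bisection."""
    omega = 2.0 * math.pi / period_s

    def f(k):  # monotonically increasing in k
        return g * k * math.tanh(k * depth_m) - omega ** 2

    lo, hi = 1e-9, 1.0
    while f(hi) < 0.0:  # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return omega / (0.5 * (lo + hi))

# in 2000 m of water a 30-min wave moves at essentially sqrt(g*h) ~ 140 m/s,
# while a 4-min (landslide-band) wave is noticeably slower, i.e. dispersive
c_long = phase_speed(1800.0, 2000.0)
c_short = phase_speed(240.0, 2000.0)
```

This frequency-dependent speed is why short-period landslide waves arrive with a distinct signature that survives in the spectral peak at a few minutes.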
NASA Astrophysics Data System (ADS)
Salaree, Amir; Okal, Emile A.
2018-04-01
We present a seismological and hydrodynamic investigation of the earthquake of 13 April 1923 at Ust'-Kamchatsk, Northern Kamchatka, which generated a more powerful and damaging tsunami than the larger event of 03 February 1923, thus qualifying as a so-called "tsunami earthquake". On the basis of modern relocations, we suggest that it took place outside the fault area of the mainshock, across the oblique Pacific-North America plate boundary, a model confirmed by a limited dataset of mantle waves, which also confirms the slow nature of the source, characteristic of tsunami earthquakes. However, numerical simulations for a number of legitimate seismic models fail to reproduce the sharply peaked distribution of tsunami wave amplitudes reported in the literature. By contrast, we can reproduce the distribution of reported wave amplitudes using an underwater landslide as a source of the tsunami, itself triggered by the earthquake inside the Kamchatskiy Bight.
GPS detection of ionospheric perturbations following the January 17, 1994, northridge earthquake
NASA Technical Reports Server (NTRS)
Calais, Eric; Minster, J. Bernard
1995-01-01
Sources such as atmospheric or buried explosions and shallow earthquakes producing strong vertical ground displacements generate pressure waves that propagate at infrasonic speeds in the atmosphere. At ionospheric altitudes, low-frequency acoustic waves couple to ionospheric gravity waves and induce variations in the ionospheric electron density. Global Positioning System (GPS) data recorded in Southern California were used to compute ionospheric electron content time series for several days preceding and following the January 17, 1994, Mw = 6.7 Northridge earthquake. An anomalous signal beginning several minutes after the earthquake, with time delays that increase with distance from the epicenter, was observed. The signal frequency and phase velocity are consistent with results from numerical models of atmospheric-ionospheric acoustic-gravity waves excited by seismic sources, as well as with previous electromagnetic sounding results. It is believed that these perturbations are caused by the ionospheric response to the strong ground displacement associated with the Northridge earthquake.
NASA Astrophysics Data System (ADS)
Wong, T. P.; Lee, S. J.; Gung, Y.
2017-12-01
Taiwan is located in one of the most tectonically active regions in the world. Rapid estimation of the spatial slip distribution of moderate-to-large earthquakes (Mw ≥ 6.0) is important for emergency response, so a real-time system is needed to provide a report immediately after an earthquake occurs. Earthquake activity in the vicinity of Taiwan is monitored by the Real-Time Moment Tensor Monitoring System (RMT), which provides rapid focal mechanisms and source parameters. In this study, we build on the RMT system to develop a near-real-time finite-fault source inversion system for moderate-to-large earthquakes in Taiwan. The system is triggered by the RMT system when an event of Mw ≥ 6.0 is detected. Based on the RMT report, our system automatically determines the fault dimensions, record length, and rise time. We adopt a single-segment fault plane with variable rake angle. Generalized ray theory is applied to calculate the Green's function for each subfault. The primary objective of the system is to provide a first-order image of the coseismic slip pattern and to identify the centroid location on the fault plane. The performance of the system has been successfully demonstrated on 23 large earthquakes that occurred in Taiwan. The results show excellent data fits and are consistent with solutions from other studies. The preliminary spatial slip distribution can be provided within 25 minutes of an earthquake's occurrence.
Probabilistic Seismic Hazard Maps for Ecuador
NASA Astrophysics Data System (ADS)
Mariniere, J.; Beauval, C.; Yepes, H. A.; Laurence, A.; Nocquet, J. M.; Alvarado, A. P.; Baize, S.; Aguilar, J.; Singaucho, J. C.; Jomard, H.
2017-12-01
A probabilistic seismic hazard study has been carried out for Ecuador, a country facing high seismic hazard both from megathrust subduction earthquakes and from moderate to large shallow crustal earthquakes. Building on the knowledge produced in recent years on historical seismicity, earthquake catalogs, active tectonics, geodynamics, and geodesy, several alternative earthquake recurrence models are developed. An area source model is first proposed, based on the seismogenic crustal and inslab sources defined in Yepes et al. (2016). A slightly different segmentation is proposed for the subduction interface with respect to Yepes et al. (2016). Three earthquake catalogs are used to account for the numerous uncertainties in the modeling of frequency-magnitude distributions. The resulting hazard maps highlight several source zones enclosing fault systems that exhibit low seismic activity, not representative of the geological and/or geodetic slip rates. Consequently, a fault model is derived, including faults with earthquake recurrence models inferred from geological and/or geodetic slip rate estimates. The geodetic slip rates on the set of simplified faults are estimated from a GPS horizontal velocity field (Nocquet et al. 2014); assumptions on the aseismic component of the deformation are required. Combining these alternative earthquake models in a logic tree, and using a set of selected ground-motion prediction equations adapted to Ecuador's different tectonic contexts, a mean hazard map is obtained. Hazard maps corresponding to the 16th and 84th percentiles are also derived, highlighting the zones where uncertainties on the hazard are highest.
Open Source Tools for Seismicity Analysis
NASA Astrophysics Data System (ADS)
Powers, P.
2010-12-01
The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic-Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori law and the Gutenberg-Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The JavaScript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
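As an example of the kind of catalog statistic such libraries expose, the Gutenberg-Richter b-value has a standard maximum-likelihood estimator (Aki-Utsu). The sketch below is in Python for illustration only (the libraries described above are JavaScript and Java), and the toy catalog is an assumption, not real data.

```python
import math

def b_value_mle(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc.
    The (mc - dm/2) term corrects for magnitude binning at width dm."""
    above = [m for m in mags if m >= mc]
    if not above:
        raise ValueError("no magnitudes at or above completeness")
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Toy catalog: 30 evenly spaced magnitudes from 2.0 to 4.9.
mags = [2.0 + 0.1 * k for k in range(30)]
b = b_value_mle(mags, mc=2.0)
```

In a real-time setting this estimate would be recomputed on a sliding window of the aftershock sequence, flagging departures from the long-term regional b-value.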
NASA Astrophysics Data System (ADS)
Zhao, Fengfan; Meng, Lingyuan
2016-04-01
The April 20, 2013 Ms 7.0 earthquake in Lushan, Sichuan province, China, occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The maximum intensity reached VIII to IX at Baoxing and Lushan, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process using the source mechanism and empirical relationships, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was developed to simulate the near-fault strong ground motion with the associated fault rupture properties at Baoxing and Lushan, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Moreover, we discuss the characteristics of the near-fault strong ground motion: the broadband synthetic seismograms for Baoxing and Lushan show larger peak values, shorter durations, and higher frequency content. This indicates that the near-fault strong ground motion was influenced by the higher effective stress drop and the asperity slip distribution on the fault plane. This work is financially supported by the Natural Science Foundation of China (Grant No. 41404045) and by Science for Earthquake Resilience of CEA (XH14055Y).
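The Brune circular source model mentioned above ties seismic moment, stress drop, and corner frequency together. A minimal sketch in SI units follows; the 3500 m/s shear-wave speed and the 3 MPa stress drop in the example are illustrative assumptions, not values from the study.

```python
import math

def mw_to_m0(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.1)

def brune_corner_frequency(m0, stress_drop, beta=3500.0):
    """Brune (1970) corner frequency in Hz for a circular source.
    m0 in N*m, stress_drop in Pa, shear-wave speed beta in m/s."""
    radius = (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)  # source radius, m
    return 2.34 * beta / (2.0 * math.pi * radius)

def brune_spectrum(f, omega0, fc):
    """Omega-squared displacement spectrum: flat below fc, f^-2 falloff above."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Example: an Mw 7.0 event with an assumed 3 MPa stress drop.
fc = brune_corner_frequency(mw_to_m0(7.0), 3.0e6)
```

A higher effective stress drop shrinks the source radius and pushes the corner frequency up, which is the sense in which the abstract links stress drop to higher-frequency near-fault motion.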
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and to subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal of this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component, with a weighting scheme based on their deviation from this shape, and then use this shape as an estimate of the earthquake source; (2) we compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case the impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing the stability of waveform analyses used for deep mantle anisotropy measurements.
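Of the deconvolution schemes compared above, water-level deconvolution is the simplest to sketch. The following is a generic frequency-domain version, not the study's implementation; the water_level fraction is an assumed tuning parameter.

```python
import numpy as np

def water_level_deconvolve(trace, source, water_level=0.01):
    """Frequency-domain water-level deconvolution.

    trace:       recorded seismogram (source wavelet * impulse response)
    source:      estimated source wavelet, same sampling interval
    water_level: fraction of the peak source power used as a floor,
                 stabilizing the spectral division at notches
    """
    n = len(trace)
    t = np.fft.rfft(trace, n)
    s = np.fft.rfft(source, n)
    power = np.abs(s) ** 2
    # Replace small denominators with the water level before dividing.
    denom = np.maximum(power, water_level * power.max())
    return np.fft.irfft(t * np.conj(s) / denom, n)
```

TV and L1 schemes replace the simple spectral floor with an optimization that promotes sparse, impulsive outputs, which is why they preserve sharp onsets better.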
USGS GNSS Applications to Earthquake Disaster Response and Hazard Mitigation
NASA Astrophysics Data System (ADS)
Hudnut, K. W.; Murray, J. R.; Minson, S. E.
2015-12-01
Rapid characterization of earthquake rupture is important during a disaster because it establishes which fault ruptured and the extent and amount of fault slip. These key parameters, in turn, can augment in situ seismic sensors for identifying disruption to lifelines as well as localized damage along the fault break. Differential GNSS station positioning, along with imagery differencing, are important methods for augmenting seismic sensors. During response to recent earthquakes (1989 Loma Prieta, 1992 Landers, 1994 Northridge, 1999 Hector Mine, 2010 El Mayor-Cucapah, 2012 Brawley Swarm, and 2014 South Napa earthquakes), GNSS co-seismic and post-seismic observations proved to be essential for rapid earthquake source characterization. Often, we find that GNSS results indicate key aspects of the earthquake source that would not have been known in the absence of GNSS data. Seismic, geologic, and imagery data alone, without GNSS, would miss important details of the earthquake source. That is, GNSS results provide important additional insight into the earthquake source properties, which in turn helps in understanding the relationship between shaking and damage patterns. GNSS also adds to understanding of the distribution of slip along strike and with depth on a fault, which can help determine possible lifeline damage due to fault offset, as well as the vertical deformation and tilt that are vitally important for gravitationally driven water systems. The GNSS processing work flow that took more than one week 25 years ago now takes less than one second. Formerly, portable receivers needed to be set up at a site, operated for many hours, then data retrieved, processed, and modeled by a series of manual steps. The establishment of continuously telemetered, continuously operating high-rate GNSS stations and the robust automation of all aspects of data retrieval and processing has led to sub-second overall system latency. 
Within the past few years, the final challenges of standardization and adaptation to the existing framework of the ShakeAlert earthquake early warning system have been met, such that real-time GNSS processing and input to ShakeAlert is now routine and in use. Ongoing adaptation and testing of algorithms remain the last step towards fully operational incorporation of GNSS into ShakeAlert by USGS and its partners.
NASA Astrophysics Data System (ADS)
WANG, X.; Wei, S.; Bradley, K. E.
2017-12-01
Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al., 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-size earthquakes (M > 4.5) to refine their source parameters, including locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth, and dip angle estimates are as small as 5 km, 2 km, and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or as reactivation of an old normal fault buried beneath the forearc basin. 
We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along strike variation. The refined megathrust geometry will improve our understanding of the tectonic setting in this region, and place further constraints on rupture processes of the hazardous megathrust.
Some comparisons between mining-induced and laboratory earthquakes
McGarr, A.
1994-01-01
Although laboratory stick-slip friction experiments have long been regarded as analogs to natural crustal earthquakes, the potential use of laboratory results for understanding the earthquake source mechanism has not been fully exploited because of essential difficulties in relating seismographic data to measurements made in the controlled laboratory environment. Mining-induced earthquakes, however, provide a means of calibrating the seismic data in terms of laboratory results because, in contrast to natural earthquakes, the causative forces as well as the hypocentral conditions are known. A comparison of stick-slip friction events in a large granite sample with mining-induced earthquakes in South Africa and Canada indicates both similarities and differences between the two phenomena. The physics of unstable fault slip appears to be largely the same for both types of events. For example, both laboratory and mining-induced earthquakes have very low seismic efficiencies η = τa/σ̄, where τa is the apparent stress and σ̄ is the average stress acting on the fault plane to cause slip; nearly all of the energy released by faulting is consumed in overcoming friction. In more detail, the mining-induced earthquakes differ from the laboratory events in the behavior of η as a function of seismic moment M0. Whereas for the laboratory events η ≈ 0.06 independent of M0, η depends quite strongly on M0 for each set of induced earthquakes, with 0.06 serving, apparently, as an upper bound. It seems most likely that this observed scaling difference is due to variations in slip distribution over the fault plane. In the laboratory, a stick-slip event entails homogeneous slip over a fault of fixed area. 
For each set of induced earthquakes, the fault area appears to be approximately fixed but the slip is inhomogeneous, due presumably to barriers (zones of no slip) distributed over the fault plane; at constant σ̄, larger events correspond to larger τa as a consequence of fewer barriers to slip. If the inequality τa/σ̄ ≤ 0.06 has general validity, then measurements of τa = μEa/M0, where μ is the modulus of rigidity and Ea is the seismically radiated energy, can be used to infer the absolute level of deviatoric stress at the hypocenter. © 1994 Birkhäuser Verlag.
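The stress inference described above is a two-line calculation: apparent stress from rigidity, radiated energy, and moment, then a lower bound on the average fault stress from the efficiency ceiling. The numbers in the example are illustrative, not values from the paper.

```python
def apparent_stress(mu, ea, m0):
    """Apparent stress tau_a = mu * Ea / M0 (SI units: Pa, J, N*m)."""
    return mu * ea / m0

def min_average_stress(tau_a, eta_max=0.06):
    """If eta = tau_a / sigma_bar <= eta_max, then sigma_bar >= tau_a / eta_max."""
    return tau_a / eta_max

# Illustrative numbers: mu = 30 GPa, Ea = 1e11 J, M0 = 1e15 N*m.
tau_a = apparent_stress(3.0e10, 1.0e11, 1.0e15)   # 3 MPa apparent stress
sigma_min = min_average_stress(tau_a)             # >= 50 MPa average stress
```

The leverage comes from the small efficiency: a modest apparent stress implies an average driving stress more than an order of magnitude larger.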
NASA Astrophysics Data System (ADS)
Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.
2013-05-01
We introduce a novel approach for imaging earthquake dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem, involved in the GA inverse method, uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data at periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and the focal mechanisms, we have introduced a statistical approach to generate a set of solution models so that the envelope of the corresponding synthetic waveforms explains the observed data as much as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico, earthquake and determined several fundamental source parameters that are in accordance with different and completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. 
Some parameters found for the Zumpango earthquake are: Δτ = 30.2 ± 6.2 MPa, Er = 0.68 ± 0.36 × 10^15 J, G = 1.74 ± 0.44 × 10^15 J, η = 0.27 ± 0.11, Vr/Vs = 0.52 ± 0.09, and Mw = 6.64 ± 0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity, and moment magnitude, respectively. [Figure: location of the Mw 6.5 intraslab Zumpango earthquake, station locations, and tectonic setting in central Mexico]
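The GA driving such an inversion can be sketched, in much-reduced serial form, as a real-coded genetic algorithm over a bounded parameter vector. The operators below (tournament selection, arithmetic crossover, Gaussian mutation, elitism) and all constants are generic illustrative choices, not the study's actual parallel implementation.

```python
import random

def genetic_search(misfit, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded GA. `bounds` is a list of (low, high) pairs, one
    per model parameter; `misfit` maps a parameter vector to a scalar
    to be minimized (e.g., a waveform misfit)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=misfit)
    for _ in range(generations):
        new_pop = [best[:]]  # elitism: carry the current best model over
        while len(new_pop) < pop_size:
            # tournament selection: best of three random individuals
            p1 = min(rng.sample(pop, 3), key=misfit)
            p2 = min(rng.sample(pop, 3), key=misfit)
            w = rng.random()  # arithmetic crossover weight
            child = []
            for i, (lo, hi) in enumerate(bounds):
                gene = w * p1[i] + (1.0 - w) * p2[i]
                gene += rng.gauss(0.0, 0.05 * (hi - lo))  # Gaussian mutation
                child.append(min(max(gene, lo), hi))      # clip to bounds
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=misfit)
    return best
```

In the full method each misfit evaluation is a forward dynamic rupture simulation, which is why the population is evaluated in parallel.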
A trial of reliable estimation of non-double-couple component of microearthquakes
NASA Astrophysics Data System (ADS)
Imanishi, K.; Uchide, T.
2017-12-01
Although most tectonic earthquakes are caused by shear failure, it has been reported that injection-induced seismicity and earthquakes occurring in volcanoes and geothermal areas contain non-double-couple (non-DC) components (e.g., Dreger et al., 2000). Small non-DC components are also beginning to be detected in tectonic earthquakes (e.g., Ross et al., 2015). In general, however, the non-DC component can be estimated with sufficient accuracy only for relatively large earthquakes. To gain further understanding of fluid-driven earthquakes and fault zone properties, it is important to estimate full moment tensors of many microearthquakes with high precision. At the last AGU meeting, we proposed a method that iteratively applies the relative moment tensor inversion (RMTI) (Dahm, 1996) to source clusters, improving each moment tensor as well as their relative accuracy. This new method overcomes the problem of RMTI that errors in the mechanisms of reference events lead to biased solutions for other events, while retaining the advantage of RMTI that source mechanisms can be determined without computing Green's functions. The procedure is briefly summarized as follows: (1) sample co-located multiple earthquakes with focal mechanisms, as initial solutions, determined by an ordinary method; (2) apply the RMTI to estimate the source mechanism of each event relative to those of the other events; (3) repeat step 2 for the modified source mechanisms until the reduction of the total residual converges. To confirm whether the method can resolve non-DC components, we conducted numerical tests on synthetic data. Amplitudes were computed assuming non-DC sources, amplified by factors between 0.2 and 4 to represent site effects, with 10% random noise added. As initial solutions in step 1, we gave DC sources with arbitrary strike, dip, and rake angles. 
In a test with eight sources at 12 stations, for example, all solutions were successively improved by iteration, and non-DC components were successfully resolved despite the fact that we gave DC sources as initial solutions. An application of the method to microearthquakes in a geothermal area in Japan will be presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
... modifications which would replace the existing source range and intermediate range excore detector systems with... excore detector systems with equivalent neutron monitoring systems. The new instrumentation will perform... the design earthquake, the double design earthquake, the Hosgri earthquake, and the loss-of-coolant...
NASA Astrophysics Data System (ADS)
Boyd, O. S.; Dreger, D. S.; Gritto, R.
2015-12-01
Enhanced Geothermal Systems (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. We investigate seismicity in the vicinity of the EGS development at The Geysers Prati-32 injection well to determine moment magnitude, focal mechanism, and kinematic finite-source models, with the goal of developing a rupture area scaling relationship for The Geysers and specifically for the Prati-32 EGS injection experiment. Thus far we have analyzed moment tensors of M ≥ 2 events, and are developing the capability to analyze the large numbers of events occurring as a result of the fluid injection and to push the analysis to smaller magnitude earthquakes. We have also determined finite-source models for five events ranging in magnitude from M 3.7 to 4.5. The scaling relationship between rupture area and moment magnitude of these events resembles a published empirical relationship derived for events from M 4.5 to 8.3. We plan to develop a scaling relationship in which moment magnitude and corner frequency are predictor variables for source rupture area constrained by the finite-source modeling. Inclusion of corner frequency in the empirical scaling relationship is proposed to account for possible variations in stress drop. If successful, we will use this relationship to extrapolate to the large numbers of events in the EGS seismicity cloud to estimate the coseismic fracture density. We will present the moment tensor and corner frequency results for the microearthquakes, and, for select events, finite-source models. Stress drop inferred from corner frequencies and from finite-source modeling will be compared.
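The abstract does not name the published magnitude-area relationship it compares against; one widely used example of such a relation is Wells and Coppersmith (1994), Mw = 4.07 + 0.98 log10(A) for all slip types. Inverting it for rupture area is a one-liner; the coefficients here are assumed purely for illustration.

```python
def rupture_area_km2(mw, a=4.07, b=0.98):
    """Invert Mw = a + b*log10(A) for rupture area A in km^2.
    Default coefficients are the Wells & Coppersmith (1994)
    all-slip-type values, used here only as an illustration."""
    return 10.0 ** ((mw - a) / b)

area = rupture_area_km2(6.0)  # area in km^2 implied by the assumed relation
```

Adding corner frequency as a second predictor, as proposed above, would let the estimated area shrink or grow for high- or low-stress-drop events of the same magnitude.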
Slip reactivation during the 2011 Tohoku earthquake: Dynamic rupture and ground motion simulations
NASA Astrophysics Data System (ADS)
Galvez, P.; Dalguer, L. A.
2013-12-01
The 2011 Mw 9 Tohoku earthquake generated such a vast geophysical dataset that the spatial-temporal evolution of the rupture process of a megathrust event can be studied with unprecedented resolution. Joint source inversions of teleseismic, near-source strong motion, and coseismic geodetic data, e.g. Lee et al. (2011), reveal evidence of a slip reactivation process in areas of very large slip. Slip snapshots of this source model show that after about 40 seconds the big patch above the hypocenter experienced an additional push of slip (reactivation) towards the trench. These two episodes of slip exhibited by the source inversions can create two well-distinguished waveform envelopes in the ground motion pattern, and seismograms of the KiK-net Japanese network indeed contain this pattern: for instance, a seismic station near Miyagi (MYGH10) shows two main wavefronts separated by 40 seconds. A possible physical mechanism for the slip reactivation is thermal pressurization in the fault zone. Kanamori and Heaton (2000) proposed that frictional melting and fluid pressurization can play a key role in the rupture dynamics of giant earthquakes. If fluid exists in a fault zone, an increase in temperature can raise the pore pressure enough to significantly reduce the frictional strength. Therefore, during a large earthquake, areas of large slip undergoing strong thermal pressurization may experience a second drop of the frictional strength after reaching a certain amount of slip. Following this principle, we adopt a slip-weakening friction law and prescribe a maximum slip after which the friction coefficient linearly drops down again. This friction law has been implemented in the latest unstructured spectral element code SPECFEM3D (Peter et al., 2012). 
The non-planar subduction interface has been taken into account, and a big asperity patch has been placed on it inside the areas of large slip (>50 m) close to the trench. Within the first 2 km below the trench, a negative stress drop has been imposed to represent the energy absorption zone that attenuates high-frequency radiation in the shallow part of the subduction zone. At down dip, where high-frequency radiation bursts have been detected by back-projection techniques, e.g. Meng et al. (2011) and Ishii (2011), small asperities have been included in our dynamic rupture model. Finally, a comparison of static geodetic free-surface displacements and synthetics has been made to obtain our best model. We additionally compare seismograms with the aim of reproducing the main features of the strong ground motion recorded from this earthquake. Moreover, the spatial-temporal rupture evolution detected by back projection at down dip is in good agreement with the rupture evolution of our dynamic model.
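The modified slip-weakening law described above, a second linear strength drop once slip exceeds a prescribed threshold, can be sketched as a pure function of slip. All parameter values here are illustrative assumptions, not those of the simulation.

```python
def friction_coefficient(slip, mu_s=0.6, mu_d=0.4, dc=1.0,
                         slip_react=20.0, mu_d2=0.2, dc2=5.0):
    """Linear slip-weakening with a prescribed second strength drop
    ("reactivation") once slip exceeds slip_react, mimicking, e.g.,
    thermal pressurization. Slip and the weakening distances dc, dc2
    are in meters; all values are illustrative."""
    if slip < dc:
        # first weakening phase: mu_s down to mu_d over dc
        return mu_s - (mu_s - mu_d) * slip / dc
    if slip < slip_react:
        return mu_d
    if slip < slip_react + dc2:
        # second drop: mu_d down to mu_d2 over dc2
        return mu_d - (mu_d - mu_d2) * (slip - slip_react) / dc2
    return mu_d2
```

In a dynamic rupture code this function would set the fault strength each time step from the accumulated slip, so a patch that passes the threshold re-weakens and can slip again.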
Where do we stand after twenty years of dynamic triggering studies? (Invited)
NASA Astrophysics Data System (ADS)
Prejean, S. G.; Hill, D. P.
2013-12-01
In the past two decades, remote dynamic triggering of earthquakes by other earthquakes has been explored in a variety of physical environments with a wide array of observation and modeling techniques. These studies have significantly refined our understanding of the state of the crust and the physical conditions controlling earthquake nucleation. Despite an ever-growing database of dynamic triggering observations, significant uncertainties remain and vigorous debate in almost all aspects of the science continues. For example, although dynamic earthquake triggering can occur with peak dynamic stresses as small as 1 kPa, triggering thresholds and their dependence on local stress state, hydrological environment, and frictional properties of faults are not well understood. Some studies find a simple threshold based on the peak amplitude of shaking, while others find dependencies on frequency, recharge time, and other parameters. Considerable debate remains over the range of physical processes responsible for dynamic triggering, and the wide variation in dynamic triggering responses and time scales suggests triggering by multiple physical processes. Although Coulomb shear failure with various friction laws can often explain dynamic triggering, particularly instantaneous triggering, delayed dynamic triggering may be dependent on fluid transport and other slowly evolving aseismic processes. Although our understanding of the global distribution of dynamic triggering has improved, it is far from complete due to spatially uneven monitoring. A major challenge involves establishing statistical significance of potentially triggered earthquakes, particularly if they are isolated events or time-delayed with respect to triggering stresses. Here we highlight these challenges and opportunities with existing data. 
We focus on the environmental dependence of dynamic triggering by large remote earthquakes, particularly in volcanic and geothermal systems, as these systems often have high rates of background seismicity. In many volcanic and geothermal systems, such as The Geysers in Northern California, dynamic triggering of micro-earthquakes is frequent and predictable. In contrast, most active and even erupting volcanoes in Alaska (with the exception of the Katmai Volcanic Cluster) do not experience dynamic triggering. We explore why.
The network construction of CSELF for earthquake monitoring and its preliminary observation
NASA Astrophysics Data System (ADS)
Tang, J.; Zhao, G.; Chen, X.; Bing, H.; Wang, L.; Zhan, Y.; Xiao, Q.; Dong, Z.
2017-12-01
The Electromagnetic (EM) anomaly in short-term earthquake precursory is most sensitive physical phenomena. Scientists believe that EM monitoring for earthquake is one of the most promising means of forecasting. However, existing ground-base EM observation confronted with increasing impact cultural noises, and the lack of a frequency range of higher than 1Hz observations. Control source of extremely low frequency (CSELF) EM is a kind of good prospective new approach. It not only has many advantages with high S/N ratio, large coverage area, probing depth ect., thereby facilitating the identification and capture anomaly signal, and it also can be used to study the electromagnetic field variation and to study the crustal medium changes of the electric structure.The first CSELF EM network for earthquake precursory monitoring with 30 observatories in China has been constructed. The observatories distribute in Beijing surrounding area and in the southern part of North-South Seismic Zone. GMS-07 system made by Metronix is equipped at each station. The observation mixed CSELF and nature source, that is, if during the control source is off transmitted, the nature source EM signal will be recorded. In genernal, there are 3 5 frequencies signals in the 0.1-300Hz frequency band will be transmit in every morning and evening in a fixed time (length 2 hours). Besides time, natural field to extend the frequency band (0.001 1000 Hz) will be observed by using 3 sample frequencies, 4096Hz sampling rate for HF, 256Hz for MF and 16Hz for LF. The low frequency band records continuously all-day and the high and medium frequency band use a slices record, the data records by cycling acquisition in every 10 minutes with length of about 4 to 8 seconds and 64 to 128 seconds , respectively. All the data is automatically processed by server installed in the observatory. The EDI file including EM field spectrums and MT responses and time series files will be sent the data center by internet. 
We present observation data collected since the network was set up, including EM field spectrum variations and apparent resistivity changes at different frequencies over time at the observatories; they show both regular and irregular changes. This study is supported by the ELF Engineering Project of China (15212Z0000001), the National Natural Science Foundation of China (41674081), etc.
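A rough sketch of the cyclic acquisition scheme described above, using the abstract's nominal figures (10-minute cycles, HF slices of up to 8 s, MF slices of up to 128 s, continuous LF recording); the per-channel daily sample counts are illustrative only, not a specification of the GMS-07 system.

```python
# Sketch of the cyclic acquisition scheme described in the abstract.
# Per-channel sample counts only; slice lengths use the upper quoted values.

CYCLE_S = 600  # records are acquired in 10-minute cycles

def samples_per_day(rate_hz, slice_s=None):
    """Samples/day for one channel: continuous if slice_s is None,
    otherwise one slice of slice_s seconds per 10-minute cycle."""
    if slice_s is None:
        return rate_hz * 86400          # continuous recording
    cycles = 86400 // CYCLE_S           # 144 cycles per day
    return rate_hz * slice_s * cycles

hf = samples_per_day(4096, slice_s=8)    # high-frequency band, 4-8 s slices
mf = samples_per_day(256, slice_s=128)   # medium-frequency band, 64-128 s slices
lf = samples_per_day(16)                 # low-frequency band, continuous

print(hf, mf, lf)
```

With these nominal slice lengths the sliced HF and MF channels each produce the same daily sample count, while the continuous LF channel produces far fewer samples.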
U.S. Geological Survey (USGS) Earthquake Web Applications
NASA Astrophysics Data System (ADS)
Fee, J.; Martinez, E.
2015-12-01
USGS Earthquake web applications provide access to earthquake information from USGS and other Advanced National Seismic System (ANSS) contributors. One of the primary goals of these applications is to provide a consistent experience for accessing both near-real time information as soon as it is available and historic information after it is thoroughly reviewed. Millions of people use these applications every month including people who feel an earthquake, emergency responders looking for the latest information about a recent event, and scientists researching historic earthquakes and their effects. Information from multiple catalogs and contributors is combined by the ANSS Comprehensive Catalog into one composite catalog, identifying the most preferred information from any source for each event. A web service and near-real time feeds provide access to all contributed data, and are used by a number of users and software packages. The Latest Earthquakes application displays summaries of many events, either near-real time feeds or custom searches, and the Event Page application shows detailed information for each event. Because all data is accessed through the web service, it can also be downloaded by users. The applications are maintained as open source projects on GitHub, and use mobile-first and responsive-web-design approaches to work well on both mobile devices and desktop computers. http://earthquake.usgs.gov/earthquakes/map/
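Because all event data are exposed through the public web service and GeoJSON feeds, they can be consumed programmatically. A minimal sketch of parsing a feed in the USGS GeoJSON summary format follows; the sample record is invented for illustration and is not a real event.

```python
# Minimal sketch of reading events from the USGS GeoJSON summary feed format
# (earthquake.usgs.gov/earthquakes/feed/). The record below is a synthetic
# example; the field layout follows the documented GeoJSON format.
import json

feed_json = json.dumps({
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "id": "example0001",  # hypothetical event id
        "properties": {"mag": 4.2, "place": "example region", "time": 1700000000000},
        "geometry": {"type": "Point", "coordinates": [-122.0, 37.5, 8.0]},
    }],
})

def summarize(feed_text):
    """Return (id, magnitude, depth_km) tuples from a GeoJSON summary feed."""
    feed = json.loads(feed_text)
    return [(f["id"], f["properties"]["mag"], f["geometry"]["coordinates"][2])
            for f in feed["features"]]

print(summarize(feed_json))
```

In practice the same function would be fed the text returned by an HTTP GET of one of the published feed URLs.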
Spatial modeling for estimation of earthquakes economic loss in West Java
NASA Astrophysics Data System (ADS)
Retnowati, Dyah Ayu; Meilano, Irwan; Riqqi, Akhmad; Hanifa, Nuraini Rahma
2017-07-01
Indonesia has a high vulnerability to earthquakes, and its low adaptive capacity can turn an earthquake into a disaster of serious concern. Risk management should therefore be applied to reduce the impacts, for example by estimating the economic loss caused by the hazard. The study area of this research is West Java. The main reason West Java is vulnerable to earthquakes is the existence of active faults: the Lembang Fault, the Cimandiri Fault, the Baribis Fault, and also the Megathrust subduction zone. This research estimates the earthquake economic loss from several sources in West Java. The economic loss is calculated using the HAZUS method, whose required components are the hazard (earthquakes), the exposure (buildings), and the vulnerability. Spatial modeling is used to build the exposure data and to make the information easier to access by showing distribution maps rather than only tabular data. As a result, West Java could suffer an economic loss of up to 1,925,122,301,868,140 IDR ± 364,683,058,851,703 IDR, estimated from six earthquake sources at their maximum possible magnitudes. However, this estimate corresponds to worst-case earthquake occurrence and is probably an over-estimate.
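A hedged sketch of the exposure times vulnerability aggregation underlying a HAZUS-style loss estimate; the damage-ratio curve and exposure values below are invented for illustration, and real HAZUS fragility functions are far more detailed than this toy piecewise curve.

```python
# Toy HAZUS-style aggregation: loss = sum over sites of
# (building replacement value) x (damage ratio at that site's shaking level).

def economic_loss(buildings, damage_ratio):
    """Sum replacement value times the damage ratio at each site's intensity."""
    return sum(b["value"] * damage_ratio(b["intensity"]) for b in buildings)

def simple_damage_ratio(mmi):
    # invented piecewise curve: no loss below MMI 6, total loss at MMI 10
    return min(max((mmi - 6.0) / 4.0, 0.0), 1.0)

# hypothetical exposure grid cells (values in IDR)
exposure = [
    {"value": 2.0e9, "intensity": 7.0},
    {"value": 5.0e8, "intensity": 9.0},
]
print(economic_loss(exposure, simple_damage_ratio))
```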
Earthquake Occurrence in Bangladesh and Surrounding Region
NASA Astrophysics Data System (ADS)
Al-Hussaini, T. M.; Al-Noman, M.
2011-12-01
The collision of the northward-moving Indian plate with the Eurasian plate is the cause of frequent earthquakes in the region comprising Bangladesh and neighbouring India, Nepal and Myanmar. Historical records indicate that Bangladesh was affected by five major earthquakes of magnitude greater than 7.0 (Richter scale) between 1869 and 1930. This paper presents some statistical observations of earthquake occurrence as basic groundwork for seismic hazard assessment of this region. An up-to-date catalogue covering earthquakes in the region bounded by 17°-30°N and 84°-97°E, from historical times to 2010, is derived from various reputed international sources, including the ISC, IRIS, Indian agencies, and available publications. Careful scrutiny is applied to remove duplicate or uncertain earthquake events. Earthquake magnitudes in the range 1.8 to 8.1 are obtained, and relationships between different magnitude scales are studied. Aftershocks are removed from the catalogue using magnitude-dependent space and time windows. The mainshock data are then analyzed to obtain the completeness period for different magnitudes by evaluating their temporal homogeneity. Spatial and temporal distributions of earthquakes, magnitude-depth histograms, and other statistical analyses are used to characterize the distribution of seismic activity in this region.
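The magnitude-dependent space and time windows used for aftershock removal can be implemented as follows; the Gardner-Knopoff (1974) windows shown here are one widely used choice and are an assumption on my part, not necessarily the exact windows used by the authors.

```python
import math

def gk_windows(mag):
    """Gardner-Knopoff (1974) declustering windows: returns the distance (km)
    and time (days) within which smaller events are flagged as aftershocks
    of a mainshock of the given magnitude."""
    dist_km = 10 ** (0.1238 * mag + 0.983)
    if mag >= 6.5:
        time_days = 10 ** (0.032 * mag + 2.7389)
    else:
        time_days = 10 ** (0.5409 * mag - 0.547)
    return dist_km, time_days

# e.g. an M6.0 mainshock claims aftershocks within roughly 53 km and 500 days
d, t = gk_windows(6.0)
print(round(d, 1), round(t, 1))
```

Declustering then amounts to scanning the catalogue in time order and removing any event that falls inside the window of a preceding larger event.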
Long-term changes in regular and low-frequency earthquake inter-event times near Parkfield, CA
NASA Astrophysics Data System (ADS)
Wu, C.; Shelly, D. R.; Johnson, P. A.; Gomberg, J. S.; Peng, Z.
2012-12-01
The temporal evolution of earthquake inter-event time may provide important clues for the timing of future events and underlying physical mechanisms of earthquake nucleation. In this study, we examine inter-event times from 12-yr catalogs of ~50,000 earthquakes and ~730,000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault. We focus on the long-term evolution of inter-event times after the 2003 Mw6.5 San Simeon and 2004 Mw6.0 Parkfield earthquakes. We find that inter-event times decrease by ~4 orders of magnitude after the Parkfield and San Simeon earthquakes and are followed by a long-term recovery with time scales of ~3 years and more than 8 years for earthquakes along and to the southwest of the San Andreas fault, respectively. The differing long-term recovery of the earthquake inter-event times is likely a manifestation of different aftershock recovery time scales that reflect the different tectonic loading rates in the two regions. We also observe a possible decrease of LFE inter-event times in some LFE families, followed by a recovery with time scales of ~4 months to several years. The drop in the recurrence time of LFEs after the Parkfield earthquake is likely caused by a combination of the dynamic and positive static stress induced by the Parkfield earthquake, and the long-term recovery in LFE recurrence time could be due to post-seismic relaxation or gradual recovery of the fault zone material properties. Our on-going work includes better constraining and understanding the physical mechanisms responsible for the observed long-term recovery in earthquake and LFE inter-event times.
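The basic inter-event-time measurement can be sketched as follows, using a synthetic catalog (a uniform background followed by a dense aftershock burst) in place of the real Parkfield data.

```python
import numpy as np

# Sketch of the inter-event-time measurement: sort origin times, difference
# them, and compare the median before and after a mainshock. Times are
# synthetic (in days); a real study would use the catalog origin times.
rng = np.random.default_rng(0)
pre = np.sort(rng.uniform(0.0, 100.0, 50))            # background seismicity
post = 100.0 + np.sort(rng.uniform(0.0, 10.0, 200))   # aftershock burst
times = np.concatenate([pre, post])

dt = np.diff(times)                                   # inter-event times
mainshock = 100.0
median_before = np.median(dt[times[1:] <= mainshock])
median_after = np.median(dt[times[1:] > mainshock])
print(median_after < median_before)  # inter-event times drop after the mainshock
```

Tracking that median in sliding time windows gives the recovery curves whose time scales the abstract compares across regions.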
An interdisciplinary approach to study Pre-Earthquake processes
NASA Astrophysics Data System (ADS)
Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.
2017-12-01
We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, whether for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will be subsequently published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research and to bring this knowledge and awareness to a broader geosciences community.
Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.
2008-01-01
We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.
NASA Astrophysics Data System (ADS)
Singh, A. P.; Mishra, O. P.
2015-10-01
In order to understand the processes involved in the genesis of monsoon-induced micro-to-moderate earthquakes after heavy rainfall during the Indian summer monsoon period beneath the 2011 Talala, Saurashtra earthquake (Mw 5.1) source zone, we assimilated 3-D microstructures of the sub-surface rock materials using a data set recorded by the Seismic Network of Gujarat (SeisNetG), India. Crack attributes in terms of crack density (ε), the saturation rate (ξ) and porosity parameter (ψ) were determined from the estimated 3-D sub-surface velocities (Vp, Vs) and Poisson's ratio (σ) structures of the area at varying depths. We distinctly imaged high-ε, high-ξ and low-ψ anomalies at shallow depths, extending up to 9-15 km. We infer that the existence of a sub-surface fractured rock matrix connected to the surface from the source zone may have contributed to the changes in differential strain deep in the crust due to the infiltration of rainwater, which in turn induced the micro-to-moderate earthquake sequence beneath the Talala source zone. Infiltration of rainwater during the Indian summer monsoon might have hastened the failure of the rock by perturbing the crustal volume strain of the causative source rock matrix associated with the changes in the seismic moment release beneath the surface. Analyses of crack attributes suggest that the fractured volume of the rock matrix with high porosity and lowered seismic strength beneath the source zone might have considerable influence on the style of fault displacements due to seismo-hydraulic fluid flows. A localized zone of micro-cracks diagnosed within the causative rock matrix connected to the water table, and its association with shallow crustal faults, might have acted as a conduit for infiltrating the precipitation down to the shallow crustal layers following the fault suction mechanism of pore pressure diffusion, triggering the monsoon-induced earthquake sequence beneath the source zone.
Estimating Source Duration for Moderate and Large Earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei
2017-04-01
Constructing a relationship between seismic moment (M0) and source duration (τ) is important for seismic hazard in Taiwan, where earthquakes are quite active. In this study, we used an inversion process based on teleseismic P-waves to derive the M0-τ relationship for the Taiwan region for the first time. Fifteen earthquakes with MW 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process simultaneously determines source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and fault plane solutions are estimated. Results show that the estimated τ, ranging from 2.7 to 24.9 sec, varies with the one-third power of M0. That is, M0 is proportional to τ^3, and the relationship between them is M0 = 0.76×10^23 τ^3, where M0 is in dyne-cm and τ in seconds. The M0-τ relationship derived in this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used the constructed M0-τ relationship to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake, with M0 = 2-5×10^27 dyne-cm (corresponding to Mw = 7.5-7.7), to be approximately 29-40 sec, in agreement with many previous estimates of its source duration (28-42 sec).
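The quoted scaling can be checked directly: inverting M0 = 0.76×10^23 τ^3 gives τ = (M0 / 0.76×10^23)^(1/3), which the sketch below applies to the Chi-Chi moment range given above.

```python
# Invert the reported M0-tau scaling, M0 = 0.76e23 * tau**3
# (M0 in dyne-cm, tau in seconds), for the Chi-Chi moment range.

def source_duration(m0_dyne_cm, c=0.76e23):
    """Source duration tau (s) implied by the M0-tau relation."""
    return (m0_dyne_cm / c) ** (1.0 / 3.0)

for m0 in (2e27, 5e27):
    print(round(source_duration(m0), 1))
```

The two endpoints reproduce the roughly 29-40 s duration quoted for the Chi-Chi earthquake.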
NASA Astrophysics Data System (ADS)
Ghica, D.; Ionescu, C.
2012-04-01
The Plostina seismo-acoustic array has recently been deployed by the National Institute for Earth Physics in the central part of Romania, near the Vrancea epicentral area. The array has a 2.5 km aperture and consists of 7 seismic sites (PLOR) and 7 collocated infrasound instruments (IPLOR). The array is being used to assess the importance of collocated seismic and acoustic sensors for the purposes of (1) seismic monitoring of local and regional events, and (2) acoustic measurement, consisting of detection of infrasound events (explosions, mine and quarry blasts, earthquakes, aircraft, etc.). This paper focuses on the characterization of infrasonic and seismic signals from earthquakes and explosions (accidental and mining type). Two Vrancea earthquakes with magnitude above 5.0 were selected for this study: one occurred on 1 May 2011 (MD = 5.3, h = 146 km), and the other on 4 October 2011 (MD = 5.2, h = 142 km). The infrasonic signals from the earthquakes have the appearance of the vertical component of the seismic signals. Because the mechanism of infrasonic wave formation is the coupling of seismic waves with the atmosphere, trace velocity values for such signals are compatible with the characteristics of the various seismic phases observed with the PLOR array. The study also evaluates and characterizes infrasound and seismic data recorded from the explosion caused by the military accident at the Evangelos Florakis Naval Base in Cyprus on 11 July 2011. Additionally, seismo-acoustic signals presumed to be related to strong mine and quarry blasts were investigated. Ground truth of mine observations provides validation of this interpretation.
The combined seismo-acoustic analysis uses two types of detectors for signal identification: one is the automatic detector DFX-PMCC, applied for infrasound detection and characterization, while the other, used for seismic data, is based on array processing techniques (beamforming and frequency-wavenumber analysis). Spectrograms of the recorded infrasonic and seismic data were examined, showing that an earthquake produces acoustic signals with high energy in the 1 to 5 Hz frequency range, while for the explosion this range lies below 0.6 Hz. Using the combined analysis of seismic and acoustic data, the Plostina array can greatly enhance event detection and localization in the region. The analysis can also be particularly important in identifying sources of industrial explosions, and therefore in monitoring the hazard created both by earthquakes and by anthropogenic sources of pollution (chemical factories, nuclear and power plants, refineries, mines).
Earthquake Prediction in Large-scale Faulting Experiments
NASA Astrophysics Data System (ADS)
Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.
2004-12-01
We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998) and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on a probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets.
The gradually accelerating precursory aseismic slip observed in the region of nucleation in these experiments is consistent with observations and theory of Dieterich and Kilgore (1996). Precursory strains can be detected typically after 50% of the total loading time. The Dieterich and Kilgore approach implies an alternative method of earthquake prediction based on comparing real-time strain monitoring with previous precursory strain records or with physically-based models of accelerating slip. Near failure, time to failure t is approximately inversely proportional to precursory slip rate V. Based on a least-squares fit to accelerating slip velocity from ten or more events, the standard deviation of the residual between predicted and observed log t is typically 0.14. Scaling these results to natural recurrence suggests that a year prior to an earthquake, failure time can be predicted from measured fault slip rate with a typical error of 140 days, and a day prior to the earthquake with a typical error of 9 hours. However, such predictions require detecting aseismic nucleating strains, which have not yet been found in the field, and distinguishing earthquake precursors from other strain transients. There is some field evidence of precursory seismic strain for large earthquakes (Bufe and Varnes, 1993) which may be related to our observations. In instances where precursory activity is spatially variable during the interseismic period, as in our experiments, distinguishing precursory activity might be best accomplished with deep arrays of near-fault instruments and pattern recognition algorithms such as principal component analysis (Rundle et al., 2000).
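The inverse proportionality between time to failure t and precursory slip rate V amounts to a slope of -1 in log-log coordinates, which a least-squares line fit recovers; the data below are idealized and noise-free, unlike the laboratory records.

```python
import numpy as np

# Sketch of the failure-time regression: near failure t ~ 1/V, so fitting
# log10(t) against log10(V) by least squares should give a slope near -1.
# Synthetic, noise-free data stand in for the measured slip velocities.
t = np.logspace(-3, 0, 20)   # times to failure (arbitrary units)
V = 1.0 / t                  # idealized inverse proportionality

slope, intercept = np.polyfit(np.log10(V), np.log10(t), 1)
print(round(slope, 2))
```

With real, noisy velocity records the scatter of the residuals about this line is what sets the quoted 0.14 standard deviation in log t.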
Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard
NASA Astrophysics Data System (ADS)
Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.
2014-12-01
Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. 
Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available for further slip and for subsequent earthquakes. This suite of models reveals that efficiency may be a useful tool for determining the relative seismic hazard of different segmented fault systems, while accounting for coseismic damage zone production is critical in assessing fault interactions and the associated energy budgets of specific systems.
Stress Drop and Directivity Patterns Observed in Small-Magnitude (
NASA Astrophysics Data System (ADS)
Ruhl, C. J.; Hatch, R. L.; Abercrombie, R. E.; Smith, K.
2017-12-01
Recent improvements in seismic instrumentation and network coverage in the Reno, NV area have provided high-quality records of abundant microseismicity, including several swarms and clusters. Here, we discuss stress drop and directivity patterns of small-magnitude seismicity in the 2008 Mw4.9 Mogul earthquake swarm in Reno, NV and in the nearby region of an ML3.2 sequence near Virginia City, NV. In both sequences, double-difference relocated earthquakes cluster on multiple distinct structures consistent with focal mechanism and moment tensor fault plane solutions. Both sequences also show migration potentially related to fluid flow. We estimate corner frequency and stress drop using EGF-derived spectral ratios, convolving earthquake pairs (target*EGF) such that we preserve phase and recover source-time functions (STF) on a station-by-station basis. We then stack individual STFs per station for all EGF-target pairs per target earthquake, increasing the signal-to-noise of our results. By applying an azimuthal- and incidence-angle-dependent stretching factor to STFs in the time domain, we are able to invert for rupture directivity and velocity assuming both unilateral and bilateral rupture. Earthquakes in both sequences, some as low as ML2.1, show strong unilateral directivity consistent with independent fault plane solutions. We investigate and compare the relationship between rupture and migration directions on subfaults within each sequence. Average stress drops for both sequences are 4 MPa, but there is large variation in individual estimates for both sequences. Although this variation is not explained simply by any one parameter (e.g., depth), spatiotemporal variation in the Mogul swarm is distinct: coherent clusters of high and low stress drop earthquakes along the mainshock fault plane are seen, and high-stress-drop foreshocks correlate with an area of reduced aftershock productivity. 
These observations are best explained by a difference in rheology along the fault plane. The unprecedented detail achieved for these small magnitude earthquakes confirms that stress drop, when measured precisely, is a valuable observation of physically-meaningful fault zone properties and earthquake behavior.
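Corner frequencies from spectral ratios are commonly converted to stress drops with a circular source model; the sketch below uses the Brune (1970) relations (r = 0.37 β/fc and Δσ = 7M0/16r³) with illustrative values, and is an assumption on my part rather than the authors' exact procedure.

```python
# Hedged sketch of a corner-frequency-to-stress-drop conversion using the
# Brune (1970) circular source model. All numerical values are illustrative.

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0, k=0.37):
    """Stress drop (Pa) from seismic moment (N m) and corner frequency (Hz):
    source radius r = k*beta/fc, then delta_sigma = 7*M0 / (16*r**3)."""
    r = k * beta_ms / fc_hz        # source radius in meters
    return 7.0 * m0_nm / (16.0 * r ** 3)

# e.g. a small earthquake with M0 ~ 1e12 N m and fc ~ 10 Hz
print(round(brune_stress_drop(1e12, 10.0) / 1e6, 2), "MPa")
```

Because r enters cubed, modest uncertainty in the corner frequency maps into large scatter in stress drop, one reason individual estimates vary so widely even within a single sequence.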
Numerical reconstruction of tsunami source using combined seismic, satellite and DART data
NASA Astrophysics Data System (ADS)
Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor
2014-05-01
Recent tsunamis, for instance in Japan (2011), in Sumatra (2004), and at the Indian coast (2004), showed that a system producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. The seismic data about the source are usually obtained a few tens of minutes after an event has occurred (seismic waves travel at about five hundred kilometres per minute, while tsunami waves travel at less than twelve kilometres per minute). This difference in the arrival times of seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source from additional measurements of a passing wave is called the inverse tsunami problem. We investigate three different inverse problems of determining a tsunami source using three different kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images, and seismic data. These problems are severely ill-posed. We apply regularization techniques such as Fourier expansion, truncated singular value decomposition, and numerical regularization to control the degree of ill-posedness. The algorithm for selecting the truncation number of singular values of the inverse problem operator, chosen consistently with the error level in the measured data, is described and analyzed. In numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function.
To calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the three different types of data increases the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, and coastal zone floods, and to estimate risks for coastal constructions under wave run-up and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
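The truncated-SVD regularization mentioned above can be sketched in a few lines: solve A x = b while discarding singular values below a cutoff tied to the data error level. The operator here is a random matrix made nearly rank-deficient for illustration, standing in for a discretized shallow-water observation operator.

```python
import numpy as np

# Truncated-SVD solve of an ill-conditioned linear system A x = b:
# keep only singular values above a relative cutoff, zero the rest.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
A[:, -1] = A[:, 0] * 1e-8          # make the operator nearly rank-deficient
x_true = np.ones(10)
b = A @ x_true                     # synthetic noise-free data

def tsvd_solve(A, b, rel_cutoff=1e-6):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_cutoff * s[0]               # truncate small singular values
    inv_s = np.where(keep, 1.0 / s, 0.0)       # regularized pseudo-inverse
    return Vt.T @ (inv_s * (U.T @ b))

x = tsvd_solve(A, b)
print(np.linalg.norm(A @ x - b) < 1e-5)        # data are still fit well
```

The choice of cutoff plays the role of the truncation-number selection rule discussed in the abstract: larger cutoffs stabilize the solution at the cost of resolution.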
42 CFR 412.25 - Excluded hospital units: Common requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... physical facility or because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (c... (ii) Because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (5) For cost...
42 CFR 412.25 - Excluded hospital units: Common requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... physical facility; or (ii) Because of catastrophic events such as fires, floods, earthquakes, or tornadoes... as fires, floods, earthquakes, or tornadoes. (5) For cost reporting periods beginning on or after...
42 CFR 412.25 - Excluded hospital units: Common requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... physical facility or because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (c... (ii) Because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (5) For cost...
42 CFR 412.25 - Excluded hospital units: Common requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... physical facility or because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (c... (ii) Because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (5) For cost...
42 CFR 412.25 - Excluded hospital units: Common requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... physical facility or because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (c... (ii) Because of catastrophic events such as fires, floods, earthquakes, or tornadoes. (5) For cost...
Three-dimensional ground-motion simulations of earthquakes for the Hanford area, Washington
Frankel, Arthur; Thorne, Paul; Rohay, Alan
2014-01-01
This report describes the results of ground-motion simulations of earthquakes using three-dimensional (3D) and one-dimensional (1D) crustal models conducted for the probabilistic seismic hazard assessment (PSHA) of the Hanford facility, Washington, under the Senior Seismic Hazard Analysis Committee (SSHAC) guidelines. The first portion of this report demonstrates that the 3D seismic velocity model for the area produces synthetic seismograms with characteristics (spectral response values, duration) that better match those of the observed recordings of local earthquakes, compared to a 1D model with horizontal layers. The second part of the report compares the response spectra of synthetics from 3D and 1D models for moment magnitude (M) 6.6–6.8 earthquakes on three nearby faults and for a dipping plane wave source meant to approximate regional S-waves from a Cascadia great earthquake. The 1D models are specific to each site used for the PSHA. The use of the 3D model produces spectral response accelerations at periods of 0.5–2.0 seconds as much as a factor of 4.5 greater than those from the 1D models for the crustal fault sources. The spectral accelerations of the 3D synthetics for the Cascadia plane-wave source are as much as a factor of 9 greater than those from the 1D models. The differences between the spectral accelerations for the 3D and 1D models are most pronounced for sites with thicker supra-basalt sediments, for earthquakes on the Rattlesnake Hills fault, and for the Cascadia plane-wave source.
Crowd-Sourced Global Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.
2014-12-01
Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
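The Kalman-filter fusion described above, in which accelerometer samples drive the prediction step and noisy GNSS positions correct it, can be sketched with a textbook loosely coupled filter. The function name, noise parameters, and two-component state are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_gnss_accel(gnss_pos, accel, dt, r_gnss=0.25, q_acc=0.01):
    """Loosely coupled Kalman filter for one displacement component.
    State x = [displacement, velocity]; the accelerometer enters as a
    control input, the GNSS position as the measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])        # acceleration (control) input
    H = np.array([[1.0, 0.0]])             # GNSS observes displacement only
    Q = q_acc * np.outer(B, B)             # process noise from accel noise
    R = np.array([[r_gnss]])               # GNSS measurement variance (m^2)
    x = np.zeros(2)
    P = np.eye(2)
    est = []
    for z, a in zip(gnss_pos, accel):
        # predict with the accelerometer sample
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # update with the noisy GNSS displacement
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

The key effect reported above falls out of the measurement update: the larger the GNSS variance `r_gnss` relative to the accelerometer-driven process noise, the more the filter leans on the acceleration record, which is why pairing ~1 m C/A-code positions with cheap accelerometers can reach centimeter-level displacement estimates.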
NASA Astrophysics Data System (ADS)
Kausel, Edgar; Campos, Jaime
1992-08-01
The only known great (Ms = 8) intermediate-depth earthquake localized downdip of the main thrust zone of the Chilean subduction zone occurred landward of Antofagasta on 9 December 1950. In this paper we determine the source parameters and rupture process of this shock by modeling long-period body waves. The source mechanism corresponds to a downdip tensional intraplate event rupturing along a nearly vertical plane, with a seismic moment of M0 = 1 × 10^28 dyn cm, strike 350°, dip 88°, slip 270°, Mw = 7.9, and a stress drop of about 100 bar. The source time function consists of two subevents, the second being responsible for 70% of the total moment release. The unusually large magnitude (Ms = 8) of this intermediate-depth event suggests a rupture through the entire lithosphere. The spatial and temporal stress regime in this region is discussed. The simplest interpretation suggests that a large thrust earthquake should follow the 1950 tensional shock. Considering that the historical record of the region does not show large earthquakes, a 'slow' earthquake can be postulated as an alternative mechanism to unload the thrust zone. A weakly coupled subduction zone (within an otherwise strongly coupled region, as evidenced by great earthquakes to the north and south) or the existence of creep is not consistent with the occurrence of a large tensional earthquake in the subducting lithosphere downdip of the thrust zone. The study of focal mechanisms of the outer-rise earthquakes would add information that would help us infer the present state of stress in the thrust region.
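The reported Mw = 7.9 is consistent (to within rounding) with the quoted seismic moment under the standard Hanks-Kanamori relation Mw = (2/3) log10(M0) - 10.7 for M0 in dyn cm; the helper name below is illustrative:

```python
import math

def moment_magnitude(m0_dyn_cm):
    """Hanks & Kanamori (1979) moment magnitude; M0 in dyn*cm."""
    return (2.0 / 3.0) * math.log10(m0_dyn_cm) - 10.7

# M0 = 1 x 10^28 dyn cm, as reported for the 1950 Antofagasta event
mw = moment_magnitude(1e28)  # ~7.97, i.e. Mw ~ 8
```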
Towards routine determination of focal mechanisms obtained from first motion P-wave arrivals
NASA Astrophysics Data System (ADS)
Lentas, K.
2018-03-01
The Bulletin of the International Seismological Centre (ISC) contains information on earthquake mechanisms collected from many different sources, including national and global agencies, resulting in satisfactory coverage over a wide magnitude range (M ~2-9). Nevertheless, there are still a vast number of earthquakes with no reported source mechanisms, especially at magnitudes up to 5. This study investigates the possibility of calculating earthquake focal mechanisms in a routine and systematic way based on P-wave first-motion polarities. Any available parametric data in the ISC database are used, as well as polarities auto-picked from waveform data up to teleseismic epicentral distances (90°) for stations that do not report to the ISC. The determination of the earthquake mechanisms is carried out with a modified version of the HASH algorithm that is compatible with a wide range of epicentral distances and takes into account the ellipsoids defined by the ISC location errors as well as uncertainties in the Earth's structure. Initially, benchmark tests for a set of ISC-reviewed earthquakes (mb > 4.5) are carried out, and the HASH mechanism classification scheme is used to define the mechanism quality. Focal mechanisms of quality A, B, and C with an azimuthal gap up to 90° compare well to the benchmark mechanisms. Nevertheless, the majority of the obtained mechanisms fall into class D as a result of limited polarity data from stations at local/regional epicentral distances. Specifically, the computation of the minimum rotation angle between the obtained mechanisms and the benchmarks reveals that 41 per cent of the examined earthquakes show rotation angles up to 35°. Finally, the technique is applied to a small set of earthquakes from the reviewed ISC bulletin, where mechanisms for 62 earthquakes with no previously reported source mechanisms are successfully obtained.
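The core of any first-motion method like HASH is predicting the P-wave polarity a candidate double-couple mechanism implies at a given station and comparing it with the observed pick. A minimal sketch using the standard Aki-Richards moment-tensor parameterization in north-east-down coordinates (function names are illustrative; this is not the modified HASH code):

```python
import numpy as np

def moment_tensor(strike, dip, rake):
    """Unit double-couple moment tensor in north-east-down coordinates
    (Aki & Richards convention); angles in degrees."""
    s, d, r = np.radians([strike, dip, rake])
    mxx = -(np.sin(d)*np.cos(r)*np.sin(2*s) + np.sin(2*d)*np.sin(r)*np.sin(s)**2)
    mxy =  np.sin(d)*np.cos(r)*np.cos(2*s) + 0.5*np.sin(2*d)*np.sin(r)*np.sin(2*s)
    mxz = -(np.cos(d)*np.cos(r)*np.cos(s) + np.cos(2*d)*np.sin(r)*np.sin(s))
    myy =  np.sin(d)*np.cos(r)*np.sin(2*s) - np.sin(2*d)*np.sin(r)*np.cos(s)**2
    myz = -(np.cos(d)*np.cos(r)*np.sin(s) - np.cos(2*d)*np.sin(r)*np.cos(s))
    mzz =  np.sin(2*d)*np.sin(r)
    return np.array([[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]])

def p_polarity(strike, dip, rake, azimuth, takeoff):
    """Predicted first-motion sign (+1 compression, -1 dilatation) for a
    ray leaving the source at the given azimuth and takeoff angle
    (degrees, takeoff measured from straight down)."""
    m = moment_tensor(strike, dip, rake)
    az, toa = np.radians([azimuth, takeoff])
    g = np.array([np.sin(toa)*np.cos(az),   # ray direction, NED
                  np.sin(toa)*np.sin(az),
                  np.cos(toa)])
    return int(np.sign(g @ m @ g))          # far-field P amplitude ~ g.M.g
```

A grid search over (strike, dip, rake) minimizing the fraction of mismatched polarities, perturbed over the location-error ellipsoid and velocity-model uncertainty, is the essence of the HASH-style procedure described above.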
NASA Astrophysics Data System (ADS)
Anggraini, Ade; Sobiesiak, Monika; Walter, Thomas R.
2010-05-01
The Mw 6.3 May 26, 2006 Yogyakarta earthquake caused severe damage and claimed thousands of lives in the Yogyakarta Special Province and the Klaten District of Central Java Province. The nearby Opak River fault was thought to be the source of this earthquake disaster; however, no significant surface movement was observed along the fault that could confirm it as the source. To investigate the earthquake source and to understand the earthquake mechanism, a rapid response team of the German Task Force for Earthquakes, together with the Seismological Division of Badan Meteorologi Klimatologi dan Geofisika and Gadjah Mada University in Yogyakarta, installed a temporary seismic network of 12 short-period seismometers. More than 3000 aftershocks were recorded during the 3-month campaign. Here we present the results of several hundred processed aftershocks. We used the integrated software package GIANT/PITSA to pick P and S phases manually and HYPO71 to determine the hypocenters. The HypoDD software was then used to relocate the hypocenters and obtain high-precision aftershock locations. Our aftershock distribution shows a system of lineaments in a southwest-northeast direction, about 10 km east of the Opak River fault, at 5-18 km depth. The b-value map from the aftershocks shows that the main lineaments have a relatively low b-value in their middle part, which suggests that this part is still under stress. We also observe several aftershock clusters cutting these lineaments in a nearly perpendicular direction. To verify the interpretation of our aftershock analysis, we will overlay it on surface features delineated from satellite data. We hope our results will contribute significantly to understanding the near-surface fault systems around the Yogyakarta area, in order to mitigate similar earthquake hazards in the future.
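The b-value mapping mentioned above is conventionally based on the Aki/Utsu maximum-likelihood estimator applied to magnitudes above the completeness threshold, evaluated in spatial cells along the aftershock lineaments. A minimal sketch of the per-cell estimate (a standard formula, not the authors' code; the bin-width correction `dm` is an assumption):

```python
import math

def b_value(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= the
    completeness magnitude mc, with the usual half-bin correction dm:
    b = log10(e) / (mean(M) - (mc - dm/2))."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

Mapping this quantity cell by cell is what reveals the relatively low-b (higher-stress) segment in the middle of the lineaments described above.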