Sample records for earthquake generation processes

  1. Earthquake mechanism and seafloor deformation for tsunami generation

    USGS Publications Warehouse

    Geist, Eric L.; Oglesby, David D.; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Tsunamis are generated in the ocean by rapidly displacing the entire water column over a significant area. The potential energy resulting from this disturbance is balanced with the kinetic energy of the waves during propagation. Only a handful of submarine geologic phenomena can generate tsunamis: large-magnitude earthquakes, large landslides, and volcanic processes. Asteroid and subaerial landslide impacts can generate tsunami waves from above the water. Earthquakes are by far the most common generator of tsunamis. Generally, earthquakes greater than magnitude (M) 6.5–7 can generate tsunamis if they occur beneath an ocean and if they result in predominantly vertical displacement. One of the greatest uncertainties in both deterministic and probabilistic hazard assessments of tsunamis is computing seafloor deformation for earthquakes of a given magnitude.

  2. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  3. Simulation of Earthquake-Generated Sea-Surface Deformation

    NASA Astrophysics Data System (ADS)

    Vogl, Chris; Leveque, Randy

    2016-11-01

    Earthquake-generated tsunamis can carry a powerful, destructive force. One of the best-known recent examples is the tsunami generated by the Tohoku earthquake, which was responsible for the nuclear disaster in Fukushima. Tsunami simulation and forecasting, a necessary element of emergency procedure planning and execution, is typically done using the shallow-water equations. A typical initial condition is obtained from the Okada solution for a homogeneous, elastic half-space. This work focuses on simulating earthquake-generated sea-surface deformations that are more true to the physics of the materials involved. In particular, a water layer is added on top of the half-space that models the seabed. Sea-surface deformations are then simulated using the Clawpack hyperbolic PDE package. Results from treating the water layer both as linearly elastic and as "nearly incompressible" are compared to those of the Okada solution.

  4. Strong ground motions generated by earthquakes on creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.; Abrahamson, Norman A.

    2014-01-01

    A tenet of earthquake science is that faults are locked in position until they abruptly slip during the sudden strain-relieving events that are earthquakes. Whereas it is expected that locked faults when they finally do slip will produce noticeable ground shaking, what is uncertain is how the ground shakes during earthquakes on creeping faults. Creeping faults are rare throughout much of the Earth's continental crust, but there is a group of them in the San Andreas fault system. Here we evaluate the strongest ground motions from the largest well-recorded earthquakes on creeping faults. We find that the peak ground motions generated by the creeping fault earthquakes are similar to the peak ground motions generated by earthquakes on locked faults. Our findings imply that buildings near creeping faults need to be designed to withstand the same level of shaking as those constructed near locked faults.

  5. Detailed source process of the 2007 Tocopilla earthquake.

    NASA Astrophysics Data System (ADS)

    Peyrat, S.; Madariaga, R.; Campos, J.; Asch, G.; Favreau, P.; Bernard, P.; Vilotte, J.

    2008-05-01

    We investigated the detailed rupture process of the Tocopilla earthquake (Mw 7.7) of 14 November 2007 and of the main aftershocks that occurred in the southern part of the North Chile seismic gap, using strong motion data. The earthquake happened in the middle of the permanent broadband and strong motion network IPOC, newly installed by GFZ and IPGP, and of a digital strong-motion network operated by the University of Chile. The Tocopilla earthquake is the latest large thrust subduction earthquake to occur in this region since the major 1877 Iquique earthquake, which produced a destructive tsunami. The Arequipa (2001) and Antofagasta (1995) earthquakes had already ruptured the northern and southern parts of the gap, and the intermediate-depth intraplate Tarapaca earthquake (2005) may have changed the tectonic loading of this part of the Peru-Chile subduction zone. For large earthquakes, the depth extent of seismic rupture is bounded by the depth of the seismogenic zone. What controls the horizontal extent of rupture for large earthquakes is less clear. Factors that influence the extent of rupture include fault geometry, variations in material properties, and stress heterogeneities inherited from the previous rupture history. For subduction zones whose structures are not well known, what may have stopped a rupture is not obvious. One crucial problem raised by the Tocopilla earthquake is to understand why this earthquake did not extend farther north and, to the south, what role is played by the Mejillones Peninsula, which seems to act as a barrier. The focal mechanism was determined using teleseismic waveform inversion and a geodetic analysis (cf. Campos et al.; Bejarpi et al., in the same session). We studied the detailed source process using the available strong motion data. This earthquake ruptured the interplate seismic zone over more than 150 km and generated several large aftershocks, mainly located south of the rupture area. The strong-motion data clearly show two S

  6. Duration of Tsunami Generation Longer than Duration of Seismic Wave Generation in the 2011 Mw 9.0 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Korenaga, M.; Kawaji, K.; Akiyama, S.

    2013-12-01

    We compare and evaluate the nature of tsunami generation and seismic wave generation during the 2011 Tohoku-Oki earthquake (hereafter TOH11) in terms of two types of moment rate functions, inferred from finite source imaging of tsunami waveforms and of seismic waveforms. Since the 1970s, the nature of "tsunami earthquakes" has been discussed in many studies (e.g., Kanamori, 1972; Kanamori and Kikuchi, 1993; Kikuchi and Kanamori, 1995; Ide et al., 1993; Satake, 1994), mostly based on analysis of seismic waveform data, in terms of the "slow" nature of tsunami earthquakes (e.g., the 1992 Nicaragua earthquake). Although TOH11 is not necessarily understood as a tsunami earthquake, it is one of the historical earthquakes that simultaneously generated large seismic waves and a large tsunami. It is also one of the earthquakes observed by both the seismic and tsunami observation networks around the Japanese islands. Therefore, for the purpose of analyzing the nature of tsunami generation, we try to utilize tsunami waveform data as much as possible. In our previous studies of TOH11 (Fujihara et al., 2012a, 2012b), we inverted tsunami waveforms at GPS wave gauges of NOWPHAS to image the spatio-temporal slip distribution. The "temporal" nature of our tsunami source model is generally consistent with other tsunami source models (e.g., Satake et al., 2013). For seismic waveform inversion based on a 1-D structure, we inverted broadband seismograms at GSN stations using the teleseismic body-wave inversion scheme (Kikuchi and Kanamori, 2003). For seismic waveform inversion accounting for the inhomogeneous internal structure, we inverted strong motion seismograms at K-NET and KiK-net stations using 3-D Green's functions (Fujihara et al., 2013a, 2013b). The gross "temporal" nature of our seismic source models is generally consistent with other seismic source models (e.g., Yoshida et al

  7. Seismo-Acoustic Generation by Earthquakes and Explosions and Near-Regional Propagation

    DTIC Science & Technology

    2009-09-30

    earthquakes generate infrasound. Three infrasonic arrays in Utah (BGU, EPU, and NOQ), one in Nevada (NVIAR), and one in Wyoming (PDIAR) recorded... Katz, and C. Hayward (2009b). The F-detector Revisited: An Improved Strategy for Signal Detection at Seismic and Infrasound Arrays, Bull. Seism. Soc... sources. RESEARCH ACCOMPLISHED Infrasound Observations of the Wells Earthquake: Most studies documenting earthquake-generated infrasound are based

  8. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
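
    The Poisson behavior mentioned above is easy to demonstrate numerically: for a Poisson process, interevent times follow an exponential distribution, so simulated times between events can be compared directly against the exponential curve. A minimal sketch (the rate, threshold, and sample size are illustrative, not taken from the lab exercise):

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.5                  # hypothetical events per unit time
n_events = 100_000

# For a Poisson process, interevent times are exponentially distributed
# with mean 1/rate (cumulating them yields the event times themselves).
interevent = rng.exponential(1.0 / rate, size=n_events)

# Compare the empirical survival fraction with the exponential prediction
# P(t > T) = exp(-rate * T) at a sample threshold T.
T = 2.0
empirical = np.mean(interevent > T)
theoretical = np.exp(-rate * T)
```

    The same comparison, with earthquake (or blockquake) interevent times in place of the simulated ones, is what an interevent-time histogram exercise tests.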

  9. Sediment gravity flows triggered by remotely generated earthquake waves

    NASA Astrophysics Data System (ADS)

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan L.; Salmi, Marie S.

    2017-06-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011-2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  10. Sediment gravity flows triggered by remotely generated earthquake waves

    USGS Publications Warehouse

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan; Salmi, Marie

    2017-01-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011–2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  11. Earthquake-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Kamogawa, Masashi

    an ionospheric phenomenon attributed to the tsunami, termed a tsunamigenic ionospheric hole (TIH) [Kakinami and Kamogawa et al., GRL, 2012]. After the TEC depression, accompanied by a monoperiodic variation with an approximately 4-minute period corresponding to an acoustic resonance between the ionosphere and the solid earth, the TIH gradually recovered. In addition, geomagnetic pulsations with periods of 150, 180, and 210 seconds were observed on the ground in Japan approximately 5 minutes after the mainshock. Since the variation with the 180-second period was simultaneously detected at the magnetic conjugate points of Japan, namely in Australia, field-aligned currents along the magnetic field line were excited. These field-aligned currents might be excited by E- and F-region dynamo currents caused by acoustic waves originating from the tsunami. This result implies that a large earthquake generates seismogenic field-aligned currents. Furthermore, a monoperiodic geomagnetic oscillation pointing toward the epicenter, whose velocity corresponds to that of Rayleigh waves, occurs; this may be due to a seismogenic arc-current in the E region. Removing such magnetic oscillations from the observed data, a clear tsunami dynamo effect was found. These results imply that a large earthquake generates seismogenic field-aligned currents, a seismogenic arc-current, and a tsunami dynamo current, all of which disturb the geomagnetic field. Thus, we found a complex coupling process between a large earthquake and the ionosphere from the results of the Tohoku earthquake.

  12. Role of H2O in Generating Subduction Zone Earthquakes

    NASA Astrophysics Data System (ADS)

    Hasegawa, A.

    2017-03-01

    A dense nationwide seismic network and high seismic activity in Japan have provided a large volume of high-quality data, enabling high-resolution imaging of the seismic structures defining the Japanese subduction zones. Here, the role of H2O in generating earthquakes in subduction zones is discussed, based mainly on recent seismic studies in Japan using these high-quality data. The locations of intermediate-depth intraslab earthquakes, together with seismic velocity and attenuation structures within the subducted slab, provide evidence that strongly supports the involvement of slab-derived H2O in generating intermediate-depth intraslab earthquakes, although the details leading to earthquake rupture are still poorly understood. Coseismic rotations of the principal stress axes observed after great megathrust earthquakes demonstrate that the plate interface is very weak, which is probably caused by overpressured fluids. Detailed tomographic imaging of the seismic velocity structure in and around plate boundary zones suggests that interplate coupling is affected by local fluid overpressure. Seismic tomography studies also show the presence of inclined sheet-like low-velocity, high-attenuation zones in the mantle wedge. These may correspond to the upwelling portion of subduction-induced secondary convection in the mantle wedge. The upwelling flows reach the arc Moho directly beneath the volcanic areas, suggesting a direct relationship. H2O originally liberated from the subducted slab is transported by this upwelling flow to the arc crust. The H2O that reaches the crust is overpressured above hydrostatic values, weakening the surrounding crustal rocks and decreasing the shear strength of faults, thereby inducing shallow inland earthquakes. These observations suggest that H2O expelled from the subducting slab plays an important role in generating subduction zone earthquakes, both within the subduction zone itself and within the magmatic arc occupying its hanging wall.

  13. Discovering Coseismic Traveling Ionospheric Disturbances Generated by the 2016 Kaikoura Earthquake

    NASA Astrophysics Data System (ADS)

    Li, J. D.; Rude, C. M.; Gowanlock, M.; Pankratius, V.

    2017-12-01

    Geophysical events and hazards, such as earthquakes, tsunamis, and volcanoes, have been shown to generate traveling ionospheric disturbances (TIDs). These disturbances can be measured as Total Electron Content (TEC) fluctuations obtained from a network of multifrequency GPS receivers in the MIT Haystack Observatory Madrigal database. Analyzing the response of the ionosphere to such hazards enhances our understanding of natural phenomena and augments our large-scale monitoring capabilities in conjunction with other ground-based sensors. However, it is currently challenging for human investigators to spot and characterize such signatures, or even to determine whether a geophysical event has actually occurred, because the ionosphere is noisy, with multiple phenomena taking place simultaneously. This work therefore explores a systematic pipeline for the ex-post discovery and characterization of TIDs. Our technique starts by geolocating the event and gathering the corresponding data, then checks for potentially conflicting TID sources, and processes the raw TEC data to generate differential measurements. A Kolmogorov-Smirnov test is applied to evaluate the statistical significance of detected deviations in the differential measurements. We present results from a successful application of this pipeline to the 2016 Mw 7.8 Kaikoura earthquake, which occurred in New Zealand on November 13th. We detect a coseismic TID occurring 8 minutes after the earthquake and propagating toward the equator at 1050 m/s, with a 0.22 TECu peak-to-peak amplitude. Furthermore, the observed waveform exhibits more complex behavior than the N-wave expected for a coseismic TID, which potentially results from the complex multi-fault structure of the earthquake. We acknowledge support from NSF ACI-1442997 (PI Pankratius), NASA AIST NNX15AG84G (PI Pankratius), NSF AGS-1343967 (PI Pankratius), and NSF AGS-1242204 (PI Erickson).
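
    The statistical step of the pipeline described above can be illustrated with synthetic data: a two-sample Kolmogorov-Smirnov test comparing pre-event and post-event windows of differential TEC. All numbers below (sampling interval, noise level, pulse shape, event time) are invented for illustration and do not come from the Madrigal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(0.0, 1800.0, 10.0)        # 10 s sampling over 30 minutes
dtec = rng.normal(0.0, 0.02, t.size)    # background fluctuations (TECu)

# Add a Gaussian pulse standing in for a TID arriving after the event.
dtec += 0.11 * np.exp(-((t - 1300.0) / 250.0) ** 2)

# Split at the (hypothetical) event time and test whether the post-event
# distribution of differential TEC deviates from the pre-event one.
pre, post = dtec[t < 900.0], dtec[t >= 900.0]
stat, p_value = stats.ks_2samp(pre, post)
```

    A small p-value flags a statistically significant deviation; in the real pipeline this test is applied to differential measurements derived from the GPS receiver network rather than to synthetic noise.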

  14. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    NASA Astrophysics Data System (ADS)

    Haddad, David Elias

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that nearly half of Earth's human population lives along active fault zones, a quantitative understanding of the mechanics of earthquakes and faulting is necessary to build accurate earthquake forecasts. My research relies on the quantitative documentation of the geomorphic expression of large earthquakes and the physical processes that control their spatiotemporal distributions. The first part of my research uses high-resolution topographic lidar data to quantitatively document the geomorphic expression of historic and prehistoric large earthquakes. Lidar data allow for enhanced visualization and reconstruction of structures and stratigraphy exposed by paleoseismic trenches. Lidar surveys of fault scarps formed by the 1992 Landers earthquake document the centimeter-scale erosional landforms developed by repeated winter storm-driven erosion. The second part of my research employs a quasi-static numerical earthquake simulator to explore the effects of fault roughness, friction, and structural complexities on earthquake-generated deformation. My experiments show that fault roughness plays a critical role in determining fault-to-fault rupture jumping probabilities. These results corroborate the accepted 3-5 km rupture jumping distance for smooth faults. However, my simulations show that the rupture jumping threshold distance is highly variable for rough faults due to heterogeneous elastic strain energies. Furthermore, fault roughness controls spatiotemporal variations in slip rates such that rough faults exhibit lower slip rates relative to their smooth counterparts. The central implication of these results lies in guiding the

  15. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
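
    The Poisson-gamma mixture representation of the NBD mentioned above is straightforward to check by simulation: drawing a Poisson count whose rate is itself gamma-distributed produces overdispersed counts, with variance above the mean, unlike the pure Poisson case. A sketch with arbitrary illustrative parameters (not fitted to any catalogue):

```python
import numpy as np

rng = np.random.default_rng(1)

# NBD as a mixture: rate ~ Gamma(shape=tau, scale=theta), count ~ Poisson(rate).
tau, theta = 2.0, 5.0
rates = rng.gamma(tau, theta, size=200_000)
counts = rng.poisson(rates)

mean = counts.mean()                 # ≈ tau * theta = 10
var = counts.var()                   # ≈ tau * theta * (1 + theta) = 60 (> mean)

# Moment estimates recover the mixture parameters; the second parameter
# is what characterizes clustering/overdispersion of the process.
theta_hat = var / mean - 1.0         # ≈ theta
tau_hat = mean ** 2 / (var - mean)   # ≈ tau
```

    For a pure Poisson sample the same moment estimate would give theta_hat ≈ 0, which is one simple way to see why a second parameter is needed for clustered catalogues.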

  16. Attenuation of Slab determined from T-wave generation by deep earthquakes

    NASA Astrophysics Data System (ADS)

    Huang, J.; Ni, S.

    2006-05-01

    T-waves are seismically generated acoustic waves that propagate over great distances in the ocean sound channel (SOFAR). Because of the high attenuation in both the upper mantle and the oceanic crust, T-waves are rarely observed for earthquakes deeper than 80 km. However, some earthquakes deeper than 80 km do generate apparent T-waves if the subducted slab is continuous (Okal et al., 1997). We studied deep earthquakes in the Fiji/Tonga region, where the subducted lithosphere is old and thus has low attenuation. After analyzing 33 earthquakes with depths from 10 km to 650 km in Fiji/Tonga, we observed and modeled clear T-phases at station RAR. We used the T-waves generated by deep earthquakes to compute the quality factor of the Fiji/Tonga slab. The method follows equation (1) of de Groot-Hedlin et al. (2001): A = A0 / (1 + (Ω0/Ω)^2) × exp(−LΩ/(Qv)) × Ω^n, where A is the observed amplitude (which depends on the earthquake), A0 is a source amplitude term, Ω0 is a corner frequency related to the earthquake's half duration, L is the length of the ray path that the P or S wave travels in the slab, and v is the P-wave velocity. In this study we fix n = 2, assuming that the T-wave scattering points in the Fiji/Tonga island arc have the same properties as the continental shelf. From this analysis we determined the quality factor of the Fiji/Tonga slab to be around 1000; this result is consistent with results from traditional P- and S-wave data [Roth & Wiens, 1999]. Okal et al. (1997) pointed out that the slab beneath part of central South America is also continuous, by modeling apparent T-waves from the great 1994 Bolivian deep earthquake in relation to channeling of S-wave energy propagating upward through the slab. [1] Catherine D. de Groot-Hedlin, John A. Orcutt, Excitation of T-phases by seafloor scattering, J. Acoust. Soc. Am., 109, 1944-1954, 2001. [2] Erich G. Roth and

  17. Strong Ground Motion Generation during the 2011 Tohoku-Oki Earthquake

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2011-12-01

    Strong ground motions during the 2011 Tohoku-Oki earthquake (Mw 9.0) were densely observed by the strong motion observation networks all over Japan. In the acceleration and velocity waveforms observed at strong motion stations in northeast Japan along the source region, the ground motions are characterized by several wave packets, each with a duration of about twenty seconds. In particular, two wave packets separated by about fifty seconds can be found on the records in the northern part of the damaged area, whereas only one significant wave packet can be recognized on the records in the southern part of the damaged area. The record section shows four isolated wave packets propagating from different locations to the north and south, and it gives us a hint of the strong motion generation process on the source fault, which is related to the heterogeneous rupture process at the scale of tens of kilometers. To resolve this, we assume that each isolated wave packet is contributed by a corresponding strong motion generation area (SMGA), a source patch whose slip velocity is larger than that outside the patch (Miyake et al., 2003). That is, the source model of the 2011 Tohoku-Oki earthquake consists of four SMGAs. The SMGA source model has succeeded in reproducing broadband strong ground motions for past subduction-zone events (e.g., Suzuki and Iwata, 2007). The target frequency range is set to 0.1-10 Hz in this study, as this range is significantly related to seismic damage to typical man-made structures. First, we identified the rupture starting points of each SMGA by picking the onsets of the individual packets. The source fault plane is set following the GCMT solution. The first two SMGAs were located approximately 70 km and 30 km west of the hypocenter. The third and fourth SMGAs were located approximately 160 km and 230 km southwest of the hypocenter. Then, the model parameters (size, rise time, stress drop, rupture velocity, rupture propagation pattern) of these

  18. An interdisciplinary approach to study Pre-Earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan, and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these studies in the search for earthquake precursors, whether for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and from analyses of seismic records (foreshocks/aftershocks) and of geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could advance our understanding of the physics of earthquakes and of the phenomena that precede their energy release. We also discuss the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. The book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical, and historical) involved in this important field of research, bringing this knowledge and awareness to the broader geosciences community.

  19. Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Miyake, H.

    2012-12-01

    The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. To understand the rupture dynamics of this earthquake, it is extremely important to perform wave-based source inversions for various frequency bands. The frequency-dependent characteristics above were derived from teleseismic analyses; this study attempts to infer frequency-dependent rupture processes from strong motion waveforms of K-NET and KiK-net stations. The observations suggest three or more S-wave phases, and ground velocities at several near-source stations show different arrivals of their long- and short-period components. We performed complex source spectral inversions with the frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and stochastic summation of waveforms using empirical Green's functions. Owing to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km long by 180 km wide with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter and generating sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of rupture-front constraints on the inverted solutions, we will discuss the distribution of slip-velocity intensity and the progression of wave generation with increasing frequency.

  20. Monitoring the Earthquake source process in North America

    USGS Publications Warehouse

    Herrmann, Robert B.; Benz, H.; Ammon, C.J.

    2011-01-01

    With the implementation of the USGS National Earthquake Information Center Prompt Assessment of Global Earthquakes for Response (PAGER) system, rapid determination of earthquake moment magnitude is essential, especially for earthquakes that are felt within the contiguous United States. We report an implementation of moment tensor processing for application to broad, seismically active areas of North America. This effort focuses on the selection of regional crustal velocity models, codification of data quality tests, and the development of procedures for rapid computation of the seismic moment tensor. We systematically apply these techniques to earthquakes with reported magnitude greater than 3.5 in continental North America that are not associated with a tectonic plate boundary. Using the 0.02-0.10 Hz passband, we can, with few exceptions, determine moment tensor solutions for earthquakes with Mw as small as 3.7. The threshold is significantly influenced by the density of stations, the location of the earthquake relative to the seismic stations and, of course, the signal-to-noise ratio. With the existing permanent broadband stations in North America operated for rapid earthquake response, the seismic moment tensor of most earthquakes of Mw 4 or larger can be routinely computed. As expected, the nonuniform spatial pattern of these solutions reflects the seismicity pattern. However, the orientation of the maximum compressive stress and the predominant style of faulting are spatially coherent across large regions of the continent.
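    The Mw values discussed above relate to scalar seismic moment through the standard Hanks-Kanamori definition; a minimal sketch of the conversion (the relation itself is standard, the function names are mine):

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Hanks-Kanamori moment magnitude from scalar seismic moment (N*m)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def scalar_moment(mw: float) -> float:
    """Inverse relation: scalar seismic moment (N*m) for a given Mw."""
    return 10.0 ** (1.5 * mw + 9.1)
```

    For example, the Mw 3.7 threshold corresponds to a scalar moment of roughly 4.5 x 10^14 N*m.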

  1. Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.

    2015-12-01

    Large earthquakes show semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in a seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times varying slightly from strict periodicity. Nava et al. (2013) and Quinteros et al. (2013) observed that not all earthquakes in a given region need belong to the same sequence, since more than one process of stress accumulation and release can operate there; they also proposed a method to identify semi-periodic sequences through Fourier analysis. This work presents improvements to that method: accounting for the influence of earthquake size on the spectral analysis and its importance in identifying semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; estimating appropriate upper-limit uncertainties for use in forecasts; and using Bayesian analysis to evaluate forecast performance. The improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
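    The Fourier-analysis step can be sketched as a weighted spectral sum over the labeled point process, with event sizes entering as the labels; this is a simplified illustration of the idea, not the authors' implementation, and the demo catalog and trial-period grid are invented.

```python
import numpy as np

def semiperiodic_spectrum(times, weights, periods):
    """|Sum_i w_i * exp(-2*pi*j*t_i / T)| for each trial period T.
    A peak indicates a semi-periodic subsequence of the labeled point process."""
    t = np.asarray(times, float)[:, None]
    w = np.asarray(weights, float)[:, None]
    phase = np.exp(-2j * np.pi * t / np.asarray(periods, float)[None, :])
    return np.abs(np.sum(w * phase, axis=0))

# Demo: 12 events recurring every ~50 yr with small jitter should peak near T = 50.
rng = np.random.default_rng(1)
times = 50.0 * np.arange(1, 13) + 1.0 * rng.standard_normal(12)
periods = np.linspace(30.0, 100.0, 701)
spec = semiperiodic_spectrum(times, np.ones(12), periods)
best_period = periods[np.argmax(spec)]
```

    In practice the weights would be chosen as a function of magnitude, so larger events dominate the spectral peak.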

  2. Rupture processes of the 2010 Canterbury earthquake and the 2011 Christchurch earthquake inferred from InSAR, strong motion and teleseismic datasets

    NASA Astrophysics Data System (ADS)

    Yun, S.; Koketsu, K.; Aoki, Y.

    2014-12-01

    The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw = 6.3) is the largest aftershock of the 2010 Canterbury earthquake, located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, about 30 km long, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong-motion seismograms, also suggest a very complex rupture process. For these reasons it is useful to investigate the complex rupture process using multiple data with various sensitivities to the rupture process. Whereas previously published source models are based on one or two datasets, here we infer the rupture process with three datasets: InSAR, strong-motion, and teleseismic data. We first performed point source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces, and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip, revealing that the observed large vertical ground motion around the source region is due to the rupture process rather than the local subsurface structure. 
To investigate the effects of the 3-D velocity structure on characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking 3-D velocity

  3. Chapter F. The Loma Prieta, California, Earthquake of October 17, 1989 - Tectonic Processes and Models

    USGS Publications Warehouse

    Simpson, Robert W.

    1994-01-01

    If there is a single theme that unifies the diverse papers in this chapter, it is the attempt to understand the role of the Loma Prieta earthquake in the context of the earthquake 'machine' in northern California: as the latest event in a long history of shocks in the San Francisco Bay region, as an incremental contributor to the regional deformation pattern, and as a possible harbinger of future large earthquakes. One of the surprises generated by the earthquake was the rather large amount of uplift that occurred as a result of the reverse component of slip on the southwest-dipping fault plane. Pre-earthquake conventional wisdom had been that large earthquakes in the region would probably be caused by horizontal, right-lateral, strike-slip motion on vertical fault planes. In retrospect, the high topography of the Santa Cruz Mountains and the elevated marine terraces along the coast should have provided some clues. With the observed ocean retreat and the obvious uplift of the coast near Santa Cruz that accompanied the earthquake, Mother Nature was finally caught in the act. Several investigators quickly saw the connection between the earthquake uplift and the long-term evolution of the Santa Cruz Mountains and realized that important insights were to be gained by attempting to quantify the process of crustal deformation in terms of Loma Prieta-type increments of northward transport and fault-normal shortening.

  4. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is given of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle of a typical large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within a framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy under tectonic loading (phase II) and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in the physical modeling of intermediate-term forecasting, and phase III in the physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing a methodology for forecasting large earthquakes.

  5. Characteristics of strong ground motion generation areas by fully dynamic earthquake cycles

    NASA Astrophysics Data System (ADS)

    Galvez, P.; Somerville, P.; Ampuero, J. P.; Petukhin, A.; Yindi, L.

    2016-12-01

    During recent subduction zone earthquakes (2010 Mw 8.8 Maule and 2011 Mw 9.0 Tohoku), high frequency ground motion radiation has been detected in deep regions of seismogenic zones. By semblance analysis of wave packets, Kurahashi & Irikura (2013) found strong ground motion generation areas (SMGAs) located in the down-dip region of the 2011 Tohoku rupture. To reproduce the rupture sequence of the SMGAs and replicate their rupture times and ground motions, we extended previous work on dynamic rupture simulations with slip reactivation (Galvez et al., 2016). We adjusted stresses on the southernmost SMGAs of the Kurahashi & Irikura (2013) model to reproduce the observed peak ground velocity recorded at seismic stations along Japan for periods up to 5 seconds. To generate higher frequency ground motions, we input the rupture time, final slip, and slip velocity of the dynamic model into the stochastic ground motion generator of Graves & Pitarka (2010). Our results are in agreement with the ground motions recorded at the KiK-net and K-NET stations. While we reproduced the recorded ground motions of the 2011 Tohoku event, it is unknown whether the characteristics and location of SMGAs will persist in future large earthquakes in this region. Although the SMGAs have large peak slip velocities, the areas of largest final slip are located elsewhere. To elucidate whether this anti-correlation persists in time, we conducted earthquake cycle simulations and analysed the spatial correlation of peak slip velocities, stress drops, and final slip of main events. We also investigated whether or not the SMGAs migrate to other regions of the seismogenic zone. To perform this study, we coupled the quasi-dynamic boundary element solver QDYN (Luo & Ampuero, 2015) and the dynamic spectral element solver SPECFEM3D (Galvez et al., 2014; 2016). The workflow alternates between inter-seismic periods solved with QDYN and coseismic periods solved with SPECFEM3D, with an automated switch based on slip rate

  6. Volcanotectonic earthquakes induced by propagating dikes

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2016-04-01

    Volcanotectonic earthquakes are of high frequency and mostly generated by slip on faults. During chamber expansion or contraction, earthquakes are distributed throughout the chamber roof. Following magma-chamber rupture and dike injection, however, earthquakes tend to concentrate around the dike and follow its propagation path, resulting in an earthquake swarm characterised by many earthquakes of similar magnitudes. I distinguish between two basic processes by which propagating dikes induce earthquakes. One is due to stress concentration in the process zone at the tip of the dike; the other relates to stresses induced in the walls and surrounding rocks on either side of the dike. As to the first process, some earthquakes generated at the dike tip are related to pure extension fracturing as the tip advances and the dike path forms. Formation of pure extension fractures normally induces non-double-couple earthquakes. There is also shear fracturing in the process zone, however, particularly normal faulting, which produces double-couple earthquakes. The second process relates primarily to slip on existing fractures in the host rock induced by the driving pressure of the propagating dike. Such pressures easily reach 5-20 MPa and induce compressive and shear stresses in the adjacent host rock, which already contains numerous fractures (mainly joints) of different attitudes. In piles of lava flows or sedimentary beds the original joints are primarily vertical and horizontal, and the contacts between the layers or beds are originally horizontal. As the layers and beds become buried, the joints and contacts gradually tilt and become oblique to the horizontal compressive stress induced by the driving pressure of the (vertical) dike. Also, most of the hexagonal (or pentagonal) columnar joints in the lava flows are, from the beginning, oblique to an intrusive sheet of any attitude. 
Consequently, the joints and contacts function as potential shear

  7. Effect of Sediments on Rupture Dynamics of Shallow Subduction Zone Earthquakes and Tsunami Generation

    NASA Astrophysics Data System (ADS)

    Ma, S.

    2011-12-01

    Low-velocity fault zones have long been recognized for crustal earthquakes by using fault-zone trapped waves and geodetic observations on land. However, the most pronounced low-velocity fault zones are probably in subduction zones, where sediments on the seafloor are continuously subducted. In this study I focus on shallow subduction zone earthquakes; these earthquakes pose a serious threat to human society through their ability to generate large tsunamis. Numerous observations indicate that these earthquakes have unusually long rupture durations, low rupture velocities, and/or small stress drops near the trench, but the underlying physics is unclear. I will use dynamic rupture simulations with a finite-element method to investigate the dynamic stress evolution on faults induced by both sediments and the free surface, and its relation to rupture velocity and slip. I will also explore the effect of off-fault yielding of sediments on the rupture characteristics and seafloor deformation. As shown in Ma and Beroza (2008), the more compliant hanging wall combined with the free surface greatly increases the strength drop and slip near the trench. Sediments in the subduction zone likely play a significant role in the rupture dynamics of shallow subduction zone earthquakes and tsunami generation.

  8. Differences in tsunami generation between the December 26, 2004 and March 28, 2005 Sumatra earthquakes

    USGS Publications Warehouse

    Geist, E.L.; Bilek, S.L.; Arcas, D.; Titov, V.V.

    2006-01-01

    Source parameters affecting tsunami generation and propagation for the Mw > 9.0 December 26, 2004 and the Mw = 8.6 March 28, 2005 earthquakes are examined to explain the dramatic difference in tsunami observations. We evaluate both scalar measures (seismic moment, maximum slip, potential energy) and finite-source representations (distributed slip and far-field beaming from finite source dimensions) of tsunami generation potential. There is significant variability in local tsunami runup with respect to the most readily available measure, seismic moment. The local tsunami intensity for the December 2004 earthquake is similar to that of other tsunamigenic earthquakes of comparable magnitude. In contrast, the March 2005 local tsunami was deficient relative to its earthquake magnitude. Tsunami potential energy calculations more accurately reflect the difference in tsunami severity, although these calculations depend on knowledge of the slip distribution and are therefore difficult to implement in a real-time system. A significant factor affecting tsunami generation that is unaccounted for in these scalar measures is the location of regions of seafloor displacement relative to the overlying water depth. The deficiency of the March 2005 tsunami seems to be related to the concentration of slip in the down-dip part of the rupture zone and the fact that a substantial portion of the vertical displacement field occurred in shallow water or on land. The comparison of the December 2004 and March 2005 Sumatra earthquakes presented in this study is analogous to previous studies comparing the 1952 and 2003 Tokachi-Oki earthquakes and tsunamis, in terms of the effect slip distribution has on local tsunamis. Results from these studies indicate the difficulty of rapidly assessing local tsunami runup from magnitude and epicentral location information alone.
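    The tsunami potential energy referred to above is commonly computed from the initial sea-surface displacement field as E = (rho*g/2) times the integral of eta^2 over area; a minimal sketch under that assumption (the grid and displacement values are illustrative, not from the study):

```python
import numpy as np

RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def tsunami_potential_energy(eta, dx, dy):
    """E = (rho*g/2) * integral(eta^2 dA), approximated on a uniform grid.
    eta: initial sea-surface displacement in metres; dx, dy: cell size in metres."""
    return 0.5 * RHO_SEAWATER * G * np.sum(np.asarray(eta, float) ** 2) * dx * dy

# Illustrative: a uniform 1 m uplift over a 100 km x 50 km patch on a 1 km grid.
eta = np.ones((100, 50))
energy_joules = tsunami_potential_energy(eta, 1000.0, 1000.0)
```

    This is why the measure requires a slip (and hence displacement) distribution: the same seismic moment can yield very different eta fields and energies.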

  9. Insights in Low Frequency Earthquake Source Processes from Observations of Their Size-Duration Scaling

    NASA Astrophysics Data System (ADS)

    Farge, G.; Shapiro, N.; Frank, W.; Mercury, N.; Vilotte, J. P.

    2017-12-01

    Low frequency earthquakes (LFE) are detected in association with volcanic and tectonic tremor signals as impulsive, repeated, low frequency (1-5 Hz) events originating from localized sources. While the mechanism causing this depletion of the high-frequency content of their signal is still unknown, this feature may indicate that the source processes at the origin of LFE differ from those of regular earthquakes. Tectonic LFE are often associated with slip instabilities in the brittle-ductile transition zones of active faults, and volcanic LFE with fluid transport in magmatic and hydrothermal systems. Key constraints on the LFE-generating physical mechanisms can be obtained by establishing scaling laws between their sizes and durations. We apply a simple spectral analysis method to the S-waveforms of each LFE to retrieve its seismic moment and corner frequency; the former characterizes the earthquake's size while the latter is inversely proportional to its duration. First, we analyze a selection of tectonic LFE from the Mexican "Sweet Spot" (Guerrero, Mexico). We find characteristic values of M ≈ 10^13 N·m (Mw ≈ 2.6) and fc ≈ 2 Hz. The moment-corner frequency distribution, compared to values reported in previous studies in tectonic contexts, is consistent with the scaling law suggested by Bostock et al. (2015): fc ∝ M^(-1/10). We then apply the same source-parameter determination method to deep volcanic LFE detected in the Klyuchevskoy volcanic group in Kamchatka, Russia. While the seismic moments for these earthquakes are slightly smaller, they still approximately follow the fc ∝ M^(-1/10) scaling. This size-duration scaling observed for LFE is very different from the one established for regular earthquakes (fc ∝ M^(-1/3)) and from the scaling more recently suggested by Ide et al. (2007) for the broad class of "slow earthquakes". The scaling observed for LFE suggests that they are generated by sources of nearly constant size with strongly varying intensities.
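    The two scaling laws can be compared on a catalog by fitting the slope of log10(fc) against log10(M); a minimal sketch (the synthetic population is invented to follow the LFE scaling):

```python
import numpy as np

def scaling_exponent(moments, corner_freqs):
    """Least-squares slope of log10(fc) versus log10(M0).
    Regular earthquakes give roughly -1/3; the LFE scaling of
    Bostock et al. (2015) corresponds to roughly -1/10."""
    slope, _intercept = np.polyfit(np.log10(moments), np.log10(corner_freqs), 1)
    return slope

# Synthetic noiseless population following fc ~ M0^(-1/10),
# anchored at M0 = 1e13 N*m, fc = 2 Hz.
m0 = np.logspace(11.0, 15.0, 50)
fc = 2.0 * (m0 / 1.0e13) ** (-0.1)
exponent = scaling_exponent(m0, fc)
```

    With real catalogs the scatter is large, so the fitted exponent mainly discriminates between the -1/10 and -1/3 regimes rather than pinning a precise value.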

  10. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that the potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating an M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicates that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  11. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation across the entire globe using a spectral-element method. To ensure the accuracy and stability of the inversion, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to be read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all candidate misfit-calculation windows into usable and unusable ones, based on ML models such as neural networks, support vector machines, or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
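    A baseline version of the window classification could rest on waveform-similarity features such as normalized cross-correlation and amplitude ratio; this sketch is a hand-tuned stand-in for the learned classifier described above, and the feature set and thresholds are illustrative assumptions.

```python
import numpy as np

def window_features(obs, syn):
    """Features commonly used to accept or reject a misfit window:
    normalized cross-correlation and log amplitude ratio."""
    obs = np.asarray(obs, float)
    syn = np.asarray(syn, float)
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    dlna = np.log(np.linalg.norm(obs) / np.linalg.norm(syn))
    return cc, dlna

def window_usable(obs, syn, cc_min=0.8, dlna_max=1.0):
    """Stand-in for the learned classifier: accept the window when the
    observed and synthetic waveforms are similar in shape and amplitude."""
    cc, dlna = window_features(obs, syn)
    return cc >= cc_min and abs(dlna) <= dlna_max

t = np.linspace(0.0, 10.0, 500)
good = window_usable(np.sin(t), 0.9 * np.sin(t))   # similar waveforms: usable
bad = window_usable(np.sin(t), np.cos(3.0 * t))    # dissimilar waveforms: rejected
```

    An ML model would replace the fixed thresholds with a decision boundary learned from analyst-labeled windows over these (and more) features.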

  12. Rupture, waves and earthquakes.

    PubMed

    Uenishi, Koji

    2017-01-01

    Normally, an earthquake is considered a phenomenon of wave-energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, is not yet fully understood. Instead, much former investigation in seismology evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. Countless valuable research outcomes have been obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance during the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics would therefore be indispensable, namely of the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures, and (3) the wave interaction that connects rupture (1) and failures (2).


  14. Numerical simulation of faulting in the Sunda Trench shows that seamounts may generate megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Jiao, L.; Chan, C. H.; Tapponnier, P.

    2017-12-01

    The role of seamounts in generating earthquakes has been debated: some studies suggest that seamounts could be truncated to generate megathrust events, while others indicate that subducting seamounts could cause segmentation and reduce the maximum size of megathrust earthquakes. The debate is highly relevant to the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake is likely to occur within decades. To model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can be a barrier to rupture propagation, resulting in several smaller (M ≤ 8.0) events. If, however, stiffness is high, a seamount can cause a megathrust earthquake (M8 class). In addition, we show that a splay fault in the subduction environment could develop only when a seamount is present, and a larger offset along the splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activity, but also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.

  15. Sibling earthquakes generated within a persistent rupture barrier on the Sunda megathrust under Simeulue Island

    NASA Astrophysics Data System (ADS)

    Morgan, Paul M.; Feng, Lujia; Meltzner, Aron J.; Lindsey, Eric O.; Tsang, Louisa L. H.; Hill, Emma M.

    2017-03-01

    A section of the Sunda megathrust underneath Simeulue is known to persistently halt rupture propagation of great earthquakes, including those in 2004 (Mw 9.2) and 2005 (Mw 8.6). Yet the same section generated large earthquakes in 2002 (Mw 7.3) and 2008 (Mw 7.4). To date, few studies have investigated the 2002 and 2008 events, and none have satisfactorily located or explained them. Using near-field InSAR, GPS, and coral geodetic data, we find that the slip distributions of the two events are not identical but do show a close resemblance and largely overlap. We thus consider these earthquakes "siblings" that were generated by an anomalous "parent" feature of the megathrust. We suggest that this parent feature is a locked asperity surrounded by the otherwise partially creeping Simeulue section, perhaps structurally controlled by a broad morphological high on the megathrust.

  16. Temporal and spatial heterogeneity of rupture process application in shakemaps of Yushu Ms7.1 earthquake, China

    NASA Astrophysics Data System (ADS)

    Kun, C.

    2015-12-01

    Studies have shown that ground motion parameters estimated from ground-motion attenuation relationships are often greater than observed values, mainly because the multiple ruptures of a large earthquake reduce the pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to impose constraints from the source to improve the accuracy of ShakeMaps. The causative fault of the Yushu Ms 7.1 earthquake is approximately vertical (dip 83°), and the source process was distinctly dispersive in time and space. The mainshock of the Yushu Ms 7.1 earthquake can be divided into several sub-events based on its source process. The magnitude of each sub-event was derived from the area under its pulse in the source time function, and its location from the source process. We used the ShakeMap method, with site effects taken into account, to generate a ShakeMap for each sub-event. Finally, ShakeMaps of the mainshock were obtained by spatial superposition of the sub-event ShakeMaps. For comparison, ShakeMaps for the mainshock treated as a single event with one magnitude were also derived from the surface rupture of the causative fault mapped in the field survey. We compared the ShakeMaps from both methods with the investigated intensities. The comparisons show that the mainshock decomposition method more accurately reflects the shaking in the near field; in the far field, however, where the shaking is controlled by the weakening influence of the source, the estimated intensity VI area was smaller than that of the actual investigation. Seismic intensity in the far field may be related to the increased shaking duration of the two events. In general, the mainshock decomposition method based on the source process, with a ShakeMap for each sub-event, is feasible for disaster emergency response, decision-making, and rapid disaster assessment after an earthquake.
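    The abstract does not state the superposition rule used to combine the sub-event maps; one plausible choice is a per-cell maximum over the sub-event ShakeMaps, sketched here with invented grids:

```python
import numpy as np

def combine_subevent_maps(maps):
    """Compose a mainshock ShakeMap by taking, at every grid cell, the
    largest ground-motion value predicted by any sub-event."""
    return np.maximum.reduce([np.asarray(m, float) for m in maps])

# Two invented 2x2 sub-event ground-motion grids (e.g. PGA in g).
sub_a = np.array([[0.1, 0.4], [0.2, 0.1]])
sub_b = np.array([[0.3, 0.2], [0.1, 0.5]])
combined = combine_subevent_maps([sub_a, sub_b])
```

    Other composition rules (e.g. summing spectral intensities or durations) are possible; the per-cell maximum simply preserves the strongest shaking any sub-event produces at each site.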

  17. Identification of earthquakes that generate tsunamis in Java and Nusa Tenggara using rupture duration analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribadi, S., E-mail: sugengpribadimsc@gmail.com; Puspito, N. T.; Yudistira, T.

    Java and Nusa Tenggara are the tectonically active parts of the Sunda arc. This study discusses rupture duration as a manifestation of the tsunami-generating power of an earthquake. We use teleseismic (30° - 90°) body waves with high-frequency energy, recorded by 206 broadband seismometers of the IRIS network and filtered with a Butterworth bandpass (1 - 2 Hz). Arrival and travel times start from the P - PP wave phases, based on the Jeffreys-Bullen tables with the TauP program. The results show that the June 2, 1994 Banyuwangi and the July 17, 2006 Pangandaran earthquakes are identified as tsunami earthquakes, with long rupture durations (To > 100 s), medium magnitudes (7.6 < Mw < 7.9), and locations near the trench. The other events are 4 tsunamigenic earthquakes and 3 inland earthquakes with shorter rupture durations, starting from To > 50 s depending on magnitude; those events are located far from the trench.
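
The duration measurement used above (how long high-frequency P-wave energy stays above a threshold) can be sketched with a crude envelope. This is a simplified stand-in for the authors' processing: it assumes the trace has already been bandpass filtered at 1-2 Hz, and uses a moving-average power envelope with an arbitrary 10% threshold.

```python
import math

def rupture_duration(trace, dt, frac=0.1):
    """Estimate rupture duration from a high-frequency P-wave trace.

    Duration is the time the smoothed squared amplitude (a crude
    envelope) stays above `frac` of its peak.
    trace : samples of a 1-2 Hz bandpass-filtered seismogram
    dt    : sample interval in seconds
    """
    n = len(trace)
    power = [x * x for x in trace]
    # smooth with a 5 s moving average
    w = max(1, int(5.0 / dt))
    env = []
    run = sum(power[:w])
    env.append(run / w)
    for i in range(w, n):
        run += power[i] - power[i - w]
        env.append(run / w)
    peak = max(env)
    above = [i for i, e in enumerate(env) if e >= frac * peak]
    return (above[-1] - above[0]) * dt

# A synthetic "tsunami earthquake" trace: 120 s of sustained radiation
dt = 0.5
sig = [math.sin(2 * math.pi * 1.5 * t * dt) if t * dt < 120 else 0.0
       for t in range(int(300 / dt))]
print(rupture_duration(sig, dt) > 100)  # long duration, To > 100 s
```

An event radiating high-frequency energy for well over 100 s, at only medium magnitude and near the trench, would flag as a candidate tsunami earthquake under the criterion in the abstract.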

  18. The Physics of Earthquakes: In the Quest for a Unified Theory (or Model) That Quantitatively Describes the Entire Process of an Earthquake Rupture, From its Nucleation to the Dynamic Regime and to its Arrest

    NASA Astrophysics Data System (ADS)

    Ohnaka, M.

    2004-12-01

    For the past four decades, great progress has been made in understanding earthquake source processes. In particular, recent progress in the physics of earthquakes has contributed substantially to unraveling the earthquake generation process in quantitative terms. Yet a fundamental problem remains unresolved. The constitutive law that governs the behavior of earthquake ruptures is the basis of earthquake physics, and it plays a fundamental role in accounting for the entire process of an earthquake rupture, from its nucleation through dynamic propagation to its arrest, quantitatively, in a unified and consistent manner. Without a rational constitutive law, the physics of earthquakes cannot be a quantitative science in a true sense, so establishing one is urgent. However, it has been controversial for the past two decades, and remains so, what the constitutive law for earthquake ruptures ought to be and how it should be formulated. Resolving this controversy is a necessary step towards a more complete, unified theory of earthquake physics, and the time is now ripe to do so. Because of its fundamental importance, we must discuss thoroughly and rigorously what the constitutive law ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid evidence. There are prerequisites for the constitutive formulation. The brittle, seismogenic layer and individual faults therein are characterized by inhomogeneity, which has profound implications for earthquake ruptures. In addition, rupture phenomena, including earthquakes, are inherently scale dependent; indeed, some of the physical quantities inherent in rupture exhibit scale dependence. To treat scale-dependent physical quantities inherent in the rupture over a broad scale range quantitatively in a unified and consistent manner, it is critical to

  19. Physical bases of the generation of short-term earthquake precursors: A complex model of ionization-induced geophysical processes in the lithosphere-atmosphere-ionosphere-magnetosphere system

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.

    2015-07-01

    This paper describes the current understanding of the interaction between geospheres through a complex set of physical and chemical processes under the influence of ionization. The sources of ionization include the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents at nuclear power plants and radioactive waste storage sites, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.

  20. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.
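
The loss estimation described above (population exposed at each shaking level combined with country-calibrated loss models) can be sketched with the kind of lognormal fatality-rate function used in empirical loss modeling. This is a hedged illustration, not PAGER's implementation: the parameter values and exposure numbers below are invented for the example, not actual PAGER calibrations.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_fatalities(exposure, theta, beta):
    """Empirical PAGER-style loss estimate.

    exposure    : dict mapping shaking intensity (MMI) to population exposed
    theta, beta : parameters of the lognormal fatality rate
                  nu(S) = Phi(ln(S/theta) / beta); the values used below
                  are illustrative, not calibrated to any country.
    """
    total = 0.0
    for mmi, pop in exposure.items():
        rate = norm_cdf(math.log(mmi / theta) / beta)
        total += pop * rate
    return total

# Hypothetical exposure: population per intensity level
exposure = {6: 500000, 7: 120000, 8: 30000, 9: 4000}
print(round(expected_fatalities(exposure, theta=14.05, beta=0.17)))
```

Summing population times a sharply intensity-dependent fatality rate is what lets an alert be driven by estimated losses rather than magnitude alone.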

  1. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point of the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
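
The Weibull tail behavior reported for interevent times can be checked with a simple linearized fit. The sketch below assumes ideal Weibull data; the midpoint plotting positions and least-squares approach are simplifications for illustration, not the authors' method.

```python
import math, random

def weibull_fit(times):
    """Fit a Weibull distribution F(t) = 1 - exp(-(t/eta)^k) to
    interevent times by least squares on the linearized form
    ln(-ln(1 - F)) = k*ln(t) - k*ln(eta)."""
    xs = sorted(times)
    n = len(xs)
    X, Y = [], []
    for i, t in enumerate(xs):
        F = (i + 0.5) / n          # empirical CDF (midpoint rule)
        X.append(math.log(t))
        Y.append(math.log(-math.log(1.0 - F)))
    mx = sum(X) / n
    my = sum(Y) / n
    k = sum((x - mx) * (y - my) for x, y in zip(X, Y)) \
        / sum((x - mx) ** 2 for x in X)
    eta = math.exp(mx - my / k)
    return k, eta

# Recover parameters from synthetic Weibull interevent times
random.seed(0)
true_k, true_eta = 0.8, 2.0
sample = [true_eta * (-math.log(random.random())) ** (1 / true_k)
          for _ in range(5000)]
k, eta = weibull_fit(sample)
print(round(k, 2), round(eta, 2))
```

A shape parameter k < 1 corresponds to the heavy, intermittent tail that distinguishes clustered earthquake occurrence from a Poisson process (k = 1).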

  2. Pre-earthquake magnetic pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J.; Freund, F.

    2015-08-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.

  3. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, it must adopt an uncertainty propagation method that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loève (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations.
We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
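
Drawing correlated slip samples from a prescribed covariance, as in the methodology above, can be illustrated compactly. As a simplified stand-in for the K-L eigen-expansion, the sketch below draws Gaussian samples with the same covariance via a Cholesky factor (both constructions reproduce the target covariance exactly); the subfault spacing, correlation length, and mean slip are illustrative values, not those of the study.

```python
import math, random

def exp_cov(n, dx, corr_len, sigma):
    """Exponential covariance between n subfaults spaced dx km apart."""
    return [[sigma ** 2 * math.exp(-abs(i - j) * dx / corr_len)
             for j in range(n)] for i in range(n)]

def cholesky(A):
    """Lower-triangular L with L @ L.T == A (standard algorithm)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def slip_sample(mean_slip, L, rng):
    """One correlated slip sample: mean + L @ z with z ~ N(0, I)."""
    n = len(L)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [mean_slip + sum(L[i][k] * z[k] for k in range(i + 1))
            for i in range(n)]

rng = random.Random(1)
C = exp_cov(n=20, dx=5.0, corr_len=25.0, sigma=1.0)  # illustrative values
L = cholesky(C)
sample = slip_sample(5.0, L, rng)  # e.g. a 5 m mean slip
print(len(sample))  # → 20
```

A translation process, as cited in the abstract, would then map each Gaussian marginal through a target (non-Gaussian) marginal distribution.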

  4. Frictional melt generated by the 2008 Mw 7.9 Wenchuan earthquake and its faulting mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, H.; Li, H.; Si, J.; Sun, Z.; Zhang, L.; He, X.

    2017-12-01

    Fault-related pseudotachylytes are considered fossil earthquakes, conveying significant information that provides improved insight into fault behavior and mechanical properties. The WFSD project was carried out right after the 2008 Wenchuan earthquake, and detailed research was conducted on the drilling cores. A 2 mm thick, rigid black layer with fresh slickenlines was observed at 732.6 m in the WFSD-1 cores drilled through the southern Yingxiu-Beichuan fault (YBF). Evidence from optical microscopy, FESEM, and FIB-TEM shows that it is frictional melt (pseudotachylyte). In the northern part of the YBF, 4 mm of fresh melt with similar structures was found at 1084 m in the WFSD-4S cores. The melts contain numerous microcracks. Considering (1) the highly unstable nature of frictional melt (easily altered or devitrified) under geological conditions, (2) the unfilled microcracks, (3) the fresh slickenlines, and (4) the recent large earthquake in this area, we believe that the 2-4 mm melt was produced by the 2008 Wenchuan earthquake. This is the first report of fresh pseudotachylyte with slickenlines in a natural fault generated by a modern earthquake. Geochemical analyses show that the fault rocks at 732.6 m are enriched in CaO, Fe2O3, FeO, H2O+ and LOI, and depleted in SiO2. XRF results show that Ca and Fe are clearly enriched in the 2.5 cm fine-grained fault rocks and Ba is enriched in the slip surface. The melt has a higher magnetic susceptibility, which may be due to neoformed magnetite and metallic iron formed in the frictional melt. Frictional melt visible in both the southern and northern parts of the YBF reveals that melt lubrication played a major role in the Wenchuan earthquake. Instead of vesicles and microlites, the melt contains numerous randomly oriented microcracks, exhibiting a quenching texture. The quenching texture suggests the frictional melt was generated under rapid heat-dissipation conditions, implying vigorous fluid circulation during the earthquake. We surmise that during

  5. Infrasonic waves in the ionosphere generated by a weak earthquake

    NASA Astrophysics Data System (ADS)

    Krasnov, V. M.; Drobzheva, Ya. V.; Chum, J.

    2011-08-01

    A computer code has been developed to simulate the generation of infrasonic waves (frequencies considered ≤80 Hz) by a weak earthquake (magnitude ˜3.6), their propagation through the atmosphere and their effects in the ionosphere. We provide estimates of the perturbations in the ionosphere at the height (˜160 km) where waves at the sounding frequency (3.59 MHz) of a continuous Doppler radar reflect. We have found that the pressure perturbation is 5.79×10-7 Pa (0.26% of the ambient value), the temperature perturbation is 0.088 K (0.015% of the ambient value) and the electron density perturbation is 2×108 m-3 (0.12% of the ambient value). The characteristic perturbation is found to be a bipolar pulse lasting ˜25 s, and the maximum Doppler shift is found to be ˜0.08 Hz, which is too small to be detected by the Doppler radar at the time of the earthquake.

  6. Nonlinear ionospheric responses to large-amplitude infrasonic-acoustic waves generated by undersea earthquakes

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-02-01

    Numerical models of ionospheric coupling with the neutral atmosphere are used to investigate perturbations of plasma density, vertically integrated total electron content (TEC), neutral velocity, and neutral temperature associated with large-amplitude acoustic waves generated by the initial ocean surface displacements from strong undersea earthquakes. A simplified source model for the 2011 Tohoku earthquake is constructed from estimates of initial ocean surface responses to approximate the vertical motions over realistic spatial and temporal scales. Resulting TEC perturbations from modeling case studies appear consistent with observational data, reproducing pronounced TEC depletions which are shown to be a consequence of the impacts of nonlinear, dissipating acoustic waves. Thermospheric acoustic compressional velocities are ˜±250-300 m/s, superposed with downward flows of similar amplitudes, and temperature perturbations are ˜300 K, while the dominant wave periodicity in the thermosphere is ˜3-4 min. Results capture acoustic wave processes including reflection, onset of resonance, and nonlinear steepening and dissipation, ultimately leading to the formation of ionospheric TEC depletion "holes", consistent with reported observations. Three additional simulations illustrate the dependence of atmospheric acoustic wave and subsequent ionospheric responses on the surface displacement amplitude, which is varied from the Tohoku case study by factors of 1/100, 1/10, and 2. Collectively, results suggest that TEC depletions may only accompany very-large-amplitude thermospheric acoustic waves necessary to induce a nonlinear response, here with saturated compressional velocities ˜200-250 m/s generated by sea surface displacements exceeding ˜1 m occurring over a 3 min time period.

  7. Southern California Earthquake Center/Undergraduate Studies in Earthquake Information Technology (SCEC/UseIT): Towards the Next Generation of Internship

    NASA Astrophysics Data System (ADS)

    Perry, S.; Benthien, M.; Jordan, T. H.

    2005-12-01

    The SCEC/UseIT internship program is training the next generation of earthquake scientists, with methods that can be adapted to other disciplines. UseIT interns work collaboratively, in multi-disciplinary teams, conducting computer science research that is needed by earthquake scientists. Since 2002, the UseIT program has welcomed 64 students, in some two dozen majors, at all class levels, from schools around the nation. Each summer's work is posed as a "Grand Challenge." The students then organize themselves into project teams, decide how to proceed, and pool their diverse talents and backgrounds. They have traditional mentors, who provide advice and encouragement, but they also mentor one another, and this has proved to be a powerful relationship. Most begin with fear that their Grand Challenge is impossible, and end with excitement and pride about what they have accomplished. The 22 UseIT interns in summer 2005 were primarily computer science and engineering majors, with others in geology, mathematics, English, digital media design, physics, history, and cinema. The 2005 Grand Challenge was to "build an earthquake monitoring system" to aid scientists who must visualize rapidly evolving earthquake sequences and convey information to emergency personnel and the public. Most UseIT interns were engaged in software engineering, bringing new datasets and functionality to SCEC-VDO (Virtual Display of Objects), a 3D visualization software package that was prototyped by interns the previous year, using Java3D and an extensible, plug-in architecture based on the Eclipse Integrated Development Environment. Other UseIT interns used SCEC-VDO to make animated movies, and experimented with imagery in order to communicate concepts and events in earthquake science. One movie-making project included the creation of an assessment to test the effectiveness of the movie's educational message. Finally, one intern created an interactive, multimedia presentation of the UseIT program.

  8. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming, and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use a Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular, some artifacts that are caused by the sliding window. Based on our tests, we find that CS, which builds on the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation; meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. We therefore propose a new method, which combines time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images.
Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake
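
Conventional frequency-domain beamforming, the baseline method in the comparison above, can be sketched in a few lines. This assumes a plane-wave model and one particular spectral sign convention; the station geometry and slowness values are invented for illustration.

```python
import cmath
import math

def beam_power(slowness, freq, station_xy, spectra):
    """Conventional (Bartlett) frequency-domain beam power for one
    trial horizontal slowness vector (sx, sy) in s/km.

    station_xy : list of (x, y) station coordinates in km
    spectra    : complex spectral value of each station's record at
                 `freq` (Hz), e.g. one FFT bin
    """
    sx, sy = slowness
    beam = 0.0 + 0.0j
    for (x, y), d in zip(station_xy, spectra):
        # phase shift that aligns a plane wave with this trial slowness
        delay = sx * x + sy * y
        beam += d * cmath.exp(2j * math.pi * freq * delay)
    return abs(beam) ** 2 / len(spectra) ** 2

# Synthetic plane wave crossing a 5-station array with slowness (0.1, 0)
stations = [(0, 0), (10, 0), (0, 10), (-10, 5), (5, -10)]
f, true_s = 1.0, (0.1, 0.0)
spec = [cmath.exp(-2j * math.pi * f * (true_s[0] * x + true_s[1] * y))
        for x, y in stations]
# the beam peaks at the true slowness
print(round(beam_power(true_s, f, stations, spec), 2))  # → 1.0
```

MUSIC and MVDR replace this simple quadratic form with ones built from the eigenstructure or inverse of the cross-spectral matrix, which is where their higher resolution comes from.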

  9. Thermal Radiation Anomalies Associated with Major Earthquakes

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Kafatos, Menas C.; Taylor, Patrick

    2017-01-01

    Recent developments in remote sensing methods for Earth satellite data analysis contribute to our understanding of earthquake-related thermal anomalies. It was realized that the thermal heat flux over areas of earthquake preparation is a result of air ionization by radon (and other gases) and consequent water vapor condensation on newly formed ions. Latent heat (LH) is released as a result of this process and leads to the formation of local thermal radiation anomalies (TRA) known as OLR (Outgoing Longwave Radiation; Ouzounov et al., 2007). We compare the LH energy, obtained by integrating the surface latent heat flux (SLHF) over area and time, with the released energies associated with these events. Extended studies of the TRA using data from the most recent major earthquakes allowed the main morphological features to be established. It was also established that the TRA are part of a more complex chain of short-term pre-earthquake generation processes, which is explained within the framework of lithosphere-atmosphere coupling processes.

  10. A rare moderate‐sized (Mw 4.9) earthquake in Kansas: Rupture process of the Milan, Kansas, earthquake of 12 November 2014 and its relationship to fluid injection

    USGS Publications Warehouse

    Choy, George; Rubinstein, Justin L.; Yeck, William; McNamara, Daniel E.; Mueller, Charles; Boyd, Oliver

    2016-01-01

    The largest recorded earthquake in Kansas occurred northeast of Milan on 12 November 2014 (Mw 4.9) in a region previously devoid of significant seismic activity. Applying multistation processing to data from local stations, we are able to detail the rupture process and rupture geometry of the mainshock, identify the causative fault plane, and delineate the expansion and extent of the subsequent seismic activity. The earthquake followed rapid increases of fluid injection by multiple wastewater injection wells in the vicinity of the fault. The source parameters and behavior of the Milan earthquake and foreshock–aftershock sequence are similar to characteristics of other earthquakes induced by wastewater injection into permeable formations overlying crystalline basement. This earthquake also provides an opportunity to test the empirical relation that uses felt area to estimate moment magnitude for historical earthquakes for Kansas.

  11. Using SW4 for 3D Simulations of Earthquake Strong Ground Motions: Application to Near-Field Strong Motion, Building Response, Basin Edge Generated Waves and Earthquakes in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Rodgers, A. J.; Pitarka, A.; Petersson, N. A.; Sjogreen, B.; McCallen, D.; Miah, M.

    2016-12-01

    Simulation of earthquake ground motions is becoming more widely used due to improvements in numerical methods, development of ever more efficient computer programs (codes), and growth in and access to High-Performance Computing (HPC). We report on how SW4 can be used for accurate and efficient simulations of earthquake strong motions. SW4 is an anelastic finite difference code based on a fourth-order summation-by-parts displacement formulation. It is parallelized and can run on one or many processors. SW4 has many desirable features for seismic strong motion simulation: incorporation of surface topography; automatic mesh generation; mesh refinement; attenuation; and supergrid boundary conditions. It also has several ways to introduce 3D models and sources (including the Standard Rupture Format for extended sources). We are using SW4 to simulate strong ground motions for several applications. We are performing parametric studies of near-fault motions from moderate earthquakes to investigate basin-edge-generated waves, and of large earthquakes to provide motions for engineers studying building response. We show that 3D propagation near basin edges can generate significant amplifications relative to 1D analysis. SW4 is also being used to model earthquakes in the San Francisco Bay Area. This includes modeling moderate (M3.5-5) events to evaluate the United States Geological Survey's 3D model of regional structure, as well as strong motions from the 2014 South Napa earthquake and possible large scenario events. Recently SW4 was built on a Commodity Technology Systems-1 (CTS-1) machine at LLNL, one of the new systems for capacity computing at the DOE National Labs. We find SW4 scales well and runs faster on these systems compared to the previous generation of Linux clusters.

  12. The rupture process of the Manjil, Iran earthquake of 20 June 1990 and implications for intraplate strike-slip earthquakes

    USGS Publications Warehouse

    Choy, G.L.; Zednik, J.

    1997-01-01

    In terms of seismically radiated energy or moment release, the earthquake of 20 June 1990 in the Manjil Basin-Alborz Mountain region of Iran is the second largest strike-slip earthquake to have occurred in an intracontinental setting in the past decade. It caused enormous loss of life and the virtual destruction of several cities. Despite a very large meizoseismal area, the identification of the causative faults has been hampered by the lack of reliable earthquake locations and conflicting field reports of surface displacement. Using broadband data from global networks of digitally recording seismographs, we analyse broadband seismic waveforms to derive characteristics of the rupture process. Complexities in waveforms generated by the earthquake indicate that the main shock consisted of a tiny precursory subevent followed over the next 20 seconds by a series of four major subevents with depths ranging from 10 to 15 km. The focal mechanisms of the major subevents, which are predominantly strike-slip, have a common nodal plane striking about 285°-295°. Based on the coincidence of this strike with the dominant tectonic fabric of the region, we presume that the EW-striking planes are the fault planes. The first major subevent nucleated slightly south of the initial precursor. The second subevent occurred northwest of the initial precursor. The last two subevents moved progressively southeastward of the first subevent in a direction collinear with the predominant strike of the fault planes. The offsets in the relative locations and the temporal delays of the rupture subevents indicate a heterogeneous distribution of fracture strength and the involvement of multiple faults. The spatial distribution of teleseismic aftershocks, which at first appears uncorrelated with meizoseismal contours, can be decomposed into stages. The initial activity, being within and on the periphery of the rupture zone, correlates in shape and length with meizoseismal lines.
In the second stage

  13. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
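
The Gutenberg-Richter law mentioned above is routinely quantified by the catalog b-value. A minimal sketch using the classic Aki (1965) maximum-likelihood estimator follows; the synthetic catalog is illustrative, not the Taiwan catalog used in the paper.

```python
import math
import random

def b_value(mags, m_min):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965):
    b = log10(e) / (mean(M) - Mc), for magnitudes M >= completeness Mc."""
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

# Synthetic catalog: exponential magnitudes above Mc = 2.0 with b = 1
random.seed(3)
beta = 1.0 * math.log(10)  # b = beta / ln(10)
cat = [2.0 + random.expovariate(beta) for _ in range(20000)]
print(round(b_value(cat, 2.0), 2))
```

In the fusion-fission picture, departures of such equilibrium statistics from their baseline are exactly the out-of-equilibrium signatures the authors exploit.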

  14. Earthquake chemical precursors in groundwater: a review

    NASA Astrophysics Data System (ADS)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

    We review changes in groundwater chemistry as precursory signs for earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra: it addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and electrochemical processes at the rock-water interface.

  15. Source processes of strong earthquakes in the North Tien-Shan region

    NASA Astrophysics Data System (ADS)

    Kulikova, G.; Krueger, F.

    2013-12-01

    The Tien-Shan region attracts the attention of scientists worldwide due to its complexity and tectonic uniqueness. A series of very strong, destructive earthquakes occurred in Tien-Shan at the turn of the XIX and XX centuries. Such large intraplate earthquakes are rare in seismology, which increases the interest in the Tien-Shan region. The presented study focuses on the source processes of large earthquakes in Tien-Shan. The amount of seismic data is limited for those early times. In 1889, when a major earthquake occurred in Tien-Shan, seismic instruments were installed in very few locations in the world, and these analog records did not survive to the present day. Although around a hundred seismic stations were operating worldwide at the beginning of the XX century, it is not always possible to obtain high-quality analog seismograms. Digitizing seismograms is a very important step in the work with analog seismic records. While working with historical records one has to take into account all the aspects and uncertainties of manual digitizing, as well as the lack of accurate timing and instrument characteristics. In this study, we develop an easy-to-handle and fast digitization program on the basis of existing software, which speeds up the digitizing process and accounts for the recording-system uncertainties. Owing to the lack of absolute timing for the historical earthquakes (due to the absence of a universal clock at that time), we used the time differences between P and S phases to relocate the earthquakes in North Tien-Shan, and the body-wave amplitudes to estimate their magnitudes. Combining our results with geological data, five earthquakes in North Tien-Shan were precisely relocated. The digitizing of records can introduce steps into the seismograms, which makes restitution (removal of the instrument response) undesirable.
To avoid the restitution, we simulated historic seismograph recordings with given values for damping and free period of the respective instrument and
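
The S-P relocation idea used above can be illustrated with the standard single-station distance formula; the crustal velocities below are generic textbook assumptions, not the values used by the authors.

```python
def sp_distance(ts_minus_tp, vp=6.0, vs=3.5):
    """Epicentral distance (km) from the S-P arrival time difference,
    assuming straight rays and uniform crustal velocities (the defaults
    here are illustrative): d = (tS - tP) * Vp*Vs / (Vp - Vs)."""
    return ts_minus_tp * vp * vs / (vp - vs)

# a 12 s S-P time corresponds to roughly 100 km
print(round(sp_distance(12.0)))  # → 101
```

With S-P times from several stations, intersecting the resulting distance circles relocates the event even when absolute timing is unavailable.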

  16. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, the largest being an M6.6 event in the Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform provides an integrated system spanning data acquisition through catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit the particular needs of the AEIC. Real-time earthquake locations and magnitudes are determined within 2-5 minutes of event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of occurrence, and information releases are issued. This information is disseminated immediately via the AEIC website, via QDDS submissions to the ANSS website, through e-mail, cell phone and pager notifications, via fax broadcasts, and via recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0 and posted on the public website. ShakeMaps are calculated in real time, with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted at critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  17. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and a simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer aspects of the statistical behavior of earthquakes within the simulated region by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly guided by the principle "the more physics, the better," pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it increasingly difficult to understand which physical parameters are really relevant for describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: a realistic tectonic setting, i.e., a fault dataset of California; quantitative laws for earthquake generation on each single fault; and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
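The two quantitative ingredients named above (per-fault earthquake generation plus Coulomb-stress fault interaction) can be hedged into a toy sketch. The sign convention, effective friction coefficient, and the `stress`/`strength` bookkeeping below are illustrative assumptions, not the authors' model:

```python
def coulomb_failure_function(shear_stress, normal_stress, mu=0.4, cohesion=0.0):
    """Coulomb Failure Function on a receiver fault plane.
    Failure is promoted when CFF >= 0. Normal stress is taken positive
    in extension (unclamping), a common sign convention; mu is an
    assumed effective friction coefficient."""
    return shear_stress + mu * normal_stress - cohesion

def stress_transfer_step(faults, source_idx, dcff):
    """Toy fault-interaction step: after fault `source_idx` ruptures,
    add a static Coulomb stress change dcff[i] to every other fault and
    return the indices of faults pushed past their failure threshold."""
    triggered = []
    for i, f in enumerate(faults):
        if i == source_idx:
            f["stress"] = 0.0  # elastic rebound: full stress drop on the source
            continue
        f["stress"] += dcff[i]
        if f["stress"] >= f["strength"]:
            triggered.append(i)
    return triggered
```

Iterating this step over tectonic loading cycles is what lets even a minimal simulator produce clustering and synchronization between nearby faults.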

  18. Poro-elastic Rebound Along the Landers 1992 Earthquake Surface Rupture

    NASA Technical Reports Server (NTRS)

    Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.

    1998-01-01

    Maps of post-seismic surface displacement after the 1992, Landers, California earthquake, generated by interferometric processing of ERS-1 Synthetic Aperture Radar (SAR) images, reveal effects of various deformation processes near the 1992 surface rupture.

  19. Pre-earthquake Magnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Heraud, J. A.; Freund, F. T.

    2015-12-01

    A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are suspected to be generated deep in the Earth's crust, in and around the hypocentral volume, days or even weeks before earthquakes. Their extremely long wavelength allows them to pass through kilometers of rock. Interestingly, when the sources of these pulses are triangulated, the locations coincide with the epicenters of future earthquakes. We couple a drift-diffusion semiconductor model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically, and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, suggesting that the pulses could be the result of geophysical semiconductor processes.

  20. Impact of Earthquake Preparation Process On Hydrodeformation Field Evolution In The Caucasus

    NASA Astrophysics Data System (ADS)

    Melikadze, G.; Aliev, A.; Bendukidze, G.; Biagi, P. F.; Garalov, B.; Mirianashvili, V.

    The paper studies the relation between variations in the regime of underground water observed in boreholes and deformation processes in the Earth's crust associated with the formation of earthquakes of M=3 and higher. Monitoring of the hydrogeodeformation field (HGDF) has been carried out by means of the dedicated joint network of Armenia, Azerbaijan, Georgia and Russia. The wells are uniformly distributed throughout the Caucasus and cover all principal geological blocks of the region. The paper deals with results associated with several earthquakes that occurred in Georgia and one in Azerbaijan. As the network comprises boreholes of different depths, varying from 250 m down to 3,500 m, a preliminary calibration of the boreholes involved was carried out, based on evaluation of the water-level variation due to the known Earth-tide effect. This was necessary for sensitivity evaluation and normalization of the hydrodynamic signals. The data obtained were processed by means of spectral analysis to separate the background disturbance field from the valid signal. The processed data cover the period 1991-1993, which comprises four strong Caucasus earthquakes: Racha (1991, M=6.9), Java (1991, M=6.2), Barisakho (1992, M=6.5) and Talish (1993, M=5.6). Formation of a compression zone in the eastern Caucasus and an extension zone in western Georgia and the northern Caucasus was observed 7 months prior to the Racha earthquake. The boundary between these two zones followed a known submeridional fault. The area of maximal gradient coincided with the junction of deep faults and proved to be the place where the earthquake originated. After the quake occurred, the zone of maximal gradient started to migrate eastward, and residual deformations in the HGDF outlined the sources first of the Java earthquake (15.06.1991), then of the Barisakho (23.10.1992) and Talish (2.10.1993) events. 
Thus, the HGDF indicated migration of the deformation field along the slope of

  1. The Implications of Strike-Slip Earthquake Source Properties on the Transform Boundary Development Process

    NASA Astrophysics Data System (ADS)

    Neely, J. S.; Huang, Y.; Furlong, K.

    2017-12-01

    Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating a growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in
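The b-value analysis mentioned above is commonly done with Aki's maximum-likelihood estimator; a minimal sketch follows. The completeness magnitude `mc` and bin width `dm` are illustrative inputs, and this is not necessarily the exact estimator the authors used:

```python
import math

def b_value_mle(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965, with Utsu's binning
    correction): b = log10(e) / (mean(M) - (Mc - dm/2)).
    Only events at or above the completeness magnitude Mc are used."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

Spatial b-value variations like those reported along the SCT come from applying such an estimator to events binned by distance along strike.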

  2. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated in the probabilistic forecasting of seismicity on local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi)fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
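NESM distributions are built on the Tsallis q-exponential function; a small sketch of that building block follows. The survival-function form and scale parameter `tau` are one common choice in the NESM seismicity literature, shown here as an assumption rather than the review's specific formulation:

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential, the building block of NESM distributions.
    Reduces to exp(x) as q -> 1; for q > 1 it produces power-law tails."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_exp_survival(x, q, tau):
    """Survival function P(>x) ~ exp_q(-x/tau), a form used in NESM fits
    of inter-event times and seismic moments (tau is a scale parameter)."""
    return q_exponential(-x / tau, q)
```

Fitting `q` to observed inter-event-time or magnitude distributions is how the non-extensivity of a fault system is quantified in this framework.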

  3. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  4. Application of GPS Technologies to study Pre-earthquake processes. A review and future prospects

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Liu, J. Y. G.; Ouzounov, D.; Hernandez-Pajares, M.; Hattori, K.; Krankowski, A.; Zakharenkova, I.; Cherniak, I.

    2016-12-01

    We present the progress achieved by GPS TEC technologies in the study of pre-seismic ionospheric anomalies appearing a few days before strong earthquakes. Starting from the first case studies, such as the 17 August 1999 M7.6 Izmit earthquake in Turkey, the technology has been developed into global near real-time monitoring of seismo-ionospheric effects, which is now used in the multiparameter nowcast and forecast of strong earthquakes. Development of techniques for identifying seismo-ionospheric anomalies was carried out in parallel with the development of the physical mechanism explaining the generation of these anomalies. It was established that the seismo-ionospheric anomalies have a self-similarity property, depend on local time, and persist for at least 4 hours; the deviation from the undisturbed level can be either positive or negative, depending on the lead time (in days) to the impending earthquake and on the longitude of the anomaly relative to the epicenter. Low-latitude and near-equatorial earthquakes demonstrate a magnetically conjugated effect, while middle- and high-latitude earthquakes demonstrate a single anomaly over the earthquake preparation zone. From the anomaly morphology, the physical mechanism was derived within the framework of the more complex Lithosphere-Atmosphere-Ionosphere-Magnetosphere Coupling concept. In addition to the multifactor analysis of GPS TEC time series, the GIM MAP technology was also applied, clearly showing the locality of the seismo-ionospheric anomalies and the correspondence of their spatial size to the Dobrovolsky radius of the earthquake preparation zone. Application of ionospheric tomography techniques permitted the study not only of total electron content variations but also of the modification of the vertical distribution of electron concentration in the ionosphere before earthquakes. The statistical check of the ionospheric precursors passed the

  5. The Cascadia Subduction Zone and related subduction systems: seismic structure, intraslab earthquakes and processes, and earthquake hazards

    USGS Publications Warehouse

    Kirby, Stephen H.; Wang, Kelin; Dunlop, Susan

    2002-01-01

    The following report is the principal product of an international workshop titled “Intraslab Earthquakes in the Cascadia Subduction System: Science and Hazards,” sponsored by the U.S. Geological Survey, the Geological Survey of Canada and the University of Victoria. This meeting was held at the University of Victoria’s Dunsmuir Lodge, Vancouver Island, British Columbia, Canada on September 18–21, 2000 and brought together 46 participants from the U.S., Canada, Latin America and Japan. This gathering was organized to bring together active research investigators in the science of subduction and intraslab earthquake hazards. Special emphasis was given to “warm-slab” subduction systems, i.e., those involving young oceanic lithosphere subducting at moderate to slow rates, such as the Cascadia system in the U.S. and Canada, and the Nankai system in Japan. All the speakers and poster presenters provided abstracts of their presentations, which were made available in an abstract volume at the workshop. Most of the authors subsequently provided full articles or extended abstracts for this volume on the topics they discussed at the workshop. Where updated versions were not provided, the original workshop abstracts have been included. In organizing this workshop and assembling this volume, our aim is to provide a global perspective on the science of warm-slab subduction, to thereby advance our understanding of internal slab processes, and to use this understanding to improve appraisals of the hazards associated with large intraslab earthquakes in the Cascadia system. These events have been the most frequent and damaging earthquakes in western Washington State over the last century. As if to underscore this fact, just six months after the workshop, the magnitude 6.8 Nisqually earthquake occurred on February 28th, 2001 at a depth of about 55 km in the Juan de Fuca slab beneath the southern Puget Sound region of western Washington. The Governor

  6. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains low until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarded parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth
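The back projection method referred to above can be illustrated with a deliberately minimal stack-and-search sketch. Real implementations use 3-D travel-time tables, alignment corrections, and sliding time windows; everything here (grid, timings, impulse waveforms) is a toy assumption:

```python
def back_project(waveforms, dt, travel_times, grid_points):
    """Minimal back projection: for each candidate source point, shift
    each station's record by the predicted travel time and stack; the
    point with the largest stacked energy images the energy radiator.
    travel_times[k][i] is the predicted time (s) from grid point k to
    station i; waveforms[i] are evenly sampled at interval dt."""
    best_k, best_power = None, -1.0
    n = min(len(w) for w in waveforms)
    for k, _ in enumerate(grid_points):
        power = 0.0
        for t in range(n):
            stack = 0.0
            for i, w in enumerate(waveforms):
                j = t + int(round(travel_times[k][i] / dt))
                if 0 <= j < len(w):
                    stack += w[j]
            power += stack * stack
        if power > best_power:
            best_k, best_power = k, power
    return grid_points[best_k], best_power
```

The appeal noted in the abstract is visible even here: no source parameterization is assumed, only travel times and coherent stacking.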

  7. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection.

    PubMed

    Kasagi, M; Fujita, K; Tsuji, M; Takewaki, I

    2016-02-01

    A base-isolated building may sometimes exhibit an undesirably large response to a long-duration, long-period earthquake ground motion, while a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system combining base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in preliminary research that the proposed hybrid system is effective for both near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness over a broad range of earthquake ground motions. An automatic generation algorithm for this kind of smart base-isolation and building-connection hybrid structure is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting-damper system.

  8. Intensity earthquake scenario (scenario event - a damaging earthquake with higher probability of occurrence) for the city of Sofia

    NASA Astrophysics Data System (ADS)

    Aleksandrova, Irena; Simeonova, Stela; Solakov, Dimcho; Popova, Maria

    2014-05-01

    Among the many kinds of natural and man-made disasters, earthquakes dominate with regard to their social and economic impact on the urban environment. Global seismic risk is increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of mega-cities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructure, undertaken with insufficient knowledge of the peculiarities of the regional seismicity and of the seismic hazard. The assessment of seismic hazard and the generation of earthquake scenarios are the first link in the prevention chain and the first step in the evaluation of seismic risk. Earthquake scenarios are intended as a basic input for developing detailed earthquake damage scenarios for cities and can be used in earthquake-safe town and infrastructure planning. The city of Sofia is the capital of Bulgaria. It is situated in the centre of the Sofia area, the most populated (more than 1.2 million inhabitants), industrial and cultural region of Bulgaria, which faces considerable earthquake risk. The available historical documents prove the occurrence of destructive earthquakes during the 15th-18th centuries in the Sofia zone. In the 19th century the city of Sofia experienced two strong earthquakes: the 1818 earthquake with epicentral intensity I0=8-9 MSK and the 1858 earthquake with I0=9-10 MSK. During the 20th century the strongest event in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK). Almost a century later (95 years), an earthquake of moment magnitude 5.6 (I0=7-8 MSK) hit the city of Sofia on May 22nd, 2012. In the present study, the deterministic scenario event considered is a damaging earthquake with a higher probability of occurrence that could affect the city with intensity less than or equal to VIII

  9. Preliminary Analysis of Remote Triggered Seismicity in Northern Baja California Generated by the 2011, Tohoku-Oki, Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Wong-Ortega, V.; Castro, R. R.; Gonzalez-Huizar, H.; Velasco, A. A.

    2013-05-01

    We analyze possible variations of seismicity in northern Baja California due to the passage of seismic waves from the 2011 M9.0 Tohoku-Oki, Japan, earthquake. The northwestern area of Baja California is characterized by a mountain range composed of crystalline rocks. These Peninsular Ranges of Baja California exhibit high microseismic activity and moderate-size earthquakes. In the eastern region of Baja California, shearing between the Pacific and North American plates takes place, and the Imperial and Cerro Prieto faults generate most of the seismicity. The seismicity in these regions is monitored by the seismic network RESNOM, operated by the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE). This network consists of 13 three-component seismic stations. We use the seismic catalog of RESNOM to search for changes in local seismicity rates after the passage of surface waves generated by the Tohoku-Oki earthquake. When we compare one month of seismicity before and after the M9.0 earthquake, the preliminary analysis shows an absence of triggered seismicity in the northern Peninsular Ranges and an increase of seismicity south of the Mexicali Valley, where the Imperial fault jumps southwest and the Cerro Prieto fault continues.
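The before/after rate comparison described in this record is often formalized with the beta statistic of Matthews and Reasenberg; the sketch below is a generic illustration, not necessarily the authors' test, and the significance threshold is an assumption:

```python
import math

def beta_statistic(n_after, n_before, t_after, t_before):
    """Beta statistic for a seismicity-rate change: compares the
    observed post-event count with the count expected if the combined
    rate were uniform in time. |beta| greater than about 2 is commonly
    taken as a significant change (illustrative threshold)."""
    n_total = n_before + n_after
    p = t_after / (t_before + t_after)   # fraction of time after the event
    expected = n_total * p
    variance = n_total * p * (1.0 - p)
    return (n_after - expected) / math.sqrt(variance)
```

A near-zero beta in one region and a large positive beta in another would mirror the contrast reported between the Peninsular Ranges and the area south of the Mexicali Valley.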

  10. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
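FAST's key idea of hashing compact fingerprints instead of cross-correlating every pair of windows can be caricatured in a few lines. The fingerprint used below (signs of successive differences, exact-match bucketing) is a drastic simplification of FAST's wavelet-based binary features and locality-sensitive hashing:

```python
def fingerprint(window):
    """Toy waveform fingerprint: signs of successive differences,
    a crude stand-in for FAST's wavelet-based binary features."""
    return tuple(1 if b > a else 0 for a, b in zip(window, window[1:]))

def similar_pairs(trace, win, step):
    """Group identical fingerprints of sliding windows in a hash table,
    so candidate matches are found without comparing every window
    against every other (the key idea behind FAST's speed)."""
    buckets = {}
    for start in range(0, len(trace) - win + 1, step):
        fp = fingerprint(trace[start:start + win])
        buckets.setdefault(fp, []).append(start)
    return [starts for starts in buckets.values() if len(starts) > 1]
```

Because lookup in the hash table is constant time per window, the cost grows roughly linearly with data length rather than quadratically as in autocorrelation.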

  11. Slow Unlocking Processes Preceding the 2015 Mw 8.4 Illapel, Chile, Earthquake

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Meng, Lingsen

    2018-05-01

    On 16 September 2015, the Mw 8.4 Illapel earthquake occurred in central Chile with no intense foreshock sequences documented in the regional earthquake catalog. Here we employ the matched-filter technique based on an enhanced template data set of previously catalogued events. We perform a continuous search over a 4-year period before the Illapel mainshock to recover uncatalogued small events and repeating earthquakes. Repeating earthquakes are found both to the north and south of the mainshock rupture zone. To the south of the rupture zone, the seismicity and repeater-inferred aseismic slip progressively accelerate around the Illapel epicenter starting 140 days before the mainshock. This may indicate an unlocking process involving the interplay of seismic and aseismic slip. The acceleration culminates in an M 5.3 event of low-angle thrust mechanism, which occurred 36 days before the Mw 8.4 mainshock. It is then followed by a relative quiescence in seismicity until the mainshock occurred. This quiescence might correspond to an intermediate period of stable slip before rupture initiation. In addition, to the north of the mainshock rupture area, the last aseismic-slip episode occurs within 175 to 95 days before the mainshock and accumulates the largest amount of slip in the observation period. The simultaneous occurrence of aseismic-slip transients over a large area is consistent with large-scale slow unlocking processes preceding the Illapel mainshock.
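The matched-filter technique used in this study amounts to sliding templates over continuous data and thresholding a normalized cross-correlation. A minimal single-channel sketch follows; real matched-filter detectors stack correlations across many stations and components, and the threshold here is arbitrary:

```python
import math

def normalized_cc(template, window):
    """Zero-mean normalized cross-correlation between a template and an
    equal-length data window (1.0 = identical shape up to scaling)."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template)
                    * sum((w - mw) ** 2 for w in window))
    return num / den if den > 0 else 0.0

def matched_filter(trace, template, threshold=0.8):
    """Slide the template over the continuous trace and report sample
    offsets where the correlation exceeds the detection threshold."""
    n = len(template)
    return [i for i in range(len(trace) - n + 1)
            if normalized_cc(template, trace[i:i + n]) >= threshold]
```

Because the correlation is amplitude-normalized, the same template detects repeating events much smaller than the catalogued original, which is what allows the uncatalogued foreshock activity to be recovered.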

  12. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated in the probabilistic forecasting of seismicity on local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi)fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  13. Rupture process of large earthquakes in the northern Mexico subduction zone

    NASA Astrophysics Data System (ADS)

    Ruff, Larry J.; Miller, Angus D.

    1994-03-01

    The Cocos plate subducts beneath North America at the Mexico trench. The northernmost segment of this trench, between the Orozco and Rivera fracture zones, ruptured in a sequence of five large earthquakes from 1973 to 1985: the Jan. 30, 1973 Colima event (Ms 7.5) at the northern end of the segment near the Rivera fracture zone; the Mar. 14, 1979 Petatlan event (Ms 7.6) at the southern end of the segment on the Orozco fracture zone; the Oct. 25, 1981 Playa Azul event (Ms 7.3) in the middle of the Michoacan “gap”; the Sept. 19, 1985 Michoacan mainshock (Ms 8.1); and the Sept. 21, 1985 Michoacan aftershock (Ms 7.6), which reruptured part of the Petatlan zone. Body wave inversion for the rupture process of these earthquakes finds the best earthquake depth, focal mechanism, overall source time function, and seismic moment for each earthquake. In addition, we have determined spatial concentrations of seismic moment release for the Colima earthquake and the Michoacan mainshock and aftershock. These spatial concentrations of slip are interpreted as asperities, and the resultant asperity distribution for Mexico is compared to that of other subduction zones. The body wave inversion technique also determines the Moment Tensor Rate Functions, but there is no evidence for statistically significant changes in the moment tensor during rupture for any of the five earthquakes. An appendix describes the Moment Tensor Rate Functions methodology in detail. The systematic bias between global and regional determinations of epicentral locations in Mexico must be resolved to enable plotting of asperities together with aftershocks and geographic features. We have spatially “shifted” all of our results to regional determinations of epicenters. The best point-source depths for the five earthquakes are all above 30 km, consistent with the idea that the down-dip edge of the seismogenic plate interface in Mexico is shallow compared to other subduction zones. Consideration of uncertainties in

  14. Evidences of landslide earthquake triggering due to self-excitation process

    NASA Astrophysics Data System (ADS)

    Bozzano, F.; Lenti, L.; Martino, Salvatore; Paciello, A.; Scarascia Mugnozza, G.

    2011-06-01

    The basin-like setting of stiff bedrock combined with pre-existing landslide masses can contribute to seismic amplification over a wide frequency range (0-10 Hz) and induce a self-excitation process responsible for earthquake-triggered landsliding. Here, the self-excitation process is proposed to explain the far-field seismic triggering of the Cerda landslide (Sicily, Italy), which was reactivated by the 6 September 2002 Palermo earthquake (Ms = 5.4), about 50 km from the epicentre. The landslide caused damage to farmhouses, roads and aqueducts close to the village of Cerda and involved about 40 × 10^6 m^3 of clay shales; the first ground cracks due to the landslide movement formed about 30 min after the main shock. A stress-strain dynamic numerical modelling, performed with the FDM code FLAC 5.0, supports the notion that the combination of the local geological setting and the earthquake frequency content played a fundamental role in the landslide reactivation. Since accelerometric records of the triggering event are not available, dynamic equivalent inputs were used for the numerical modelling. These inputs can be regarded as representative of the local ground shaking, having a PGA value up to 0.2 m/s^2, which is the maximum expected in 475 years according to the Italian seismic hazard maps. A 2D numerical modelling of seismic wave propagation in the Cerda landslide area was also performed; it pointed out amplification effects due to both the structural setting of the stiff bedrock (at about 1 Hz) and the pre-existing landslide mass (in the range 3-6 Hz). The frequency peaks of the resulting amplification functions A(f) fit well the H/V spectral ratios from ambient noise and the H/H spectral ratios to a reference station from earthquake records, obtained by in situ velocimetric measurements. Moreover, the Fourier spectra of earthquake accelerometric records, whose source and magnitude are consistent with the triggering event, show a main peak at about 1 Hz.
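    The "maximum expected in 475 years" figure reflects the standard hazard-map convention of a 10% probability of exceedance in 50 years; under a Poisson occurrence model the conversion is a one-liner. A minimal sketch (function names are illustrative, not from the paper):

    ```python
    import math

    def exceedance_prob(return_period_yr, window_yr):
        """Probability of at least one exceedance in a time window,
        assuming Poisson-distributed occurrences."""
        return 1.0 - math.exp(-window_yr / return_period_yr)

    def return_period(prob, window_yr):
        """Return period implied by an exceedance probability over a window."""
        return -window_yr / math.log(1.0 - prob)

    # The hazard-map convention: 10% probability of exceedance in 50 years
    print(round(return_period(0.10, 50)))        # 475
    print(round(exceedance_prob(475, 50), 3))    # ~0.1
    ```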

  15. Keeping focus on earthquakes at school for seismic risk mitigation of the next generations

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Barnaba, Carla; Peruzza, Laura

    2013-04-01

    The knowledge of the seismic history of one's own territory, the understanding of physical phenomena in response to an earthquake, the changes in the cultural heritage following a strong earthquake, and the actions to be taken during and after an earthquake are pieces of information that help keep the focus on seismic hazard and support strategies for seismic risk mitigation. The training of new generations, today more than ever prone to rapidly forgetting past events, therefore becomes a key element in strengthening the perception that earthquakes have happened and can happen at any time, and that mitigation actions are the only means to ensure safety and to reduce damage and human losses. For several years our institute (OGS) has been involved in earthquake education and awareness activities. We aim to implement education programs with the goal of fostering a critical approach to seismic hazard reduction, differentiating the types of activities according to the age of the students. However, since this kind of activity is unfunded, we can at present reach only a very limited number of schools per year. To be effective, the inclusion of seismic risk issues in school curricula requires dedicated time and appropriate approaches when planning activities. For this reason, we also involve the teachers as proponents of activities, and we encourage them to keep memories and discussion of earthquakes alive in their classes. During the past years we worked mainly in the schools of the Friuli Venezia Giulia area (NE Italy), an earthquake-prone area struck in 1976 by a destructive seismic event (Ms = 6.5). We organized short training courses for teachers, lectured to classes, and led laboratory activities with students. Indeed, since it is well known that students enjoy classes more when visual and active learning are combined, we propose a program composed of seminars, demonstrations and hands-on activities in the classrooms; for high school students

  16. Sedimentary Signatures of Submarine Earthquakes: Deciphering the Extent of Sediment Remobilization from the 2011 Tohoku Earthquake and Tsunami and 2010 Haiti Earthquake

    NASA Astrophysics Data System (ADS)

    McHugh, C. M.; Seeber, L.; Moernaut, J.; Strasser, M.; Kanamatsu, T.; Ikehara, K.; Bopp, R.; Mustaque, S.; Usami, K.; Schwestermann, T.; Kioka, A.; Moore, L. M.

    2017-12-01

    The 2004 Sumatra-Andaman Mw9.3 and the 2011 Tohoku (Japan) Mw9.0 earthquakes and tsunamis were huge geological events with major societal consequences. Both occurred along subduction boundaries and ruptured portions of those boundaries that had been deemed incapable of such events. Submarine strike-slip earthquakes, such as the 2010 Mw7.0 in Haiti, are smaller but may be closer to population centers and can be similarly catastrophic. Both classes of earthquakes remobilize sediment and leave distinct signatures in the geologic record through a wide range of processes that depend on both the environment and earthquake characteristics. Understanding these signatures has the potential to greatly expand the record of past earthquakes, which is critical for geohazard analysis. Recent events offer precious ground truth about the earthquakes, and short-lived radioisotopes offer invaluable tools to identify the sediments they remobilized. For the 2011 Mw9 Japan earthquake, they document the spatial extent of remobilized sediment from water depths of 626 m on the forearc slope to trench depths of 8000 m. Subbottom profiles, multibeam bathymetry and 40 piston cores collected by the R/V Natsushima and R/V Sonne expeditions to the Japan Trench document multiple turbidites and high-density flows. Core tops enriched in excess 210Pb, 137Cs and 134Cs reveal sediment deposited by the 2011 Tohoku earthquake and tsunami. The thickest deposits (2 m) were documented on a mid-slope terrace and in the trench (4000-8000 m). Sediment was deposited on some terraces (600-3000 m) but shed from the steep forearc slope (3000-4000 m). The 2010 Haiti mainshock ruptured along the southern flank of Canal du Sud and triggered multiple nearshore sediment failures, generated turbidity currents and stirred fine sediment into suspension throughout this basin. A tsunami was modeled to stem from both sediment failures and tectonics. Remobilized sediment was tracked with short-lived radioisotopes from the nearshore, slope, in fault basins including the

  17. Earthquake Testing

    NASA Technical Reports Server (NTRS)

    1979-01-01

    During NASA's Apollo program, it was necessary to subject the mammoth Saturn V launch vehicle to extremely forceful vibrations to assure the moon booster's structural integrity in flight. Marshall Space Flight Center assigned vibration testing to a contractor, the Scientific Services and Systems Group of Wyle Laboratories, Norco, California. Wyle-3S, as the group is known, built a large facility at Huntsville, Alabama, and equipped it with an enormously forceful shock and vibration system to simulate the liftoff stresses the Saturn V would encounter. Saturn V is no longer in service, but Wyle-3S has found spinoff utility for its vibration facility. It is now being used to simulate earthquake effects on various kinds of equipment, principally equipment intended for use in nuclear power generation. Government regulations require that such equipment demonstrate its ability to survive earthquake conditions. In the upper left photo, Wyle-3S is preparing to conduct an earthquake test on a 25-ton diesel generator built by Atlas Polar Company, Ltd., Toronto, Canada, for emergency use in a Canadian nuclear power plant. Being readied for test in the lower left photo is a large circuit breaker to be used by Duke Power Company, Charlotte, North Carolina. Electro-hydraulic and electro-dynamic shakers in and around the pit simulate earthquake forces.

  18. An Integrated Monitoring System of Pre-earthquake Processes in Peloponnese, Greece

    NASA Astrophysics Data System (ADS)

    Karastathis, V. K.; Tsinganos, K.; Kafatos, M.; Eleftheriou, G.; Ouzounov, D.; Mouzakiotis, E.; Papadopoulos, G. A.; Voulgaris, N.; Bocchini, G. M.; Liakopoulos, S.; Aspiotis, T.; Gika, F.; Tselentis, A.; Moshou, A.; Psiloglou, B.

    2017-12-01

    One of the controversial issues in contemporary seismology is whether radon accumulation monitoring can provide reliable earthquake forecasting. Although there are many examples in the literature showing radon increases before earthquakes, skepticism arises from the instability of the measurements, false alarms, interpretation difficulties caused by weather influences (e.g. rainfall), and the lack of an irrefutable theoretical background for the phenomenon. We have developed and extensively tested a multi-parameter network aimed at studying pre-earthquake processes, operating as part of an integrated monitoring system in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The prototype consists of four components: (1) a real-time radon accumulation monitoring system consisting of three gamma radiation detectors [NaI(Tl) scintillators]; (2) a nine-station seismic array to monitor the microseismicity in the offshore area of the Hellenic arc, with data processing based on F-K and beam-forming techniques; (3) real-time weather monitoring systems for air temperature, relative humidity, precipitation and pressure; and (4) thermal radiation emission from AVHRR/NOAA-18 polar-orbit satellite observations. The project revolves around the idea of jointly studying radon emission, which has proven in many cases to be a reliable indicator of the possible time of an event, and the accurate location of foreshock activity detected by the seismic array, which can be a more reliable indicator of the possible position of an event. In parallel, a satellite thermal anomaly detection technique has been used to monitor larger-magnitude events (a possible indicator for strong events, M ≥ 5.0). The first year of operations revealed a number of pre-seismic radon variation anomalies before several local earthquakes (M > 3.6). Radon increased systematically before the larger events. Details about the overall performance
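    The abstract does not describe how radon anomalies are flagged. As an illustration only, a minimal baseline-plus-threshold detector of the kind commonly applied to such time series might look like the sketch below; the function name, window length, and 2-sigma threshold are all assumptions, not the project's algorithm:

    ```python
    import statistics

    def flag_radon_anomalies(counts, window=24, nsigma=2.0):
        """Flag samples that exceed a rolling mean + nsigma * std baseline.
        `counts` is a gamma count-rate time series (hypothetical units)."""
        flags = []
        for i in range(window, len(counts)):
            baseline = counts[i - window:i]
            mu = statistics.fmean(baseline)
            sigma = statistics.pstdev(baseline)
            flags.append(counts[i] > mu + nsigma * sigma)
        return flags

    # synthetic series: stable background with a single spike
    series = [100.0] * 30 + [130.0] + [100.0] * 5
    flags = flag_radon_anomalies(series)  # only the spike is flagged
    ```

    A real deployment would also have to correct for the weather effects (rainfall, pressure) mentioned above before thresholding.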

  19. Strike-slip earthquakes can also be detected in the ionosphere

    NASA Astrophysics Data System (ADS)

    Astafyeva, Elvira; Rolland, Lucie M.; Sladen, Anthony

    2014-11-01

    It is generally assumed that co-seismic ionospheric disturbances are generated by large vertical static displacements of the ground during an earthquake. Consequently, it is expected that co-seismic ionospheric disturbances are only observable after earthquakes with a significant dip-slip component. Therefore, earthquakes dominated by strike-slip motion, i.e. with very little vertical co-seismic component, are not expected to generate ionospheric perturbations. In this work, we use total electron content (TEC) measurements from ground-based GNSS receivers to study the ionospheric response to the six largest recent strike-slip earthquakes: the Mw7.8 Kunlun earthquake of 14 November 2001, the Mw8.1 Macquarie earthquake of 23 December 2004, the Sumatra earthquake doublet, Mw8.6 and Mw8.2, of 11 April 2012, the Mw7.7 Balochistan earthquake of 24 September 2013 and the Mw7.7 Scotia Sea earthquake of 17 November 2013. We show that large strike-slip earthquakes generate large ionospheric perturbations of amplitude comparable with those induced by dip-slip earthquakes of equivalent magnitude. We consider that in the absence of significant vertical static co-seismic displacements of the ground, other seismological parameters (primarily the magnitude of co-seismic horizontal displacements, seismic fault dimensions, and seismic slip) may contribute to the generation of large-amplitude ionospheric perturbations.

  20. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
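    The report's method constructs the stress field from a derived power spectral density; the exact spectrum is not reproduced in the abstract, but the general spectral-synthesis idea (random phases, prescribed amplitude spectrum, inverse FFT) can be sketched in 1-D with an assumed power-law spectrum standing in for the report's formula:

    ```python
    import numpy as np

    def random_stress_1d(n=256, exponent=-1.0, seed=0):
        """Generate a 1-D random stress profile by spectral synthesis:
        give each wavenumber a random phase and an amplitude following an
        assumed power-law spectrum (a placeholder for the report's derived
        PSD), then inverse-FFT back to real space."""
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** exponent          # skip k = 0 to avoid a singularity
        phases = rng.uniform(0, 2 * np.pi, len(k))
        spectrum = amp * np.exp(1j * phases)
        stress = np.fft.irfft(spectrum, n)
        return stress / stress.std()          # normalize to unit variance

    profile = random_stress_1d()              # zero-mean, unit-variance field
    ```

    The report's actual algorithm additionally truncates the spectrum to respect the finite fault strength and works on a domain much larger than the fault, from which a suitable patch is selected.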

  1. Frictional heating processes during laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Aubry, J.; Passelegue, F. X.; Deldicque, D.; Lahfid, A.; Girault, F.; Pinquier, Y.; Escartin, J.; Schubnel, A.

    2017-12-01

    Frictional heating during seismic slip plays a crucial role in the dynamics of earthquakes because it controls fault weakening. This study proposes (i) to image frictional heating by combining an in-situ carbon thermometer with Raman microspectrometric mapping, (ii) to combine these observations with fault surface roughness and heat production, and (iii) to estimate the mechanical energy dissipated during laboratory earthquakes. Laboratory earthquakes were performed in a triaxial oil loading press, at 45, 90 and 180 MPa of confining pressure, using saw-cut samples of Westerly granite. The initial topography of the fault surface was ±30 microns. We use a carbon layer as a local temperature tracer on the fault plane and a type K thermocouple to measure temperature approximately 6 mm away from the fault surface. The thermocouple measures the bulk temperature of the fault plane, while the in-situ carbon thermometer images the heterogeneity of heat production at the micro-scale. Raman microspectrometry on the amorphous carbon patch allowed the temperature heterogeneities on the fault surface after sliding to be mapped and overlaid, to within a few micrometers, on the final fault roughness. The maximum temperature achieved during laboratory earthquakes remains high for all experiments and generally increases with the confining pressure. In addition, the melted area of the fault surface during seismic slip increases drastically with confining pressure. While melting is systematically observed, the strength drop increases with confining pressure. These results suggest that the dynamic friction coefficient is a function of the area of the fault melted during stick-slip. Using the thermocouple, we inverted for the heat dissipated during each event. We show that for rough faults under low confining pressure, less than 20% of the total mechanical work is dissipated into heat. The ratio of frictional heating to total mechanical work decreases with cumulated slip (i.e. number of events), and decreases with
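    The partitioning quoted above (less than 20% of total mechanical work dissipated as heat) rests on comparing the inverted heat against the mechanical work W = ∫ τ v dt over the slip event. A sketch of that comparison, with illustrative numbers rather than the experimental values:

    ```python
    import numpy as np

    def heat_partition(tau, v, dt, q_measured):
        """Ratio of measured dissipated heat to total mechanical work
        W = integral of tau * v dt over the event (sketch only; in the
        experiments Q is inverted from the thermocouple record)."""
        w = float(np.sum(np.asarray(tau) * np.asarray(v)) * dt)
        return q_measured / w

    # illustrative: constant 10 MPa shear stress, 1 m/s slip rate for 1 ms
    # gives W = 1e4 J/m^2; a measured Q of 2e3 J/m^2 is a 20% heat fraction
    ratio = heat_partition([10e6] * 100, [1.0] * 100, 1e-5, 2.0e3)  # ≈ 0.2
    ```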

  2. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 in each catalog and
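    The Weibull exponent β can be estimated from interevent counts in several ways; the paper does not specify its fitting method, so the sketch below uses a simple rank-regression (an assumption). On synthetic exponential data it recovers β ≈ 1, matching the paper's synthetic-catalog result:

    ```python
    import numpy as np

    def weibull_shape(samples):
        """Estimate the Weibull shape parameter beta by rank regression:
        for S(x) = exp(-(x/lam)^beta), ln(-ln(1 - F)) is linear in ln(x)
        with slope beta. Samples must be strictly positive."""
        x = np.sort(np.asarray(samples, dtype=float))
        n = len(x)
        # median-rank plotting positions avoid F = 0 or 1 at the extremes
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
        slope, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
        return slope

    # sanity check on synthetic data: exponential samples correspond to beta = 1
    rng = np.random.default_rng(42)
    beta_hat = weibull_shape(rng.exponential(scale=100.0, size=5000))  # ≈ 1
    ```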

  3. Real-time GPS integration for prototype earthquake early warning and near-field imaging of the earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.

    2011-12-01

    Over the past several years, USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault, in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and the friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data. Data from these same stations will also serve the very

  4. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    NASA Astrophysics Data System (ADS)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to do their daily work with general-purpose tools and/or to code their own to address specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost in tedious tasks: searching for the data and manually reformatting it in order to jump from one tool to another, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of the past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.

  5. Effects of Fault Segmentation, Mechanical Interaction, and Structural Complexity on Earthquake-Generated Deformation

    ERIC Educational Resources Information Center

    Haddad, David Elias

    2014-01-01

    Earth's topographic surface forms an interface across which the geodynamic and geomorphic engines interact. This interaction is best observed along crustal margins where topography is created by active faulting and sculpted by geomorphic processes. Crustal deformation manifests as earthquakes at centennial to millennial timescales. Given that…

  6. Possible Mechanisms for Generation of Anomalously High PGA During the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Pavlenko, O. V.

    2017-08-01

    Mechanisms are suggested that could explain the anomalously high PGAs (peak ground accelerations) exceeding 1 g recorded during the 2011 Tohoku earthquake (Mw = 9.0). In my previous research, I studied soil behavior during the Tohoku earthquake based on KiK-net vertical array records and revealed its 'atypical' pattern: instead of being reduced in the near-source zones, as usually observed during strong earthquakes, shear moduli in soil layers increased, indicating soil hardening, and reached their maxima at the moments of the highest intensity of strong motion, then decreased. We could explain this by assuming that the soils experienced some additional compression. The observed changes in the shapes of acceleration time histories with distance from the source, such as a decrease in the duration and an increase in the intensity of strong motion, indicate phenomena similar to the overlapping of seismic waves and shock-wave generation, which led to the compression of soils. The phenomena reach their maximum in the vicinity of stations FKSH10, TCGH16, and IBRH11, where the highest PGAs were recorded; at larger epicentral distances, PGAs fall sharply. Thus, the occurrence of anomalously high PGAs on the surface can result from the combination of the overlapping of seismic waves at the bottoms of soil layers and their increased amplification by the pre-compressed soils.

  7. Earthquake Rupture Process Inferred from Joint Inversion of 1-Hz GPS and Strong Motion Data: The 2008 Iwate-Miyagi Nairiku, Japan, Earthquake

    NASA Astrophysics Data System (ADS)

    Yokota, Y.; Koketsu, K.; Hikima, K.; Miyazaki, S.

    2009-12-01

    1-Hz GPS data can be used as a ground displacement seismogram. The capability of high-rate GPS to record seismic wave fields for large-magnitude (M8-class) earthquakes has been demonstrated [Larson et al., 2003]. Rupture models have been inferred solely or supplementarily from 1-Hz GPS data [Miyazaki et al., 2004; Ji et al., 2004; Kobayashi et al., 2006]. However, none of the previous studies succeeded in inferring the source process of a medium-sized (M6-class) earthquake solely from 1-Hz GPS data. We first compared 1-Hz GPS data with integrated strong-motion waveforms for the 2008 Iwate-Miyagi Nairiku, Japan, earthquake. We performed a waveform inversion for the rupture process using 1-Hz GPS data only [Yokota et al., 2009]. We here discuss the rupture processes inferred from the inversion of 1-Hz GPS data of GEONET only, the inversion of strong-motion data of K-NET and KiK-net only, and the joint inversion of 1-Hz GPS and strong-motion data. The data were inverted to infer the rupture process of the earthquake using the inversion codes by Yoshida et al. [1996] with the revisions by Hikima and Koketsu [2005]. In the 1-Hz GPS inversion result, the total seismic moment is 2.7 × 10^19 Nm (Mw 6.9) and the maximum slip is 5.1 m. These results are approximately equal to the 2.4 × 10^19 Nm and 4.5 m from the inversion of strong-motion data. The difference in the slip distribution on the northern fault segment may come from long-period motions possibly recorded only in the 1-Hz GPS data. In the joint inversion result, the total seismic moment is 2.5 × 10^19 Nm and the maximum slip is 5.4 m. These values also agree well with the result of the 1-Hz GPS inversion. In all the series of snapshots that show the dynamic features of the rupture process, the rupture propagated bilaterally from the hypocenter to the south and north. The northern rupture speed is faster than the southern one. These agreements demonstrate the ability of 1-Hz GPS data to infer not only static, but also dynamic
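    The quoted moments and magnitudes are linked by the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m:

    ```python
    import math

    def moment_magnitude(m0_nm):
        """Hanks-Kanamori moment magnitude from seismic moment in N·m."""
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    # the GPS-only and strong-motion-only moments both round to Mw 6.9
    print(round(moment_magnitude(2.7e19), 1))  # 6.9
    print(round(moment_magnitude(2.4e19), 1))  # 6.9
    ```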

  8. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M = 7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and with about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the building losses incurred in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, the purchase of larger re-insurance covers and the development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model the losses would not be indemnified, but would instead be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one would be the elimination of the claim processing
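    The effect of raising the deductible on the pool's payout can be seen with simple arithmetic. A sketch with illustrative numbers only (not TCIP parameters):

    ```python
    def indemnity_payout(loss, insured_value, deductible_rate):
        """Payout under a simple deductible: the insurer covers losses above
        deductible_rate * insured_value (an illustrative model only)."""
        deductible = deductible_rate * insured_value
        return max(0.0, loss - deductible)

    # illustrative: a 100-unit insured building suffering a 40-unit loss
    payout_2pct = indemnity_payout(40.0, 100.0, 0.02)   # ≈ 38 units paid out
    payout_15pct = indemnity_payout(40.0, 100.0, 0.15)  # ≈ 25 units paid out
    ```

    Raising the deductible shrinks each claim the pool must cover, which is why the paper frames it as one lever for keeping losses within the TCIP's capacity.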

  9. Ionospheric Method of Detecting Tsunami-Generating Earthquakes.

    ERIC Educational Resources Information Center

    Najita, Kazutoshi; Yuen, Paul C.

    1978-01-01

    Reviews the earthquake phenomenon and its possible relation to ionospheric disturbances. Discusses the basic physical principles involved and the methods upon which instrumentation is being developed for possible use in a tsunami disaster warning system. (GA)

  10. Mothers Coping With Bereavement in the 2008 China Earthquake: A Dual Process Model Analysis.

    PubMed

    Chen, Lin; Fu, Fang; Sha, Wei; Chan, Cecilia L W; Chow, Amy Y M

    2017-01-01

    The purpose of this study is to explore the grief experiences of mothers after they lost their children in the 2008 China earthquake. Informed by the dual process model, this study conducted in-depth interviews to explore how six bereaved mothers coped with such grief over a 2-year period. Right after the earthquake, these mothers suffered from intense grief. They primarily coped with loss-oriented stressors. As time passed, these mothers began to focus on restoration-oriented stressors to face changes in life. This coping trajectory was a dynamic and integral process in which bereaved mothers oscillated between loss- and restoration-oriented stressors. This study offers insight extending the existing empirical evidence for the dual process model.

  12. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has since been extended in time: only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a relation between the magnitude of the earthquake and the value of a function generated from those parameters. The results shown, including an animated data video, are a first approximation towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone.

  13. Performance of Irikura Recipe Rupture Model Generator in Earthquake Ground Motion Simulations with Graves and Pitarka Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur

    2017-09-01

    We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85-104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095-2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compare favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level for the two approaches occurs in the period band 1-3 s where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. 
The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of

  14. The Puerto Rico Seismic Network Broadcast System: A user friendly GUI to broadcast earthquake messages, to generate shakemaps and to update catalogues

    NASA Astrophysics Data System (ADS)

    Velez, J.; Huerfano, V.; von Hillebrandt, C.

    2007-12-01

    The Puerto Rico Seismic Network (PRSN) has historically provided locations and magnitudes for earthquakes in the Puerto Rico and Virgin Islands (PRVI) region. PRSN is the reporting authority for the region bounded by latitudes 17.0N to 20.0N and longitudes 63.5W to 69.0W. The main objective of the PRSN is to record, process, analyze, and research local, regional and teleseismic earthquakes, providing high-quality data and information that respond to the needs of the emergency management, academic and research communities, and the general public. The PRSN runs Earthworm software (Johnson et al, 1995) to acquire and write waveforms to disk for permanent archival. Automatic locations and alerts are generated for events in Puerto Rico, the Intra America Seas, and the Atlantic by the EarlyBird system (Whitmore and Sokolowski, 2002), which monitors PRSN stations as well as some 40 additional stations run by networks operating in North, Central and South America and other sites in the Caribbean. PRDANIS (Puerto Rico Data Analysis and Information System) software, developed by PRSN, supports manual locations and analyst review of automatic locations of events within the PRSN area of responsibility (AOR), using all the broadband, strong-motion and short-period waveforms. Rapidly available information regarding the geographic distribution of ground shaking, in relation to the population and infrastructure at risk, can assist emergency response communities in efficient and optimized allocation of resources following a large earthquake. The ShakeMap system developed by the USGS provides near-real-time maps of instrumental ground motions and shaking intensity and has proven effective in rapid assessment of the extent of shaking and potential damage after significant earthquakes (Wald, 2004). In Northern and Southern California, the Pacific Northwest, and the states of Utah and Nevada, ShakeMaps are used for emergency planning and response, loss

  15. POST Earthquake Debris Management - AN Overview

    NASA Astrophysics Data System (ADS)

    Sarkar, Raju

    Every year natural disasters, such as fires, floods, earthquakes, hurricanes, landslides, tsunamis, and tornadoes, challenge various communities of the world. Earthquakes strike with varying degrees of severity and pose both short- and long-term challenges to public service providers. Earthquakes generate shock waves and displace the ground along fault lines. These seismic forces can bring down buildings and bridges in a localized area and damage buildings and other structures in a far wider area. Secondary damage from fires, explosions, and localized flooding from broken water pipes can increase the amount of debris. Earthquake debris includes building materials, personal property, and sediment from landslides. The management of this debris, as well as of the waste generated during reconstruction, can pose significant challenges to national and local capacities. Debris removal is a major component of every post-earthquake recovery operation. Much of the debris generated by an earthquake is not hazardous. Soil, building material, and green waste, such as trees and shrubs, make up most of the volume of earthquake debris. If not disposed of safely and appropriately, these wastes not only create significant health problems and a very unpleasant living environment, but can also impose economic burdens on the reconstruction phase. In practice, most of the debris may be disposed of at landfill sites, reused as construction material, or recycled into useful commodities. Therefore, the debris clearance operation should adopt a geotechnical engineering approach, as an important post-earthquake issue, to control the quality of the incoming flow of potential soil materials. In this paper, an emergency management perspective on this geotechnical approach, taking into account the different criteria related to the operation's execution, is proposed by highlighting the key issues concerning the handling of the construction

  16. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example of applying an optimization process to select earthquake scenarios that best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
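    The catalogue-simulation step described above can be sketched in miniature. The following Python toy (all parameters hypothetical; the study's actual seismogenic source model is far richer) pairs Poissonian occurrence times with truncated Gutenberg-Richter magnitudes to build a synthetic catalogue on the scale mentioned (10,000 years, roughly 84,000 events):

```python
import math
import random

def sample_magnitude(b=1.0, m_min=5.0, m_max=8.5):
    """Inverse-transform draw from a doubly truncated Gutenberg-Richter
    magnitude distribution with b-value b."""
    beta = b * math.log(10.0)
    u = random.random()
    return m_min - math.log(1.0 - u * (1.0 - math.exp(-beta * (m_max - m_min)))) / beta

def simulate_catalogue(years, rate_per_year, seed=0):
    """Toy synthetic catalogue: Poissonian occurrence times (exponential
    inter-event times) paired with G-R magnitudes."""
    random.seed(seed)
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate_per_year)  # exponential waiting time
        if t > years:
            break
        events.append((t, sample_magnitude()))
    return events

# ~84,000 events over 10,000 years, echoing the scale used in the study
catalogue = simulate_catalogue(years=10_000, rate_per_year=8.4)
```

A catalogue like this is the raw input to the scenario-reduction step; each tuple would in practice also carry location, depth and fault parameters.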

  17. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts combine long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
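    A bathtub-shaped hazard of the kind invoked for the subordinate events can be illustrated with the additive Weibull distribution, one member of the generalized Weibull family; the parameter values below are purely illustrative:

```python
def additive_weibull_hazard(t, a=1.0, k1=0.5, b=0.05, k2=3.0):
    """Hazard rate h(t) of the additive Weibull model: a decreasing
    Weibull term (k1 < 1) dominates early, an increasing term (k2 > 1)
    dominates late, producing a bathtub shape in between."""
    return a * k1 * t ** (k1 - 1) + b * k2 * t ** (k2 - 1)

# Hazard is high just after a leader event (aftershock-like), dips, then
# rises again as the next leader event approaches (foreshock-like)
early, middle, late = (additive_weibull_hazard(x) for x in (0.1, 1.0, 3.0))
```

The high-early / low-middle / high-late pattern is exactly the clustering of subordinate events at both ends of an inter-leader interval that the abstract describes.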

  18. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship among earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using the elastic dislocation theory for which the displacement field is dependent on the slip distribution, fault geometry, and the elastic response and properties of the medium. Specifically, nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analysis of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely, nonuniform distribution of slip along the fault plane, have a significant effect on the local tsunami run-up. Numerical methods have been developed to address the realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
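    As a minimal illustration of long-wave propagation effects (linear shoaling only, not the nonlinear long-wave theory the chapter actually employs), Green's law gives a first-order estimate of tsunami amplitude growth as the wave moves into shallower water:

```python
def greens_law_amplitude(a_deep, h_deep, h_shallow):
    """Green's law: for a linear long wave conserving energy flux, amplitude
    scales with water depth as h**(-1/4) during shoaling."""
    return a_deep * (h_deep / h_shallow) ** 0.25

# A 1 m wave in 4000 m of water grows to about 4.5 m by the 10 m isobath
nearshore = greens_law_amplitude(1.0, 4000.0, 10.0)
```

Run-up itself, as the chapter notes, requires nonlinear modeling and realistic bathymetry; this scaling only conveys why deep-water amplitudes understate coastal impact.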

  19. Triggering Factor of Strong Earthquakes and Its Prediction Verification

    NASA Astrophysics Data System (ADS)

    Ren, Z. Q.; Ren, S. H.

    After 30 years' research, we have found that great earthquakes are triggered by the tide-generating force of the moon. It is not the tide-generating force of the classical viewpoint, but a non-classical one, which we call TGFR (Tide-Generation Forces' Resonance). TGFR strongly depends on the tide-generating force at the times of strange astronomical points (SAP). The SAP mostly occur when the moon and another celestial body are aligned with the earth along a straight line (with the same apparent right ascension or a 180° difference); the other SAP are the turning points of the moon's motion relative to the earth. Moreover, TGFR has four different types of effective areas. Our study indicates that a majority of earthquakes are triggered by the rare superimposition of TGFR effective areas. In China, the great earthquakes in the plain area of Hebei Province, Taiwan, Yunnan Province and Sichuan Province are triggered by decompression TGFR; other earthquakes, in Gansu Province, Ningxia Province and northwest of Beijing, are triggered by compression TGFR. The great earthquakes in Japan, California and southeast Europe are also triggered by compression TGFR, while in other parts of the world, such as the Philippines, Central American countries and West Asia, great earthquakes are triggered by decompression TGFR. We have carried out experimental imminent predictions combining the TGFR method with other earthquake-impending signals, such as those suggested by Professor Li Junzhi. The success ratio is about 40% (from our forecast reports to the China Seismological Administration). Thus we could say that great earthquakes can be predicted (including imminent earthquake prediction). Key words: imminent prediction; triggering factor; TGFR (Tide-Generation Forces' Resonance); TGFR compression; TGFR compression zone; TGFR decompression; TGFR decompression zone

  20. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1 min to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally during single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench, and not by afterslip localized at the fault, as is currently believed. Our model replicates the centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.
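    The rate-and-state friction ingredient of such models can be sketched as follows. This Python fragment (with illustrative laboratory-scale parameters, and a < b for velocity weakening, all assumed values) applies the Dieterich aging law to a velocity step: friction first jumps up (the direct effect) and then relaxes to a lower steady state as the state variable evolves:

```python
import math

def evolve_state(v, theta, dc, dt):
    """Dieterich aging law dθ/dt = 1 - vθ/dc, advanced one step using its
    exact exponential solution at constant sliding velocity v."""
    ss = dc / v                      # steady-state value of θ at velocity v
    return ss + (theta - ss) * math.exp(-v * dt / dc)

def friction(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=1e-4):
    """Rate-and-state friction coefficient (illustrative parameter values)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

# Velocity step from 1e-6 m/s to 1e-5 m/s:
v, dc = 1e-5, 1e-4
theta = dc / 1e-6                    # state equilibrated at the old velocity
mu_direct = friction(v, theta)       # direct effect: friction jumps up
for _ in range(1000):                # let the state variable evolve
    theta = evolve_state(v, theta, dc, dt=1.0)
mu_ss = friction(v, theta)           # new steady state: lower, since b > a
```

Because b > a here, faster slip ultimately weakens the fault, which is the essential instability that lets such models nucleate spontaneous earthquake sequences.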

  1. Development and Progress of Education for Earthquake Disaster

    NASA Astrophysics Data System (ADS)

    Usui, Hiromoto

    We experienced the great Hanshin-Awaji earthquake disaster around ten years ago. Recently, passing the memory of the disaster on to the next generation has become an important task. Since the occurrence of a huge earthquake is expected in the near future, it is important to widely teach the lessons of the great Hanshin-Awaji earthquake disaster to the next generation, and this educational activity is also important for the disaster mitigation strategy in Japan. In this project, the accumulated data of disaster memory are utilized to construct an educational system for earthquake disaster, and collaboration among Kobe University, local government, the city, civic groups and media organizations is exploited to characterize the educational system of earthquake disaster mitigation.

  2. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compress sensing methods

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is a rare mega earthquake to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method, which more accurately locates subevents (energy bursts) during earthquake rupture and determines rupture speeds. The total rupture duration of this earthquake is about 35 s, with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, slightly lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and rupture process. The results from the two methods are generally similar. In the next step, we will use data from dense arrays in southwest China and from global stations in order to study the rupture process of this deep mega earthquake more comprehensively. Reference [1] Yao H, Shearer P M, Gerstoft P. 
Subevent location and rupture imaging using iterative backprojection for
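    The core of the backprojection idea, stripped of the iterative refinements of IBP, is a delay-and-sum stack. The sketch below uses a synthetic 1-D toy (station positions, wave speed, and source time are all made-up numbers) to show beam power peaking at the true source position:

```python
import numpy as np

def beam_power(records, travel_times, dt):
    """Delay-and-sum backprojection stack: shift each station's record by the
    predicted travel time to a candidate source and sum. Coherent energy
    (hence beam power) peaks when the candidate matches the true source."""
    shifts = [int(round(tt / dt)) for tt in travel_times]
    n = min(len(r) - k for r, k in zip(records, shifts))
    stack = np.zeros(n)
    for r, k in zip(records, shifts):
        stack += r[k:k + n]
    return float(np.sum(stack ** 2))

# Toy 1-D setup: stations at x (km), source at x = 0 firing at t = 5 s,
# wave speed v = 5 km/s, Gaussian source pulse
dt, v, t_src = 0.01, 5.0, 5.0
stations = [-60.0, 50.0, 120.0]
t = np.arange(0.0, 60.0, dt)
records = [np.exp(-(t - (t_src + abs(x) / v)) ** 2 / 0.02) for x in stations]

# Scan candidate source positions; power should peak at the true x = 0
powers = {x0: beam_power(records, [abs(x - x0) / v for x in stations], dt)
          for x0 in (-20.0, 0.0, 20.0)}
```

Real applications replace the toy travel times with 3-D Earth-model predictions and scan a grid of candidate subevent locations and times, but the stacking principle is the same.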

  3. Contribution of Satellite Gravimetry to Understanding Seismic Source Processes of the 2011 Tohoku-Oki Earthquake

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo

    2011-01-01

    The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites orbiting some hundreds km away above the ground, such as GRACE, due to coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements the analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7-16 degrees plus or minus 2 degrees) and, simultaneously, decreasing moment magnitude (9.17-9.02 plus or minus 0.04) with increasing source depth (15-24 kilometers). The GRACE solution includes the cumulative moment released over a month and demonstrates a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms to improve our understanding of the physics of megathrusts.

  4. The Quanzhou large earthquake: environment impact and deep process

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Gao*, R.; Ye, Z.; Wang, C.

    2017-12-01

    The Quanzhou earthquake is the largest historical earthquake on China's southeast coast. The ancient city of Quanzhou and its adjacent areas suffered serious damage. Analysis of the impact of the Quanzhou earthquake on human activities, the ecological environment and social development will provide an example for research on the interaction between environment and humans. According to historical records, on the night of December 29, 1604, a Ms 8.0 earthquake occurred in the sea east of Quanzhou (25.0°N, 119.5°E) with a focal depth of 25 kilometers. Its effects extended up to 220 kilometers from the epicenter and caused serious damage. Quanzhou, known as one of the world's largest trade ports during the Song and Yuan periods, was heavily destroyed by this earthquake. The destruction of the ancient city was very serious and widespread. The city wall collapsed in Putian, Nanan, Tongan and other places. The East and West Towers of Kaiyuan Temple, famous in history for their magnificent architecture, were seriously destroyed. An enormous earthquake can therefore exert devastating effects on human activities and social development. It is estimated that an earthquake greater than Ms 5.0 in the economically developed coastal areas of China can directly cause economic losses of more than one hundred million yuan. This devastating large earthquake that severely destroyed the Quanzhou city was triggered under a tectonic-extensional circumstance. In this coastal area of Fujian Province, the crust gradually thins eastward from inland to coast (less than 29 km thick beneath the coast), the lithosphere is also rather thin (60-70 km), and the Poisson's ratio of the crust here appears relatively high. The historical Quanzhou earthquake was probably correlated with the NE-striking Littoral Fault Zone, which is characterized by right-lateral slip and exhibits the most active seismicity in the coastal area of Fujian. Meanwhile, tectonic

  5. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a
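    The stochastic, self-affine slip generation described above can be sketched by spectral synthesis: random phases combined with an amplitude spectrum that falls off as k**-2 beyond a corner wavenumber. The 1-D Python toy below (the corner wavenumber and target mean slip are made-up numbers, and the study itself works with 2-D slip distributions) illustrates the idea:

```python
import numpy as np

def self_affine_slip(n=256, corner=5.0, seed=1):
    """Spectral synthesis of a 1-D self-affine slip profile: random phases
    with amplitude flat below a corner wavenumber and falling off as k**-2
    above it (consistent with an omega-squared far-field spectrum)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n) * n                   # wavenumber index 0..n/2
    amp = 1.0 / (1.0 + (k / corner) ** 2)        # k**-2 falloff beyond corner
    spec = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k.size))
    spec[0] = amp[0]                             # keep the mean real-valued
    slip = np.fft.irfft(spec, n)
    return slip - slip.min()                     # shift so slip is non-negative

slip = self_affine_slip()
slip *= 2.5 / slip.mean()   # rescale mean slip to fix the seismic moment
```

Changing the random seed yields a different slip pattern with the same spectrum and moment, which is exactly how an ensemble of N = 100 equal-moment scenarios can be generated.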

  6. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    The Longmen Shan, located at the junction of the eastern margin of the Tibet plateau and the Sichuan basin, is a typical area for studying the deformation pattern of the Tibet plateau. Following the 2008 Mw 7.9 Wenchuan earthquake (WE), which ruptured the Longmen Shan Fault (LSF), a great number of observations and studies in geology, geophysics, and geodesy have been carried out for this region, with results published successively in recent years. Using a 2D viscoelastic finite element model and introducing the rate-state friction law on the fault, this study models the earthquake recurrence process and the dynamic evolutionary processes of a 10,000-year earthquake cycle. By analyzing the displacement, velocity, stress, strain energy and strain energy increment fields, this work reaches the following conclusions: (1) The maximum coseismic displacement on the fault is at the surface, and the damage on the hanging wall is much more serious than that on the foot wall of the fault. If the detachment layer is absent, the coseismic displacement is smaller and the relative displacement between the hanging wall and foot wall is also smaller. (2) In every stage of the earthquake cycle, the velocities (especially the vertical velocities) on the hanging wall of the fault are larger than those on the foot wall, and the values and distribution patterns of the velocity fields are similar. In the locking stage prior to the earthquake, however, the velocities in the crust and the relative velocities between hanging wall and foot wall decrease. For the model without the detachment layer, the velocities in the crust in the post-seismic stage are much larger than those in other stages. (3) The maximum principal stress and the maximum shear stress concentrate around the junction of the fault and the detachment layer, so the earthquake would nucleate and start there. (4) The strain energy density distribution patterns in the stages of the earthquake cycle are similar. 
There are two

  7. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the earth's crust and mantle as large-scale discrete matter based on the principles of granular physics and existing experimental observations. The main outcomes are: A granular model of the structure and movement of the earth's crust and mantle is established. The formation mechanism of the tectonic forces that cause earthquakes and a model for the propagation of precursory information are proposed. Properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and the principle of ways to detect effective seismic precursors is elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and obtain a new understanding, different from the traditional seismological viewpoint.

  8. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
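    A minimal version of Hurst's rescaled-range (R/S) analysis used in the study can be written as follows; it is applied here to white noise (which has no memory, so the estimate should fall near H = 0.5) rather than to cumulative seismic-moment series, and the window sizes are arbitrary choices:

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window length) over non-overlapping windows."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())        # cumulative deviations
            if w.std() > 0:
                rs.append((dev.max() - dev.min()) / w.std())
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    return float(np.polyfit(log_n, log_rs, 1)[0])

rng = np.random.default_rng(0)
h = hurst_rs(rng.standard_normal(4096))   # memoryless input: H near 0.5
```

A persistent, long-memory series of the kind the authors report (H ≈ 0.87) would instead yield a slope well above 0.5 under the same procedure.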

  9. Practical Applications for Earthquake Scenarios Using ShakeMap

    NASA Astrophysics Data System (ADS)

    Wald, D. J.; Worden, B.; Quitoriano, V.; Goltz, J.

    2001-12-01

    estimates that will substantially improve over empirical relations at these frequencies will require developing cost-effective numerical tools for proper theoretical inclusion of known complex ground motion effects. Current efforts underway must continue in order to obtain site, basin, and deeper crustal structure, and to characterize and test 3D earth models (including attenuation and nonlinearity). In contrast, longer period synthetics (>2 sec) are currently being generated in a deterministic fashion to include 3D and shallow site effects, an improvement on empirical estimates alone. As progress is made, we will naturally incorporate such advances into the ShakeMap scenario earthquake and processing methodology. Our scenarios are currently used heavily in emergency response planning and loss estimation. Primary users include city, county, state and federal government agencies (e.g., the California Office of Emergency Services, FEMA, the County of Los Angeles) as well as emergency response planners and managers for utilities, businesses, and other large organizations. We have found the scenarios are also of fundamental interest to many in the media and the general community interested in the nature of the ground shaking likely experienced in past earthquakes as well as effects of rupture on known faults in the future.

  10. CISN ShakeAlert Earthquake Early Warning System Monitoring Tools

    NASA Astrophysics Data System (ADS)

    Henson, I. H.; Allen, R. M.; Neuhauser, D. S.

    2015-12-01

    CISN ShakeAlert is a prototype earthquake early warning system being developed and tested by the California Integrated Seismic Network. The system has recently been expanded to support redundant data processing and communications. It now runs on six machines at three locations with ten Apache ActiveMQ message brokers linking together 18 waveform processors, 12 event association processes and 4 Decision Module alert processes. The system ingests waveform data from about 500 stations and generates many thousands of triggers per day, from which a small portion produce earthquake alerts. We have developed interactive web browser system-monitoring tools that display near real time state-of-health and performance information. This includes station availability, trigger statistics, communication and alert latencies. Connections to regional earthquake catalogs provide a rapid assessment of the Decision Module hypocenter accuracy. Historical performance can be evaluated, including statistics for hypocenter and origin time accuracy and alert time latencies for different time periods, magnitude ranges and geographic regions. For the ElarmS event associator, individual earthquake processing histories can be examined, including details of the transmission and processing latencies associated with individual P-wave triggers. Individual station trigger and latency statistics are available. Detailed information about the ElarmS trigger association process for both alerted events and rejected events is also available. The Google Web Toolkit and Map API have been used to develop interactive web pages that link tabular and geographic information. Statistical analysis is provided by the R-Statistics System linked to a PostgreSQL database.

  11. Performance of Irikura recipe rupture model generator in earthquake ground motion simulations with Graves and Pitarka hybrid approach

    USGS Publications Warehouse

    Pitarka, Arben; Graves, Robert; Irikura, Kojiro; Miyake, Hiroe; Rodgers, Arthur

    2017-01-01

    We analyzed the performance of the Irikura and Miyake (Pure and Applied Geophysics 168(2011):85–104, 2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground motion simulation methodology of Graves and Pitarka (Bulletin of the Seismological Society of America 100(5A):2095–2123, 2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0–20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (Bulletin of the Seismological Society of America, 2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground motion prediction equations (GMPEs) over the frequency band 0.1–10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level between the two approaches occurs in the period band 1–3 s, where the IM2011 motions are about 20–30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1–3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood.

  12. Dilational processes accompanying earthquakes in the Long Valley Caldera

    USGS Publications Warehouse

    Dreger, Douglas S.; Tkalcic, Hrvoje; Johnston, M.

    2000-01-01

    Regional distance seismic moment tensor determinations and broadband waveforms of moment magnitude 4.6 to 4.9 earthquakes from a November 1997 Long Valley Caldera swarm, during an inflation episode, display evidence of anomalous seismic radiation characterized by non-double couple (NDC) moment tensors with significant volumetric components. Observed coseismic dilation suggests that hydrothermal or magmatic processes are directly triggering some of the seismicity in the region. Similarity in the NDC solutions implies a common source process, and the anomalous events may have been triggered by net fault-normal stress reduction due to high-pressure fluid injection or pressurization of fluid-saturated faults due to magmatic heating.

  13. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul

    2017-12-01

    This study develops tsunami evacuation plans for Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect the asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans for Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis, with maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact of ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from stochastic tsunami simulation for future tsunamigenic events.
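The stochastic scenario generation described above, sampling source parameters from probabilistic scaling relations before synthesizing slip, can be sketched as follows. The scaling coefficients, scatter, and rigidity below are illustrative assumptions for a minimal example, not the values used by the authors.

```python
import math
import random

def sample_source(mw, rng):
    """Draw one hypothetical rupture scenario for moment magnitude mw.

    Fault area follows a generic scaling law log10(A[km^2]) = mw - 4.0
    with lognormal scatter; mean slip then balances the seismic moment.
    All coefficients are illustrative, not those of the cited study.
    """
    mu = 3.0e10                                    # crustal rigidity, Pa (assumed)
    m0 = 10 ** (1.5 * mw + 9.05)                   # seismic moment, N*m
    log_area = (mw - 4.0) + rng.gauss(0.0, 0.2)    # log10 area with scatter
    area_m2 = (10 ** log_area) * 1e6               # km^2 -> m^2
    length_km = math.sqrt((10 ** log_area) * 2.0)  # assume aspect ratio L = 2W
    width_km = length_km / 2.0
    mean_slip_m = m0 / (mu * area_m2)              # D = M0 / (mu * A)
    return {"length_km": length_km, "width_km": width_km,
            "mean_slip_m": mean_slip_m}

# Generate an ensemble of 300 scenarios, mirroring the study's scenario count.
rng = random.Random(42)
scenarios = [sample_source(8.75, rng) for _ in range(300)]
slips = sorted(s["mean_slip_m"] for s in scenarios)
print(f"median mean slip for Mw 8.75: {slips[len(slips) // 2]:.1f} m")
```

Each scenario would then feed a slip-distribution synthesizer and a tsunami propagation solver; the point of the ensemble is that hazard metrics (inundation height, depth) become distributions rather than single values.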

  14. Post-Earthquake Debris Management — An Overview

    NASA Astrophysics Data System (ADS)

    Sarkar, Raju

    Every year natural disasters, such as fires, floods, earthquakes, hurricanes, landslides, tsunamis, and tornadoes, challenge various communities of the world. Earthquakes strike with varying degrees of severity and pose both short- and long-term challenges to public service providers. Earthquakes generate shock waves and displace the ground along fault lines. These seismic forces can bring down buildings and bridges in a localized area and damage buildings and other structures over a far wider area. Secondary damage from fires, explosions, and localized flooding from broken water pipes can increase the amount of debris. Earthquake debris includes building materials, personal property, and sediment from landslides. The management of this debris, as well as of the waste generated during reconstruction, can place significant demands on national and local capacities. Debris removal is a major component of every post-earthquake recovery operation. Much of the debris generated by an earthquake is not hazardous: soil, building material, and green waste, such as trees and shrubs, make up most of its volume. These wastes not only create significant health problems and a very unpleasant living environment if not disposed of safely and appropriately, but can also impose economic burdens on the reconstruction phase. In practice, most of the debris may be disposed of at landfill sites, reused as material for construction, or recycled into useful commodities. Therefore, the debris clearance operation should adopt a geotechnical engineering approach as an important post-earthquake measure to control the quality of the incoming flow of potential soil materials. In this paper, an emergency management perspective on this geotechnical approach, taking into account the different criteria related to the execution of the operation, is proposed by highlighting the key issues concerning the handling of construction debris.

  15. Variation of nitric oxide concentration before the Kobe earthquake, Japan

    NASA Astrophysics Data System (ADS)

    Matsuda, Tokiyoshi; Ikeya, Motoji

    The variation and spatial distribution of the atmospheric concentration of nitric oxide (NO) near the epicenter of the Kobe earthquake of 5:46 local time, 17 January 1995, have been studied using data from monitoring stations of the local environmental protection agencies. The concentration of NO 8 days before the earthquake was 199 ppb, about ten times larger than the average peak level of 19 ppb, coinciding with retrospectively reported precursors: earthquake lightning, an increase of radon concentration in well water, and an increase in the counts of electromagnetic (EM) signals. The reported thunderstorm over the Japan Sea, about 150 km away, was too far for thunder-generated NO to reach the epicentral area. The concentration of NO was also found to have increased before other major earthquakes (magnitude > 5.0) in Japan. Atmospheric discharges driven by electric charges or EM waves before earthquakes may have generated NO. However, the generation of NO by fuel combustion from human activity soon after holidays is enormously high every year, which makes it difficult to clearly link the increase with the earthquakes; the increase soon after the earthquake due to traffic jams is clear. The concentration of NO should be monitored at several sites away from human activity, both as background data on natural variation and to study its generation in a seismic area before a large earthquake.

  16. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and, more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  17. Stochastic strong motion generation using slip model of 21 and 22 May 1960 mega-thrust earthquakes in the main cities of Central-South Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Ojeda, J.; DelCampo, F., Sr.; Pasten, C., Sr.; Otarola, C., Sr.; Silva, R., Sr.

    2017-12-01

    In May 1960, the most unusual seismic sequence ever registered instrumentally took place. The Mw 8.1 Concepción earthquake occurred on 21 May 1960. The aftershocks of this event apparently migrated to the south-east, and the Mw 9.5 Valdivia mega-earthquake occurred 33 hours later. The structural damage produced by both events is not larger than that of other earthquakes in Chile, and is lower than that of crustal earthquakes of smaller magnitude. The damage was concentrated at sites with shallow soil layers of low shear wave velocity (Vs). However, no seismological station recorded this sequence. For that reason, we generate synthetic strong-motion acceleration time histories for the main cities affected by these events. We use 155 points of vertical surface displacement compiled by Plafker and Savage in 1968, and considering the observations of these authors and of local residents we separated the uplift and subsidence information associated with the first, Mw 8.1 earthquake and the second, Mw 9.5 mega-earthquake. We consider the elastic deformation propagation, assume a realistic lithosphere geometry, and apply a Bayesian method that maximizes the a posteriori probability density to obtain the slip distribution. Subsequently, we use a stochastic method of strong-motion generation considering the finite-fault models obtained for both earthquakes. We take into account the incidence angle of the ray at the surface, the free-surface effect, the energy partition among P, SV and SH waves, the dynamic corner frequency, and the influence of site effects. The results show that the Mw 8.1 earthquake occurred down-dip in the slab, and its strong-motion records are similar to those of other Chilean earthquakes such as the Mw 7.7 Tocopilla earthquake (2007). For the Mw 9.5 earthquake we obtain synthetic acceleration time histories with PGA values around 0.8 g in cities near the maximum asperity or with low-velocity soil layers. This allows us to conclude that the strong-motion records are strongly influenced by the shallow soil deposits.
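A core ingredient of stochastic strong-motion simulation of this kind is an omega-squared source spectrum whose corner frequency depends on seismic moment and stress drop (Brune-type scaling). The sketch below uses the classic coefficient and illustrative parameter values (shear velocity, stress drop, moment-magnitude relation); these are generic assumptions, not the values used by the authors.

```python
import math

def corner_frequency_hz(m0_dyne_cm, beta_km_s=3.5, stress_drop_bar=100.0):
    """Brune corner frequency: fc = 4.9e6 * beta * (dsigma / M0)^(1/3).

    Units follow the classic convention: beta in km/s, stress drop in bar,
    seismic moment in dyne*cm; the result is in Hz.
    """
    return 4.9e6 * beta_km_s * (stress_drop_bar / m0_dyne_cm) ** (1.0 / 3.0)

def source_acc_spectrum(f, m0_dyne_cm, fc):
    """Omega-squared far-field acceleration spectral shape (unnormalized):
    grows as f^2 below the corner frequency, flattens above it."""
    return m0_dyne_cm * (2.0 * math.pi * f) ** 2 / (1.0 + (f / fc) ** 2)

# Valdivia-scale moment via an assumed relation M0 = 10^(1.5*Mw + 16.05) dyne*cm.
m0 = 10 ** (1.5 * 9.5 + 16.05)
fc = corner_frequency_hz(m0)
print(f"corner frequency for Mw 9.5 ~ {fc:.4f} Hz")
```

In a full finite-fault stochastic simulation, each subfault gets its own (dynamic) corner frequency and the spectra are shaped by path attenuation and site terms before being paired with windowed Gaussian noise.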

  18. Understanding continental megathrust earthquake potential through geological mountain building processes: an example in Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Zhang, Zhen; Wang, Liangshu; Leroy, Yves; shi, Yaolin

    2017-04-01

    How to reconcile continental megathrust earthquake characteristics, for instance by mapping large-to-great earthquake sequences onto the geological mountain-building process or by partitioning seismic and aseismic slip, is fundamental and remains unclear. Here, we address these issues by focusing on a typical continental collisional belt, the great Nepal Himalaya. We first show that refined Nepal Himalaya thrusting sequences, with an accurate definition of the large-earthquake cycle scale, provide new geodynamical hints on long-term earthquake potential, in association with either the seismic-aseismic slip partition, up to the interpretation of the binary interseismic coupling pattern on the Main Himalayan Thrust (MHT), or the large-great earthquake classification via seismic cycle patterns on the MHT. Subsequently, sequential limit analysis is adopted to retrieve the detailed thrusting sequences of the Nepal Himalaya mountain wedge. Our model results exhibit an apparent thrusting concentration phenomenon, with four thrusting clusters, termed thrusting 'families', that facilitate the development of distinct sub-structural regions. Within the hinterland thrusting family, the total aseismic shortening and the corresponding spatio-temporal release pattern are revealed by mapping projection. In the other three families, mapping projection delivers long-term large (M<8) to great (M>8) earthquake recurrence information, including total lifespans, frequencies, and large-great earthquake alternation information obtained by identifying rupture distances along the MHT. This partition appears to be universal in continental-continental collisional orogenic belts with an identified interseismic coupling pattern, but is not applicable in continental-oceanic megathrust contexts.

  19. Sensing the earthquake

    NASA Astrophysics Data System (ADS)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  20. The Early Warning System (EWS) as a First Stage to Generate and Develop a Shake Map for Bucharest for Deep Vrancea Earthquakes

    NASA Astrophysics Data System (ADS)

    Marmureanu, G.; Ionescu, C.; Marmureanu, A.; Grecu, B.; Cioflan, C.

    2007-12-01

    The EWS developed by NIEP is the first European system for real-time early detection and warning of seismic waves in the case of strong deep earthquakes. The EWS uses the time interval (28-32 seconds) between the moment when an earthquake is detected by the borehole and surface accelerometer network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area to deliver timely, integrated information that enables actions to be taken before the main destructive shaking takes place. The early warning system is viewed as part of a real-time information system that provides rapid information about an impending earthquake hazard to the public and to disaster relief organizations both before (early warning) and after (shake map) a strong earthquake. This product complements another product under development at the National Institute for Earth Physics: the shake map, a representation of the ground shaking produced by an event, which will be generated automatically following large Vrancea earthquakes. Bucharest is located in the central part of the Moesian platform (age: Precambrian and Paleozoic) in the Romanian Plain, about 140 km from the Vrancea area. Above a Cretaceous and a Miocene deposit (with its bottom at roughly 1,400 m depth), a Pliocene shallow-water deposit (~700 m thick) was settled. The surface geology consists mainly of Quaternary alluvial deposits; loess later covered these deposits, and the two rivers crossing the city (Dambovita and Colentina) carved the present landscape. During the last century Bucharest suffered heavy damage and casualties due to the 1940 (Mw = 7.7) and 1977 (Mw = 7.4) Vrancea earthquakes; for example, 32 tall buildings collapsed and more than 1,500 people died during the 1977 event.
The innovation relative to comparable systems worldwide is that NIEP will use the EWS to generate a virtual shake map for Bucharest (140 km from the epicentre) immediately after the magnitude is estimated.

  1. The 40th anniversary of the 1976 Friuli earthquake: a look back for empowering the next generation to the reduction of seismic risk

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Barnaba, Carla; Peruzza, Laura

    2016-04-01

    On 6 May 1976 an Ms=6.5 earthquake struck the Friuli area (NE Italy), causing about 1,000 casualties and widespread destruction. This event is the largest recorded so far in Northern Italy. After 40 years, the memory of the devastating earthquake remains in the urbanization and in the people who lived through that dreadful experience. However, memories tend to vanish as the quake survivors pass away, and the celebration of the anniversary is a good opportunity to refresh the earthquake's history and the awareness of living in a seismically prone area. As seismologists, we believe that seismic risk reduction starts with the education of the next generation. For this reason, we decided to celebrate the 40th anniversary by planning a special educational campaign, mainly devoted to schools and young people, which will also give us the opportunity to check and, if necessary, raise the level of seismic awareness of the local communities. The activities started in May 2015, with labs and lessons held in some schools, and the creation of a blog (https://versoi40anni.wordpress.com) to collect news, photos, videos and all the materials related to the campaign. From February to May 2016, one day per week, we will open our seismological lab to school visits, so that students can meet the seismologists, and we will cooperate with local science museums to enlarge the training offer on earthquake topics. Continuing the efforts of our previous educational projects, the students of a school located in Gemona del Friuli, one of the small towns destroyed by the 1976 earthquake, will be deeply involved in experimental activities, such as seismic noise measurements for microzonation studies, so as to be an active part of the seismic mitigation process. These and other activities developed for the celebration of the 40th anniversary of the Friuli earthquake will be illustrated in this presentation.

  2. Earthquake Emergency Education in Dushanbe, Tajikistan

    ERIC Educational Resources Information Center

    Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.

    2010-01-01

    We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…

  3. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw 8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 of the 53 analysed stations. Two of these stations are located on the northeast coast of Madagascar and the other five in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity at stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering; however, it also exists near the East African Rift System structures, where no triggering was observed. Our results suggest that factors other than tectonic regime and geothermalism alone are needed to explain the mechanisms that underlie earthquake triggering.
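The β-statistic used in studies like this one compares the observed event count in a trigger window against the count expected from the background rate, normalized by its binomial standard deviation. A minimal sketch, with invented counts rather than the study's data:

```python
import math

def beta_statistic(n_window, n_total, t_window, t_total):
    """beta = (Na - N*p) / sqrt(N * p * (1 - p)), with p = t_window / t_total.

    Na is the event count in the window of interest, N the total count over
    the whole record. Values of |beta| >~ 2 are commonly read as a
    statistically significant rate change.
    """
    p = t_window / t_total
    expected = n_total * p
    return (n_window - expected) / math.sqrt(n_total * p * (1.0 - p))

# Hypothetical station: 12 of 40 catalogued events fall inside a 1-day
# window after the teleseism, out of a 20-day record.
b = beta_statistic(12, 40, 1.0, 20.0)
print(f"beta = {b:.2f}")
```

With only 2 events expected in the window, observing 12 gives a β far above 2, i.e. a clear rate increase at that hypothetical station.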

  4. Source process of the Sikkim earthquake of 18 September 2011, inferred from teleseismic body-wave inversion

    NASA Astrophysics Data System (ADS)

    Earnest, A.; Sunil, T. C.

    2014-12-01

    The recent earthquake of Mw 6.9 occurred on 18 September 2011 in the Sikkim-Nepal border region. The hypocentral parameters determined by the Indian Meteorological Department place the epicentre at 27.7°N, 88.2°E with a focal depth of 58 km, located close to the north-western terminus of the Tista lineament. The reported aftershocks are linearly distributed between the Tista and Golapara lineaments. Microscopic and geomorphological studies infer dextral strike-slip faulting, possibly along a NW-SE oriented fault. Landslides caused by this earthquake are distributed along the Tista lineament. On the basis of the aftershock distribution, Kumar et al. (2012) have suggested a possible NW orientation of the causative fault plane. The epicentral region of Sikkim, bordered by Nepal, Bhutan and Tibet, comprises a segment of relatively low seismicity in the 2500 km stretch of the active Himalayan Belt. The north Sikkim earthquake was felt in most parts of Sikkim and eastern Nepal; it killed more than 100 people and caused damage to buildings, roads and communication infrastructure. Through this study we focus on the earthquake source parameters and the kinematic rupture process of this particular event. We used teleseismic body waveforms to determine the rupture pattern of the earthquake. Seismic rupture patterns are generally complex, and the result can be interpreted in terms of a distribution of asperities and barriers on the fault plane (Kikuchi and Kanamori, 1991). The methodology we adopted is based on the teleseismic body-wave inversion of Kikuchi and Kanamori (1982, 1986 and 1991). We used teleseismic P-wave records observed at distances between 50° and 90° with a good signal-to-noise ratio, in order to avoid upper mantle and core triplications and to limit the path length within the crust. Synthetic waveforms were generated, with a better fit obtained using triangular source-time functions.

  5. Development of a borehole stress meter for studying earthquake predictions and rock mechanics, and stress seismograms of the 2011 Tohoku earthquake ( M 9.0)

    NASA Astrophysics Data System (ADS)

    Ishii, Hiroshi; Asai, Yasuhiro

    2015-02-01

    Although precursory signs of an earthquake can occur before the event, it is difficult to observe such signs with precision, especially on the Earth's surface, where artificial noise and other factors complicate signal detection. One possible solution to this problem is to install monitoring instruments in the deep bedrock where earthquakes are likely to begin. When evaluating earthquake occurrence, it is necessary to elucidate the processes of stress accumulation in a medium and its subsequent release as a fault (crack) is generated, and to do so, the stress must be observed continuously. However, continuous observations of stress have not yet been implemented in earthquake monitoring programs. Strain is a secondary physical quantity whose variation depends on the elastic coefficient of the medium, and it can yield potentially valuable information as well. This article describes the development of a borehole stress meter capable of recording both stress and strain continuously at a depth of about 1 km. Specifically, this paper introduces the design principles of the stress meter as well as its actual structure. It also describes a newly developed calibration procedure and the results obtained to date for stress and strain studies of deep boreholes at three locations in Japan. As an example of the observations, records of stress seismic waveforms generated by the 2011 Tohoku earthquake (M 9.0) are presented. The results demonstrate that the stress meter data have sufficient precision and reliability.

  6. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10^19 Pa s and ηM > 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
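The two-sample Kolmogorov-Smirnov comparison at the heart of this approach reduces to the maximum gap between two empirical CDFs; if the statistic exceeds the critical value for the chosen α, the model's predicted slip-rate distribution is rejected. A bare sketch with synthetic numbers (not the 15 fault datasets):

```python
def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    between the empirical CDFs of samples x and y."""
    xs, ys = sorted(x), sorted(y)
    pooled = sorted(set(xs + ys))
    d = 0.0
    for v in pooled:
        fx = sum(1 for a in xs if a <= v) / len(xs)   # empirical CDF of x at v
        fy = sum(1 for b in ys if b <= v) / len(ys)   # empirical CDF of y at v
        d = max(d, abs(fx - fy))
    return d

# Sanity checks: identical samples give D = 0, disjoint samples give D = 1.
assert ks_two_sample([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_two_sample([1, 2, 3], [10, 20, 30]) == 1.0

# Hypothetical observed vs model-predicted slip rates (mm/yr).
print(ks_two_sample([1.0, 2.0, 3.0, 4.0], [2.5, 3.5, 4.5, 5.5]))
```

In practice one would use a library routine (e.g. a standard statistics package's two-sample K-S test) that also returns the p-value, but the statistic itself is exactly this CDF gap.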

  7. Earthquake initiation and thermal shear instability in the Hindu Kush intermediate-depth nest

    NASA Astrophysics Data System (ADS)

    Poli, Piero; Prieto, German; Rivera, Efrain; Ruiz, Sergio

    2016-02-01

    Intermediate-depth earthquakes often occur along subducting lithosphere, but despite their ubiquity the physical mechanism responsible for promoting brittle or brittle-like failure is not well constrained. Large concentrations of intermediate-depth earthquakes have been found to be related to slab break-off, slab drip, and slab tears. The intermediate-depth Hindu Kush nest is one of the most seismically active regions in the world and is associated with a weak region undergoing an ongoing slab detachment process. Here we study relocated seismicity in the nest to constrain the geometry of the shear zone at the top of the detached slab. The analysis of the rupture process of the Mw 7.5 Afghanistan 2015 earthquake and of several other well-recorded events over the past 25 years shows an initially slow, highly dissipative rupture, followed by a dramatic dynamic frictional stress reduction and correspondingly large energy radiation. These properties are typical of thermally driven rupture processes. We infer that thermal shear instabilities are a leading mechanism for the generation of intermediate-depth earthquakes, especially in the presence of a weak zone subjected to large strain accumulation due to an ongoing detachment process.

  8. The influence of one earthquake on another

    NASA Astrophysics Data System (ADS)

    Kilb, Deborah Lyman

    1999-12-01

    Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses of an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models, and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine if theoretical models of static Coulomb stress changes correctly predict the fault plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose a triggering threshold failure stress change exists, above which the earthquake nucleation process begins although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. 
Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

  9. A Bayesian analysis of the 2016 Pedernales (Ecuador) earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Gombert, B.; Duputel, Z.; Jolivet, R.; Rivera, L. A.; Simons, M.; Jiang, J.; Liang, C.; Fielding, E. J.

    2017-12-01

    The 2016 Mw = 7.8 Pedernales earthquake is the largest event to strike Ecuador since 1979. Long period W-phase and Global CMT solutions suggest that slip is not perpendicular to the trench axis, in agreement with the convergence obliquity of the Ecuadorian subduction. In this study, we propose a new co-seismic kinematic slip model obtained from the joint inversion of multiple observations in an unregularized and fully Bayesian framework. We use a comprehensive static dataset composed of several InSAR scenes, GPS static offsets, and tsunami waveforms from two nearby DART stations. The kinematic component of the rupture process is constrained by an extensive network of High-Rate GPS and accelerometers. Our solution includes the ensemble of all plausible models that are consistent with our prior information and fit the available observations within data and prediction uncertainties. We analyse the source process in light of the historical seismicity, in particular the Mw = 7.8 1942 earthquake, whose rupture extent overlaps with the 2016 event. In addition, we conduct a probabilistic comparison of co-seismic slip with a stochastic interseismic coupling model obtained from GPS data, shedding light on the processes at play within the Ecuadorian subduction margin.

  10. Injection-induced earthquakes

    USGS Publications Warehouse

    Ellsworth, William L.

    2013-01-01

    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  11. Earthquake Warning Performance in Vallejo for the South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Wurman, G.; Price, M.

    2014-12-01

    In 2002 and 2003, Seismic Warning Systems, Inc. installed first-generation QuakeGuardTM earthquake warning devices at all eight fire stations in Vallejo, CA. These devices are designed to detect the P-wave of an earthquake and initiate predetermined protective actions if the impending shaking is estimated at approximately Modified Mercalli Intensity V or greater. At the Vallejo fire stations the devices were set up to sound an audio alert over the public address system and to command the equipment bay doors to open. In August 2014, after more than 11 years of operating in the fire stations with no false alarms, the five units that were still in use triggered correctly on the MW 6.0 South Napa earthquake, less than 16 km away. The audio alert sounded in all five stations, providing fire fighters with 1.5 to 2.5 seconds of warning before the arrival of the S-wave, and the equipment bay doors opened in three of the stations. In one station the doors were disconnected from the QuakeGuard device, and another station lost power before the doors opened completely. These problems highlight just a small portion of the complexity associated with realizing actionable earthquake warnings. The issues experienced in this earthquake have already been addressed in subsequent QuakeGuard product generations, with downstream connection monitoring and backup power for critical systems. The fact that the fire fighters in Vallejo were afforded even two seconds of warning at these epicentral distances results from the design of the QuakeGuard devices, which focuses on rapid false-positive rejection and ground motion estimates. We discuss the performance of the ground motion estimation algorithms, with an emphasis on the accuracy and timeliness of the estimates at close epicentral distances.
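
    The 1.5 to 2.5 seconds of warning at under 16 km follows directly from the S-P travel-time difference. A minimal sketch, assuming typical crustal velocities (Vp = 6.0 km/s, Vs = 3.5 km/s) that are not stated in the abstract:

```python
# Back-of-envelope warning time for P-wave-based alerting: the usable window
# is the S-P travel-time difference minus any processing delay. The velocities
# Vp = 6.0 km/s and Vs = 3.5 km/s are typical crustal values assumed here,
# not numbers from the abstract.

def warning_time(distance_km, vp=6.0, vs=3.5, processing_delay_s=0.0):
    """Seconds between P-wave detection (plus delay) and S-wave arrival."""
    return distance_km * (1.0 / vs - 1.0 / vp) - processing_delay_s

# At ~16 km epicentral distance the window is under two seconds, consistent
# with the 1.5-2.5 s of warning reported in Vallejo:
print(round(warning_time(16.0), 1))  # 1.9
```

    Note how quickly the window shrinks if detection and alerting add even a second of latency, which is why the abstract emphasizes timeliness at close epicentral distances.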

  12. Acceleration and volumetric strain generated by the Parkfield 2004 earthquake on the GEOS strong-motion array near Parkfield, California

    USGS Publications Warehouse

    Borcherdt, Rodger D.; Johnston, Malcolm J.S.; Dietel, Christopher; Glassmoyer, Gary; Myren, Doug; Stephens, Christopher

    2004-01-01

    An integrated array of 11 General Earthquake Observation System (GEOS) stations installed near Parkfield, CA provided on-scale, broadband, wide-dynamic-range measurements of acceleration and volumetric strain for the Parkfield earthquake (M 6.0) of September 28, 2004. Three-component measurements of acceleration were obtained at each of the stations. Measurements of collocated acceleration and volumetric strain were obtained at four of the stations. Measurements of velocity at most sites were on scale only for the initial P-wave arrival. When considered in the context of the extensive set of strong-motion recordings obtained on more than 40 analog stations by the California Strong-Motion Instrumentation Program (Shakal et al., 2004; http://www.quake.ca.gov/cisn-edc) and those on the dense array of Spudich et al. (1988), these recordings provide an unprecedented document of the nature of the near-source strong motion generated by a M 6.0 earthquake. The data set reported herein provides the most extensive set of near-field, broadband, wide-dynamic-range measurements of acceleration and volumetric strain for an earthquake as large as M 6 of which the authors are aware. As a result, considerable interest has been expressed in these data. This report is intended to describe the data and facilitate its use to resolve a number of scientific and engineering questions concerning earthquake rupture processes and resultant near-field motions and strains. This report provides a description of the array, its scientific objectives, and the strong-motion recordings obtained of the main shock. The report provides copies of the uncorrected and corrected data. Copies of the inferred velocities, displacements, and pseudo-velocity response spectra are provided. 
Digital versions of these recordings are accessible with information available through the internet at several locations: the National Strong-Motion Program web site (http://agram.wr.usgs.gov/), the COSMOS Virtual Data Center Web site

  13. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted, however the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlation between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to then compare the average magnitudes of aftershock sequences to those of their mainshocks. The results of earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed by the AE technique as a proxy for earthquakes. Applying the ETAS model to experimental data allowed us to validate our results and provide for the first time a holistic view on the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes. A positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
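
    The correlation test described above can be illustrated on synthetic data. A sketch, assuming a Gutenberg-Richter magnitude distribution with b = 1 and an independence null hypothesis; none of the catalog or AE data from the study is used:

```python
import random
import statistics

# Synthetic-data sketch of a magnitude-correlation test: compare mainshock
# magnitudes with the mean magnitudes of their aftershock sequences via a
# permutation test. The catalog is simulated, with magnitudes independent by
# construction, so no significant correlation is expected.
random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# b = 1 corresponds to an exponential magnitude law with rate b * ln(10).
rate = 2.303
mainshocks = [4.0 + random.expovariate(rate) for _ in range(200)]
aftershock_means = [2.0 + random.expovariate(rate) for _ in range(200)]

r_obs = pearson_r(mainshocks, aftershock_means)

# Null distribution of r from shuffled pairings; p-value = fraction of
# shuffles with |r| at least as large as observed.
shuffled, exceed, trials = aftershock_means[:], 0, 1000
for _ in range(trials):
    random.shuffle(shuffled)
    if abs(pearson_r(mainshocks, shuffled)) >= abs(r_obs):
        exceed += 1
print(f"r = {r_obs:.3f}, p = {exceed / trials:.3f}")
```

    On real declustered catalogs the same machinery applies, with the ETAS declustering supplying the mainshock-aftershock pairing.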

  14. Source process of the MW7.8 2016 Kaikoura earthquake in New Zealand and the characteristics of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zang, Y.; Zhou, L.; Han, Y.

    2017-12-01

    The MW7.8 New Zealand earthquake of 2016 occurred near the Kaikoura area of the South Island, New Zealand, with an epicenter at 173.13°E, 42.78°S. The Kaikoura earthquake occurred on the transform boundary faults between the Pacific and Australian plates and had a thrust focal mechanism solution. It is a complex event because of the significant mismatch among its magnitude, seismic moment, radiated energy, and the resulting casualties: only two people were killed, twenty people were injured, and no more than twenty buildings were destroyed, a damage level that is not severe considering the large magnitude. We analyzed the rupture process from the source parameters and confirmed that the radiated energy and apparent stress of the Kaikoura earthquake are small. The results indicate frictional overshoot behavior in the dynamic source process, consistent with a complete rupture and abundant moderate aftershocks. We also found that the observed horizontal peak ground acceleration (PGA) of the strong ground motion is generally small compared with the Next Generation Attenuation relationships. We further studied the characteristics of the observed horizontal PGAs at the 6 near-fault stations located less than 10 km from the main fault. The relatively high ground motion at these 6 stations may be produced by the higher slip around the asperity area rather than by the initial rupture position on the main plane. The large surface displacement in the northern part of the ruptured fault plane also explains why aftershocks are concentrated in the north, and why there was more damage in Wellington than in Christchurch, even though the latter lies near the south of the epicenter. In conclusion, the limited damage of the Kaikoura earthquake in New Zealand is probably due to the relatively small strong ground motion and the rare

  15. Constraints on the rupture process of the 17 August 1999 Izmit earthquake

    NASA Astrophysics Data System (ADS)

    Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.

    2003-04-01

    Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration, and rupture propagation. These estimates are given by the stress-glut moments of total degree 2, obtained by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong-motion data sets and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake and (2) the rupture velocity decreased on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.

  16. Tsunami waves generated by dynamically triggered aftershocks of the 2010 Haiti earthquake

    NASA Astrophysics Data System (ADS)

    Ten Brink, U. S.; Wei, Y.; Fan, W.; Miller, N. C.; Granja, J. L.

    2017-12-01

    Dynamically-triggered aftershocks, thought to be set off by the passage of surface waves, are currently not considered in tsunami warnings, yet may produce enough seafloor deformation to generate tsunamis on their own, as judged from new findings about the January 12, 2010 Haiti earthquake tsunami in the Caribbean Sea. This tsunami followed the Mw7.0 Haiti mainshock, which resulted from a complex rupture along the north shore of Tiburon Peninsula, not beneath the Caribbean Sea. The mainshock, moreover, had a mixed strike-slip and thrust focal mechanism. There were no recorded aftershocks in the Caribbean Sea, only small coastal landslides and rock falls on the south shore of Tiburon Peninsula. Nevertheless, a tsunami was recorded on deep-sea DART buoy 42407 south of the Dominican Republic and on the Santo Domingo tide gauge, and run-ups of ≤3 m were observed along a 90-km-long stretch of the SE Haiti coast. Three dynamically-triggered aftershocks south of Haiti have been recently identified within the coda of the mainshock (<200 s) by analyzing P wave arrivals recorded by dense seismic arrays, parsing the arrivals into 20-s-long stacks, and back-projecting the arrivals to the vicinity of the main shock (50-300 km). Two of the aftershocks, coming 20-40 s and 40-60 s after the mainshock, plot along NW-SE-trending submarine ridges in the Caribbean Sea south of Haiti. The third event, at 120-140 s, was located along the steep eastern slope of Bahoruco Peninsula, which is delineated by a normal fault. Forward tsunami models show that the DART buoy and tide gauge arrival times are best fit by the earliest of the three aftershocks, with a Caribbean source 60 km SW of the mainshock rupture zone. Preliminary inversion of the DART buoy time series for fault locations and orientations confirms the location of the first source, but requires an additional unidentified source closer to shore 40 km SW of the mainshock rupture zone. This overall agreement between

  17. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth. 

  18. Multi-segment earthquakes and tsunami potential of the Aleutian megathrust

    USGS Publications Warehouse

    Shennan, I.; Bruhn, R.; Plafker, G.

    2009-01-01

    Large to great earthquakes and related tsunamis generated on the Aleutian megathrust produce major hazards for both the area of rupture and heavily populated coastlines around much of the Pacific Ocean. Here we use paleoseismic records preserved in coastal sediments to investigate whether segment boundaries control the largest ruptures or whether in some seismic cycles segments combine to produce earthquakes greater than any observed since instrumented records began. Virtually the entire megathrust has ruptured since AD 1900, with four different segments generating earthquakes >M8.0. The largest was the M9.2 great Alaska earthquake of March 1964 that ruptured ~800 km of the eastern segment of the megathrust. The tsunami generated caused fatalities in Alaska and along the coast as far south as California. East of the 1964 zone of deformation, the Yakutat microplate experienced two >M8.0 earthquakes, separated by a week, in September 1899. For the first time, we present evidence that earthquakes ~900 and ~1500 years ago simultaneously ruptured adjacent segments of the Aleutian megathrust and the Yakutat microplate, with a combined area ~15% greater than 1964, giving an earthquake of greater magnitude and increased tsunamigenic potential. © 2008 Elsevier Ltd. All rights reserved.
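
    The claim that a ~15% larger combined rupture area implies a greater magnitude can be put in rough numbers, under the common (assumed, not stated in the abstract) scaling of seismic moment with area, M0 ∝ A^(3/2):

```python
import math

# Rough scaling sketch for the "~15% greater combined area" statement.
# Assumption: seismic moment scales as M0 ~ A^(3/2) (slip grows with rupture
# dimension), so Mw gains (2/3) * 1.5 * log10(area ratio).

def magnitude_gain(area_ratio, exponent=1.5):
    """Increase in moment magnitude for a given rupture-area ratio."""
    return (2.0 / 3.0) * exponent * math.log10(area_ratio)

# Relative to the M9.2 1964 rupture, a 15% larger area gives a modest gain:
mw_combined = 9.2 + magnitude_gain(1.15)
print(round(mw_combined, 2))  # 9.26
```

    Even a small increment above M9.2 corresponds to substantially more moment release, hence the increased tsunamigenic potential noted above.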

  19. Earthquakes induced by fluid injection and explosion

    USGS Publications Warehouse

    Healy, J.H.; Hamilton, R.M.; Raleigh, C.B.

    1970-01-01

    Earthquakes generated by fluid injection near Denver, Colorado, are compared with earthquakes triggered by nuclear explosion at the Nevada Test Site. Spatial distributions of the earthquakes in both cases are compatible with the hypothesis that variation of fluid pressure in preexisting fractures controls the time distribution of the seismic events in an "aftershock" sequence. We suggest that the fluid pressure changes may also control the distribution in time and space of natural aftershock sequences and of earthquakes that have been reported near large reservoirs. © 1970.

  20. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, use more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
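
    Epidemic-type models of this kind combine a background rate with Omori-Utsu decaying contributions from past events. A sketch with illustrative parameter values (mu, K, alpha, c, p, m_min are assumptions, not the calibrated values of the forecast described above):

```python
# Sketch of an epidemic-type (ETAS/ETES-style) next-day rate: background
# seismicity plus Omori-Utsu decaying, magnitude-weighted contributions from
# past earthquakes. All parameter values here are illustrative.

def etas_rate(t, catalog, mu=0.05, K=0.01, alpha=0.8, c=0.01, p=1.1, m_min=2.0):
    """Expected target-event rate (events/day) at time t (days)."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * 10 ** (alpha * (m_i - m_min)) / (t - t_i + c) ** p
    return rate

# Toy catalog of (origin time in days, magnitude):
catalog = [(0.0, 5.5), (1.2, 3.1), (2.8, 4.0)]

# Midpoint-rule integral of the rate over the next day gives the expected
# number of forecast events.
t0, steps = 3.0, 240
expected = sum(etas_rate(t0 + (k + 0.5) / steps, catalog) for k in range(steps)) / steps
print(f"expected events in next day: {expected:.2f}")
```

    In a real next-day forecast this expected count would be spread over spatial cells and magnitude bins in the RELM/CSEP format.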

  1. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM <~ 4.0 × 10^19 Pa s and ηM >~ 4.6 × 10^20 Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
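
    The Kolmogorov-Smirnov comparison of distributions reduces to the maximum distance between two empirical CDFs. A pure-Python sketch; the paper's observed and model-predicted distributions are not reproduced here:

```python
# Sketch of the two-sample Kolmogorov-Smirnov statistic: the maximum vertical
# distance between two empirical cumulative distribution functions. A model
# is rejected when D exceeds the critical value for the chosen significance
# level (e.g. alpha = 0.05).

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic D = max |F_a(x) - F_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Identical samples give D = 0; fully separated samples give D = 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([1, 2, 3], [10, 11, 12]))  # 1.0
```

    In practice `scipy.stats.ks_2samp` provides the same statistic together with its p-value; this quadratic-time version is only for illustration.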

  2. Earthquakes in Hawai‘i—an underappreciated but serious hazard

    USGS Publications Warehouse

    Okubo, Paul G.; Nakata, Jennifer S.

    2011-01-01

    The State of Hawaii has a history of damaging earthquakes. Earthquakes in the State are primarily the result of active volcanism and related geologic processes. It is not a question of "if" a devastating quake will strike Hawai‘i but rather "when." Tsunamis generated by both distant and local quakes are also an associated threat and have caused many deaths in the State. The U.S. Geological Survey (USGS) and its cooperators monitor seismic activity in the State and are providing crucial information needed to help better prepare emergency managers and residents of Hawai‘i for the quakes that are certain to strike in the future.

  3. Relating triggering processes in lab experiments with earthquakes.

    NASA Astrophysics Data System (ADS)

    Baro Urbea, J.; Davidsen, J.; Kwiatek, G.; Charalampidou, E. M.; Goebel, T.; Stanchits, S. A.; Vives, E.; Dresen, G.

    2016-12-01

    Statistical relations such as Gutenberg-Richter's, Omori-Utsu's and the productivity of aftershocks were first observed in seismology, but are also common to other physical phenomena exhibiting avalanche dynamics such as solar flares, rock fracture, structural phase transitions and even stock market transactions. All these examples exhibit spatio-temporal correlations that can be explained as triggering processes: instead of being activated as a response to external driving or fluctuations, some events are a consequence of previous activity. Although different plausible explanations have been suggested in each system, the reason for the ubiquity of such statistical laws remains unknown. However, the case of rock fracture may exhibit a physical connection with seismology. It has been suggested that some features of seismology have a microscopic origin and are reproducible over a vast range of scales. This hypothesis has motivated mechanical experiments to generate artificial catalogues of earthquakes at a laboratory scale -so-called labquakes- and under controlled conditions. Microscopic fractures in lab tests release elastic waves that are recorded as ultrasonic (kHz-MHz) acoustic emission (AE) events by means of piezoelectric transducers. Here, we analyse the statistics of labquakes recorded during the failure of small samples of natural rocks and artificial porous materials under different controlled compression regimes. Temporal and spatio-temporal correlations are identified in certain cases. Specifically, we distinguish between the background and triggered events, revealing some differences in the statistical properties. We fit the data to statistical models of seismicity. As a particular case, we explore the branching process approach simplified in the Epidemic Type Aftershock Sequence (ETAS) model. We evaluate the empirical spatio-temporal kernel of the model and investigate the physical origins of triggering. 
Our analysis of the focal mechanisms implies that the occurrence

  4. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  5. Damaging earthquakes: A scientific laboratory

    USGS Publications Warehouse

    Hays, Walter W.; ,

    1996-01-01

    This paper reviews the principal lessons learned from multidisciplinary postearthquake investigations of damaging earthquakes throughout the world during the past 15 years. The unique laboratory provided by a damaging earthquake in culturally different but tectonically similar regions of the world has increased fundamental understanding of earthquake processes, added perishable scientific, technical, and socioeconomic data to the knowledge base, and led to changes in public policies and professional practices for earthquake loss reduction.

  6. In the shadow of 1857-the effect of the great Ft. Tejon earthquake on subsequent earthquakes in southern California

    USGS Publications Warehouse

    Harris, R.A.; Simpson, R.W.

    1996-01-01

    The great 1857 Fort Tejon earthquake is the largest earthquake to have hit southern California during the historic period. We investigated whether seismicity patterns following 1857 could be due to static stress changes generated by the 1857 earthquake. When post-1857 earthquakes with unknown focal mechanisms were assigned strike-slip mechanisms with strike and rake determined by the nearest active fault, 13 of the 13 southern California M≥5.5 earthquakes between 1857 and 1907 were encouraged by the 1857 rupture. When post-1857 earthquakes in the Transverse Ranges with unknown focal mechanisms were assigned reverse mechanisms and all other events were assumed strike-slip, 11 of the 13 earthquakes were encouraged by the 1857 earthquake. These results show significant correlations between static stress changes and seismicity patterns. The correlation disappears around 1907, suggesting that tectonic loading began to overwhelm the effect of the 1857 earthquake early in the 20th century.
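
    The static Coulomb criterion behind "encouraged" versus "discouraged" events is a one-line formula, ΔCFS = Δτ + μ′Δσn. A sketch with illustrative numbers; the effective friction μ′ = 0.4 and the stress values are common assumptions, not values from the paper:

```python
# Sketch of the static Coulomb failure stress change used to classify events
# as encouraged or discouraged by a prior rupture. Sign convention: shear
# stress change resolved in the rake direction is positive; normal stress
# change is positive for unclamping. mu' = 0.4 and the stress values below
# are illustrative assumptions.

def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Delta CFS = d_tau + mu' * d_sigma_n, in MPa; > 0 promotes failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# A receiver fault loaded by 0.1 MPa of shear stress and unclamped by 0.05 MPa:
dcfs = coulomb_stress_change(0.1, 0.05)
print(f"dCFS = {dcfs:.2f} MPa ({'encouraged' if dcfs > 0 else 'discouraged'})")
```

    Classifying each post-1857 event then amounts to resolving the mainshock's stress change onto that event's assigned fault plane and rake and checking the sign of ΔCFS.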

  7. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized, strict statistical laws while ignoring the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method is still shown to be a matter of a “chancy” association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] in evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  8. Rupture process of the M 7.9 Denali fault, Alaska, earthquake: Subevents, directivity, and scaling of high-frequency ground motions

    USGS Publications Warehouse

    Frankel, A.

    2004-01-01

    Displacement waveforms and high-frequency acceleration envelopes from stations at distances of 3-300 km were inverted to determine the source process of the M 7.9 Denali fault earthquake. Fitting the initial portion of the displacement waveforms indicates that the earthquake started with an oblique thrust subevent (subevent #1) with an east-west-striking, north-dipping nodal plane consistent with the observed surface rupture on the Susitna Glacier fault. Inversion of the remainder of the waveforms (0.02-0.5 Hz) for moment release along the Denali and Totschunda faults shows that rupture proceeded eastward on the Denali fault, with two strike-slip subevents (numbers 2 and 3) centered about 90 and 210 km east of the hypocenter. Subevent 2 was located across from the station at PS 10 (Trans-Alaska Pipeline Pump Station #10) and was very localized in space and time. Subevent 3 extended from 160 to 230 km east of the hypocenter and had the largest moment of the subevents. Based on the timing between subevent 2 and the east end of subevent 3, an average rupture velocity of 3.5 km/sec, close to the shear wave velocity at the average rupture depth, was found. However, the portion of the rupture 130-220 km east of the epicenter appears to have an effective rupture velocity of about 5.0 km/sec, which is supershear. These two subevents correspond approximately to areas of large surface offsets observed after the earthquake. Using waveforms of the M 6.7 Nenana Mountain earthquake as empirical Green's functions, the high-frequency (1-10 Hz) envelopes of the M 7.9 earthquake were inverted to determine the location of high-frequency energy release along the faults. The initial thrust subevent produced the largest high-frequency energy release per unit fault length. The high-frequency envelopes and acceleration spectra (>0.5 Hz) of the M 7.9 earthquake can be simulated by chaining together rupture zones of the M 6.7 earthquake over distances from 30 to 180 km east of the

  9. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on a rapid estimate of P-wave magnitude, which generally carries large uncertainties and a known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent magnitude updates even decreased to M6.3-6.6. The magnitude estimate finally stabilized at M8.1 after about two minutes, which consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source-imaging technique can help reduce false and missed warnings, and should therefore play an important role in future tsunami early warning and earthquake rapid response systems.
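
    The magnitude saturation described above can be restated through the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m. The moment values below are illustrative round numbers, not the study's estimates:

```python
import math

# Moment magnitude from seismic moment (the standard Kanamori relation,
# M0 in newton-meters). The moments below are illustrative round numbers
# chosen to bracket the early M7.2 estimate and the final Mw9.0.

def moment_magnitude(m0_newton_meters):
    """Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# A factor of several hundred in moment separates the two estimates, which is
# why early P-wave magnitudes saturate so badly for great earthquakes:
print(round(moment_magnitude(7.1e19), 1))  # 7.2
print(round(moment_magnitude(4.0e22), 1))  # 9.0
```

    Because Mw grows only logarithmically with moment, a near-two-unit underestimate corresponds to missing almost all of the released moment, and hence the tsunami potential.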

  10. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, Lian; Li, Haibing; Brodsky, Emily

    2013-04-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10⁻⁷ strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response yield an average storage coefficient S of 2×10⁻⁴ and an average transmissivity T of 6×10⁻⁷ m². Calculating the hydraulic diffusivity as D = T/S yields D = 3×10⁻³ m²/s, which is two orders of magnitude larger than pump test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, then hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, allowing us to track the evolution of the hydraulic properties
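
    The diffusivity number quoted above follows from a one-line calculation. As a minimal sketch (values copied from the abstract; treat them as reported, not re-derived):

```python
# Reproduce the D = T/S arithmetic from the abstract (values as reported there).
T = 6e-7   # transmissivity (m^2/s)
S = 2e-4   # storage coefficient (dimensionless)
D = T / S  # hydraulic diffusivity (m^2/s)
print(f"D = {D:.1e} m^2/s")  # 3.0e-03 m^2/s, i.e. the reported 3×10⁻³
```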

  11. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes to the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records is also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding
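
    The per-accelerogram job described above (numerical integration of the structural ODEs, then extraction of the maximum displacement) can be sketched as follows. This is an illustration only, under the assumption that a damped single-degree-of-freedom oscillator stands in for the paper's structural model; the actual Java/GRID machinery, and the name `peak_displacement`, are not from the paper:

```python
# Minimal sketch: integrate x'' + 2*zeta*wn*x' + wn^2*x = -a_g(t) with a
# central-difference scheme and report the peak absolute displacement.
import math

def peak_displacement(accelerogram, dt, fn=2.0, zeta=0.05):
    wn = 2 * math.pi * fn          # natural circular frequency (rad/s)
    x_prev, x = 0.0, 0.0           # displacement at steps n-1 and n
    peak = 0.0
    for a_g in accelerogram:
        v = (x - x_prev) / dt                      # backward velocity estimate
        acc = -a_g - 2 * zeta * wn * v - wn * wn * x
        x_next = 2 * x - x_prev + acc * dt * dt    # central-difference update
        x_prev, x = x, x_next
        peak = max(peak, abs(x))
    return peak

# Synthetic "record": a short 1 Hz sine pulse of ground acceleration
dt = 0.01
record = [math.sin(2 * math.pi * 1.0 * i * dt) for i in range(200)]
print(peak_displacement(record, dt))
```

In the framework above, one such integration would run per selected accelerogram, each as an independent GRID job.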

  12. Causal mechanisms of seismo-EM phenomena during the 1965-1967 Matsushiro earthquake swarm.

    PubMed

    Enomoto, Yuji; Yamabe, Tsuneaki; Okumura, Nobuo

    2017-03-21

    The 1965-1967 Matsushiro earthquake swarm in central Japan exhibited two unique characteristics. The first was a hydro-mechanical crust rupture resulting from degassing, volume expansion of CO2/water, and a crack opening within the critically stressed crust under a strike-slip stress regime. The other was, despite the lower total seismic energy, the occurrence of complex seismo-electromagnetic (seismo-EM) phenomena: a geomagnetic intensity increase, unusual earthquake lights (EQLs) and atmospheric electric field (AEF) variations. Although the basic rupture process of this swarm of earthquakes is reasonably understood in terms of hydro-mechanical crust rupture, the associated seismo-EM processes remain largely unexplained. Here, we describe a series of seismo-EM mechanisms involved in the hydro-mechanical rupture process, as observed by coupling the electric interaction of rock rupture with CO2 gas and the dielectric-barrier discharge of the modelled fields in laboratory experiments. We found that CO2 gases passing through the newly created fracture surface of the rock were electrified to generate pressure-impressed currents/electric dipoles, which could induce a magnetic field following the Biot-Savart law, decrease the atmospheric electric field and generate dielectric-barrier discharge lightning affected by the coupling effect between the seismic and meteorological activities.

  13. Causal mechanisms of seismo-EM phenomena during the 1965-1967 Matsushiro earthquake swarm

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Yamabe, Tsuneaki; Okumura, Nobuo

    2017-03-01

    The 1965-1967 Matsushiro earthquake swarm in central Japan exhibited two unique characteristics. The first was a hydro-mechanical crust rupture resulting from degassing, volume expansion of CO2/water, and a crack opening within the critically stressed crust under a strike-slip stress regime. The other was, despite the lower total seismic energy, the occurrence of complex seismo-electromagnetic (seismo-EM) phenomena: a geomagnetic intensity increase, unusual earthquake lights (EQLs) and atmospheric electric field (AEF) variations. Although the basic rupture process of this swarm of earthquakes is reasonably understood in terms of hydro-mechanical crust rupture, the associated seismo-EM processes remain largely unexplained. Here, we describe a series of seismo-EM mechanisms involved in the hydro-mechanical rupture process, as observed by coupling the electric interaction of rock rupture with CO2 gas and the dielectric-barrier discharge of the modelled fields in laboratory experiments. We found that CO2 gases passing through the newly created fracture surface of the rock were electrified to generate pressure-impressed currents/electric dipoles, which could induce a magnetic field following the Biot-Savart law, decrease the atmospheric electric field and generate dielectric-barrier discharge lightning affected by the coupling effect between the seismic and meteorological activities.

  14. Earthquake supersite project in the Messina Straits area (EQUAMES)

    NASA Astrophysics Data System (ADS)

    Mattia, Mario; Chiarabba, Claudio; Dell'Acqua, Fabio; Faccenna, Claudio; Lanari, Riccardo; Matteuzzi, Francesco; Neri, Giancarlo; Patanè, Domenico; Polonia, Alina; Prati, Claudio; Tinti, Stefano; Zerbini, Susanna

    2015-04-01

    A new permanent supersite is going to be proposed to the GEO GSNL (Geohazard Supersites and National Laboratories) for the Messina Straits area (Italy). The justification for this new supersite lies in its geological and geophysical features and in its exposure to strong earthquakes, including in the recent past (1908). The Messina Supersite infrastructure (EQUAMES: EarthQUAkes in the MEssina Straits) will host, and contribute to the collection of, large amounts of data, basic for the analysis of seismic hazard/risk in this high seismic risk area, including risk from earthquake-related processes such as submarine mass failures and tsunamis. In EQUAMES, data of different types will coexist with models and methods useful for their analysis/interpretation and with first-level products of analysis that can be of interest for different kinds of users. EQUAMES will help all interested scientific and non-scientific parties to find and use data and to increase inter-institutional cooperation by addressing the following main topics in the Messina Straits area:
    • investigation of the geological and physical processes leading to earthquake preparation and generation;
    • analysis of seismic shaking at ground level (expected and observed);
    • combination of seismic hazard with vulnerability and exposure data for risk estimates;
    • analysis of tsunami generation, propagation and coastal inundation deriving from earthquake occurrence, also through landslides due to instability of subaerial and submarine slopes;
    • overall risk associated with earthquake activity in the Supersite area, including the different types of cascade effects.
    Many Italian and international institutions have shown an effective interest in this project, in which a large variety of geophysical and geological in-situ data will be collected and in which the INGV has the leading role with its large infrastructure of seismic, GPS and geochemical permanent stations. The groups supporting EQUAMES

  15. Fault lubrication during earthquakes.

    PubMed

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of fault rock composition and of the specific weakening mechanism involved.
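
    One common way to summarize a friction drop of this kind is an empirical slip-rate weakening curve. The functional form and every parameter value below are assumptions for illustration only, not fits reported by this review:

```python
# Illustrative weakening curve (assumed form, not from the paper):
#   mu(V) = mu_ss + (mu_p - mu_ss) / (1 + V / V_w)
# mu_p: quasi-static friction, mu_ss: high-speed residual, V_w: weakening rate.
def friction(V, mu_p=0.7, mu_ss=0.1, V_w=0.1):
    """Steady-state friction coefficient at slip rate V (m/s)."""
    return mu_ss + (mu_p - mu_ss) / (1.0 + V / V_w)

# Friction stays near mu_p at plate-motion rates and collapses toward mu_ss
# over the 0.1-2.6 m/s seismic range discussed in the abstract.
for V in (1e-6, 0.1, 1.0, 2.6):
    print(f"V = {V:g} m/s -> mu = {friction(V):.2f}")
```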

  16. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010. Nowadays, earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  17. Non-double-couple earthquakes. 1. Theory

    USGS Publications Warehouse

    Julian, B.R.; Miller, A.D.; Foulger, G.R.

    1998-01-01

    Historically, most quantitative seismological analyses have been based on the assumption that earthquakes are caused by shear faulting, for which the equivalent force system in an isotropic medium is a pair of force couples with no net torque (a 'double couple,' or DC). Observations of increasing quality and coverage, however, now resolve departures from the DC model for many earthquakes and find some earthquakes, especially in volcanic and geothermal areas, that have strongly non-DC mechanisms. Understanding non-DC earthquakes is important both for studying the process of faulting in detail and for identifying non-shear-faulting processes that apparently occur in some earthquakes. This paper summarizes the theory of 'moment tensor' expansions of equivalent-force systems and analyzes many possible physical non-DC earthquake processes. Contrary to long-standing assumption, sources within the Earth can sometimes have net force and torque components, described by first-rank and asymmetric second-rank moment tensors, which must be included in analyses of landslides and some volcanic phenomena. Non-DC processes that lead to conventional (symmetric second-rank) moment tensors include geometrically complex shear faulting, tensile faulting, shear faulting in an anisotropic medium, shear faulting in a heterogeneous region (e.g., near an interface), and polymorphic phase transformations. Undoubtedly, many non-DC earthquake processes remain to be discovered. Progress will be facilitated by experimental studies that use wave amplitudes, amplitude ratios, and complete waveforms in addition to wave polarities and thus avoid arbitrary assumptions such as the absence of volume changes or the temporal similarity of different moment tensor components.
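
    The moment tensor expansion discussed above is concrete enough to sketch. Below is a hedged illustration of one standard decomposition of a symmetric moment tensor into isotropic, double-couple (DC) and CLVD parts; the epsilon measure used here is a common convention, not necessarily the one this paper adopts:

```python
# Decompose a symmetric 3x3 moment tensor into isotropic, DC and CLVD parts
# using epsilon = -d2 / max(|d1|, |d3|) on the deviatoric eigenvalues.
import numpy as np

def decompose(M):
    M = np.asarray(M, dtype=float)
    iso = np.trace(M) / 3.0                 # isotropic (volumetric) part
    dev = M - iso * np.eye(3)               # deviatoric remainder
    d = np.linalg.eigvalsh(dev)             # eigenvalues, ascending
    d1, d2, d3 = d[2], d[1], d[0]           # largest, intermediate, smallest
    denom = max(abs(d1), abs(d3))
    eps = -d2 / denom if denom > 0 else 0.0
    dc_pct = (1.0 - 2.0 * abs(eps)) * 100.0
    clvd_pct = 100.0 - dc_pct
    return iso, dc_pct, clvd_pct

# A pure strike-slip double couple has deviatoric eigenvalues (1, 0, -1).
M_dc = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(decompose(M_dc))   # iso = 0, DC = 100%, CLVD = 0%
```

A diagonal tensor (2, −1, −1), by contrast, comes out as 100% CLVD under this measure.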

  18. Large-scale unloading processes preceding the 2015 Mw 8.4 Illapel, Chile earthquake

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2017-12-01

    Foreshocks and/or slow slip have been observed to accelerate before some recent large earthquakes. However, the universality of such precursory signals, and their value in hazard assessment or mitigation, remain controversial. On 16 September 2015, the Mw 8.4 Illapel earthquake ruptured a section of the subduction thrust on the west coast of central Chile. Small earthquakes are important in resolving possible precursors but are often incomplete in routine catalogs. Here, we employ the matched-filter technique to recover undocumented small events in a 4-year period before the Illapel mainshock. We augment the template dataset from the Chilean Seismological Center (CSN) with previously found new repeating aftershocks in the study area. We detect a total of 17,658 events in the 4-year period before the mainshock, 6.3 times more than the CSN catalog. The magnitudes of detected events are determined according to different magnitude-amplitude relations estimated at different stations. Among the enhanced catalog, 183 repeating earthquakes are identified before the mainshock. Repeating earthquakes are located on both the northern and southern sides of the principal coseismic slip zone. The seismicity and aseismic slip progressively accelerate in a small low-coupling area around the epicenter starting 140 days before the mainshock. The acceleration leads to a M 5.3 event 36 days before the mainshock, followed by a relative quiescence in both seismicity and slow slip until the mainshock. This may correspond to a slow aseismic nucleation phase after the slow-slip transient ends. In addition, to the north of the mainshock rupture area, the last aseismic-slip episode occurs within 175-95 days before the mainshock and accumulates the largest amount of slip in the observation period. The simultaneous occurrence of slow slip over a large area indicates a large-scale unloading process preceding the mainshock. In contrast, in a region 70-150 km south of the mainshock
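
    The matched-filter detection step described above can be sketched in a few lines. This toy version assumes a single channel and one template scanned by normalized cross-correlation; production implementations stack correlations over many stations and channels:

```python
# Matched-filter sketch: slide a template over continuous data and flag
# windows whose normalized cross-correlation exceeds a threshold.
import numpy as np

def matched_filter(data, template, threshold=0.8):
    """Return sample indices where the correlation coefficient > threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        s = w.std()
        if s == 0:
            continue
        cc = np.sum(t * (w - w.mean())) / s   # Pearson correlation in [-1, 1]
        if cc > threshold:
            detections.append(i)
    return detections

rng = np.random.default_rng(0)
template = rng.standard_normal(50)
data = 0.1 * rng.standard_normal(500)
data[200:250] += template                  # bury a noisy copy of the template
hits = matched_filter(data, template)
print(hits)                                # detects the buried copy at index 200
```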

  19. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an east-west oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to the teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification method (MUSIC), a high-resolution array processing technique based on the orthogonality of the signal and noise subspaces of the data covariance matrix, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green's functions, which leads to a potential quantification
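
    The resolution gain claimed above rests on the noise-subspace projection at the core of MUSIC. A toy narrowband, uniform-linear-array sketch of that idea follows; this is not the seismic Pn implementation, and all array parameters here are made up for illustration:

```python
# MUSIC sketch: peaks of the pseudospectrum 1/||E_n^H a(theta)||^2, where
# E_n spans the noise subspace of the array covariance, mark source directions.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots = 10, 200
d = 0.5                                   # sensor spacing in wavelengths

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

# Two incoherent sources at -10° and +10° plus weak sensor noise
A = np.column_stack([steering(a) for a in (-10.0, 10.0)])
S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + noise

R = X @ X.conj().T / n_snapshots          # sample covariance
w, V = np.linalg.eigh(R)                  # eigenvalues ascending
En = V[:, :-2]                            # noise subspace (2 sources assumed)

grid = np.arange(-30.0, 30.1, 0.5)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid]
peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: p[i])[-2:])
est = [grid[i] for i in top2]
print(est)                                # close to [-10.0, 10.0]
```

Beamforming on the same data would replace the noise-subspace norm with the beam power `|a(θ)^H X|²`, whose broader main lobe is what limits the resolution discussed in the abstract.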

  20. A new perspective on the generation of the 2016 M6.7 Kaohsiung earthquake, southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Zhi

    2017-04-01

    In order to investigate the likely generation mechanism of the 2016 M6.7 Kaohsiung earthquake, a large number of high-quality travel times from P- and S-wave source-receiver pairs are used jointly in this study to invert three-dimensional (3-D) seismic velocity (Vp, Vs) and Poisson's ratio structures at high resolution. We also calculated crack density, saturate fracture, and bulk-sound velocity from our inverted Vp, Vs, and σ models. In this way, multi-geophysical parameter imaging revealed that the 2016 Kaohsiung earthquake occurred along a distinctive edge portion exhibiting high-to-low variations in these parameters in both horizontal and vertical directions across the hypocenter. We consider that a slow-velocity, high-σ body with high crack-density (ε) and somewhat high saturate-fracture (ζ) anomalies above the hypocenter under the Coastal Plain represents fluids contained in the young fold-and-thrust belt associated with the passive Asian continental margin in southwestern Taiwan. Intriguingly, a continuous low Vp and Vs zone with high Poisson's ratio, crack density and saturate-fracture anomalies across the Laonung and Chishan faults is also clearly imaged in the northwestern upper crust beneath the Coastal Plain and Western Foothills as far as the southeastern lower crust under the Central Range. We therefore propose that this southeastward-extending weakened zone was mainly the result of fluid intrusion either from the young fold-and-thrust belt in the shallow crust or from the subducted Eurasian continental (EC) plate in the lower crust and uppermost mantle. We suggest that fluid intrusion into the upper Oligocene to Pleistocene shallow marine and clastic shelf units of the Eurasian continental crust and/or the relatively thin uppermost part of the transitional Pleistocene-Holocene foreland due to the subduction of the EC plate along the deformation front played a key role in earthquake generation in southwestern Taiwan. Such fluid penetration would reduce Vp and Vs while increasing

  1. A new perspective on the generation of the 2016 M6.4 Meilung earthquake, southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2017-12-01

    In order to investigate the likely generation mechanism of the 2016 M6.4 Meilung earthquake, a large number of high-quality travel times from P- and S-wave source-receiver pairs are used jointly in this study to invert three-dimensional (3-D) seismic velocity (Vp, Vs) and Poisson's ratio structures at high resolution. We also calculated crack density, saturate fracture, and bulk-sound velocity from our inverted Vp, Vs, and σ models. In this way, multi-geophysical parameter imaging revealed that the 2016 Meilung earthquake occurred along a distinctive edge portion exhibiting high-to-low variations in these parameters in both horizontal and vertical directions across the hypocenter. We consider that a slow-velocity, high-Poisson's-ratio body with high crack-density and somewhat high saturate-fracture anomalies above the hypocenter under the coastal plain represents fluids contained in the young fold-and-thrust belt relative to the passive Asian continental margin in southwestern Taiwan. Intriguingly, a continuous low Vp and Vs zone with high Poisson's ratio, crack density and saturate-fracture anomalies across the Laonung and Chishan faults is also clearly imaged in the northwestern upper crust beneath the coastal plain and western foothills as far as the southeastern lower crust under the central range. We therefore propose that this southeastward-extending weakened zone was mainly the result of fluid intrusion either from the young fold-and-thrust belt associated with the passive Asian continental margin in the shallow crust or from the subducted Eurasian continental (EC) plate in the lower crust and uppermost mantle. We suggest that fluid intrusion into the upper Oligocene to Pleistocene shallow marine and clastic shelf units of the Eurasian continental crust and/or the relatively thin uppermost part of the transitional Pleistocene-Holocene foreland due to the subduction of the EC plate along the deformation front played a key role in earthquake generation in

  2. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred during the last decade in Indonesia. These experiences are very important lessons for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and to make clear the characteristic physical behaviors of tsunamis near the coast. We studied two tsunami events: the 2004 Indian Ocean tsunami and the 2010 Mentawai slow earthquake tsunami. Many videos and photographs were taken by people at some places during the 2004 Indian Ocean tsunami disaster; nevertheless, these covered only a few restricted points, and the tsunami behavior at other places remained unknown. In this study, we tried to collect extensive information about tsunami behavior, not only at many places but also over a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no impressive photos. To collect detailed information about the process of evacuation from tsunamis, we devised an interview method that involves making pictures of the tsunami experience from the scenes of the victims' stories. In the 2004 Aceh case, none of the survivors knew about tsunami phenomena: because there had been no big tsunamigenic earthquakes for one hundred years in the Sumatra region, the public had no knowledge of tsunamis. This situation was much improved in the 2010 Mentawai case, as TV programs and NGO or governmental public education programs about tsunami evacuation are now widespread in Indonesia, and many people know the fundamentals of earthquake and tsunami disasters. We made a drill book based on the victims' stories, with paintings of impressive scenes from the two events, and used it in a disaster education event of a school committee in West Java. About 80% of the students and teachers evaluated the contents of the drill book as useful for correct understanding.

  3. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena have started to be considered by the GPS community in order to mitigate GPS signal degradation over the territories of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has proved that ionospheric anomalies registered before earthquakes are initiated by processes in the atmospheric boundary layer over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space, where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland, within the framework of the ISSI and ESA projects) to identify the ionospheric precursors. They are also useful for determining all three parameters necessary for earthquake forecasting: the impending earthquake epicenter position, the expectation time and the magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. The results obtained by the different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  4. Source processes of industrially-induced earthquakes at the Geysers geothermal area, California

    USGS Publications Warehouse

    Ross, A.; Foulger, G.R.; Julian, B.R.

    1999-01-01

    Microearthquake activity at The Geysers geothermal area, California, mirrors the steam production rate, suggesting that the earthquakes are industrially induced. A 15-station network of digital, three-component seismic stations was operated for one month in 1991, and 3,900 earthquakes were recorded. Highly accurate moment tensors were derived for 30 of the best recorded earthquakes by tracing rays through tomographically derived 3-D VP and VP/VS structures, and inverting P- and S-wave polarities and amplitude ratios. The orientations of the P- and T-axes are very scattered, suggesting that there is no strong, systematic deviatoric stress field in the reservoir, which could explain why the earthquakes are not large. Most of the events had significant non-double-couple (non-DC) components in their source mechanisms, with volumetric components up to ~30% of the total moment. Explosive and implosive sources were observed in approximately equal numbers, and must be caused by cavity creation (or expansion) and collapse. It is likely that there is a causal relationship between these processes and fluid reinjection and steam withdrawal. Compensated linear vector dipole (CLVD) components were up to 100% of the deviatoric component. Combinations of opening cracks and shear faults cannot explain all the observations, and rapid fluid flow may also be involved. The pattern of non-DC failure at The Geysers contrasts with that of the Hengill-Grensdalur area in Iceland, a largely unexploited water-dominated field in an extensional stress regime. These differences are poorly understood but may be linked to the contrasting regional stress regimes and the industrial exploitation at The Geysers.

  5. Rapid changes in the electrical state of the 1999 Izmit earthquake rupture zone

    PubMed Central

    Honkura, Yoshimori; Oshiman, Naoto; Matsushima, Masaki; Barış, Şerif; Kemal Tunçer, Mustafa; Bülent Tank, Sabri; Çelik, Cengiz; Çiftçi, Elif Tolak

    2013-01-01

    Crustal fluids exist near fault zones, but their relation to the processes that generate earthquakes, including slow-slip events, is unclear. Fault-zone fluids are characterized by low electrical resistivity. Here we investigate the time-dependent crustal resistivity in the rupture area of the 1999 Mw 7.6 Izmit earthquake using electromagnetic data acquired at four sites before and after the earthquake. Most estimates of apparent resistivity in the frequency range of 0.05 to 2.0 Hz show abrupt co-seismic decreases on the order of tens of per cent. Data acquired at two sites 1 month after the Izmit earthquake indicate that the resistivity had already returned to pre-seismic levels. We interpret such changes as the pressure-induced transition between isolated and interconnected fluids. Some data show pre-seismic changes and this suggests that the transition is associated with foreshocks and slow-slip events before large earthquakes. PMID:23820970
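
    The apparent-resistivity estimates discussed above come from electromagnetic impedance measurements. A minimal sketch of the standard magnetotelluric relation ρ_a = |Z|²/(ωμ₀) follows; the impedance value used here is a made-up illustration, not data from the study:

```python
# Apparent resistivity from a magnetotelluric impedance estimate Z = E/H.
import math

MU0 = 4e-7 * math.pi             # vacuum permeability (H/m)

def apparent_resistivity(Z, f):
    """Z in ohm ((V/m)/(A/m)), f in Hz -> apparent resistivity in ohm*m."""
    omega = 2 * math.pi * f
    return abs(Z) ** 2 / (omega * MU0)

f = 1.0                          # within the 0.05-2.0 Hz band of the study
Z = 0.02 * (1 + 1j)              # hypothetical impedance estimate (ohm)
rho = apparent_resistivity(Z, f)
print(f"rho_a = {rho:.0f} ohm*m")
```

A co-seismic drop of "tens of per cent" in ρ_a, as reported, would correspond to a modest decrease in |Z| at fixed frequency, since ρ_a scales with |Z|².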

  6. Geophysical advances triggered by 1964 Great Alaska Earthquake

    USGS Publications Warehouse

    Haeussler, Peter J.; Leith, William S.; Wald, David J.; Filson, John R.; Wolfe, Cecily; Applegate, David

    2014-01-01

    A little more than 50 years ago, on 27 March 1964, the Great Alaska earthquake and tsunami struck. At moment magnitude 9.2, this earthquake is notable as the largest in U.S. written history and as the second-largest ever recorded by instruments worldwide. But what resonates today are its impacts on the understanding of plate tectonics, tsunami generation, and earthquake history as well as on the development of national programs to reduce risk from earthquakes and tsunamis.

  7. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    NASA Astrophysics Data System (ADS)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrates both their complexity and our limited understanding of fault processes and their controls. Numerical modeling provides a useful tool for simulating earthquakes and related slip events, and for making direct observations and correlations among the properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and of what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn induces slip along the fault. A wide range of slip behaviors is observed, ranging from creep to stick-slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events, both spatially and temporally, for comparison with slip processes on natural faults.
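
    The creep-to-stick-slip spectrum and event catalog described above can be illustrated with the simplest possible fault analog. The sketch below substitutes a 1-D spring-slider with static/dynamic friction for the DEM model (an assumption for illustration, not the authors' method) and emits a small catalog of event times and stress drops:

```python
# Spring-slider stick-slip sketch: a driver plate loads a spring; the block
# sticks until the spring force exceeds static friction, then slips until
# the force relaxes to the dynamic level, logging one "event" per slip.
def stick_slip(v_plate=1.0, k=1.0, f_static=2.0, f_dynamic=1.0,
               dt=0.01, t_max=50.0):
    x_plate, x_block = 0.0, 0.0
    events = []                        # catalog of (time, stress drop)
    t = 0.0
    while t < t_max:
        x_plate += v_plate * dt        # steady tectonic-style loading
        force = k * (x_plate - x_block)
        if force > f_static:
            slip = (force - f_dynamic) / k   # instantaneous slip episode
            x_block += slip
            events.append((round(t, 2), force - f_dynamic))
        t += dt
    return events

events = stick_slip()
print(len(events), events[0])
```

With these (hypothetical) parameters the events are nearly periodic with a stress drop of about f_static − f_dynamic each; making friction heterogeneous along a chain of such sliders is the conceptual step toward the DEM fault model.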

  8. Earthquake damage orientation to infer seismic parameters in archaeological sites and historical earthquakes

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel

    2018-01-01

    Studies that provide information on the seismic parameters and seismic sources of historical and archaeological seismic events are used to better evaluate the seismic hazard of a region. This is of special interest when no surface rupture is recorded or the seismogenic fault cannot be identified. The orientation pattern of the earthquake damage (ED) (e.g., fallen columns, dropped keystones) that affected architectural elements of cities after earthquakes has traditionally been used in historical and archaeoseismological studies to infer seismic parameters. However, the parameters that different authors claim can be obtained in this way are contradictory (the epicenter location, the orientation of the P waves, the orientation of the compressional strain, and the fault kinematics have all been proposed), and some authors even question any relation with the earthquake damage. The earthquakes of Lorca in 2011, Christchurch in 2011 and Emilia Romagna in 2012 present an opportunity to measure systematically a large number and wide variety of earthquake damage in historical buildings (the same structures that are used in historical and archaeological studies). The damage pattern orientation has been compared with modern instrumental data, which is not possible in historical and archaeoseismological studies. From measurements and quantification of the orientation patterns in the studied earthquakes, it is observed that there is a systematic pattern of the earthquake damage orientation (EDO) in the proximity of the seismic source (fault trace) (<10 km). The EDO in these earthquakes is normal to the fault trend (±15°). This orientation can be generated by a pulse of motion that, in the near-fault region, has a distinguishable acceleration normal to the fault due to the polarization of the S waves. Therefore, the earthquake damage orientation could be used to estimate the seismogenic fault trend in studies of historical earthquakes for which no instrumental data are available.

  9. TriNet "ShakeMaps": Rapid generation of peak ground motion and intensity maps for earthquakes in southern California

    USGS Publications Warehouse

    Wald, D.J.; Quitoriano, V.; Heaton, T.H.; Kanamori, H.; Scrivner, C.W.; Worden, C.B.

    1999-01-01

    Rapid (3-5 minutes) generation of maps of instrumental ground-motion and shaking intensity is accomplished through advances in real-time seismographic data acquisition combined with newly developed relationships between recorded ground-motion parameters and expected shaking intensity values. Estimation of shaking over the entire regional extent of southern California is obtained by the spatial interpolation of the measured ground motions with geologically based frequency and amplitude-dependent site corrections. Production of the maps is automatic, triggered by any significant earthquake in southern California. Maps are now made available within several minutes of the earthquake for public and scientific consumption via the World Wide Web; they will be made available with dedicated communications for emergency response agencies and critical users.
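    The core numerical step described above is spatial interpolation of recorded ground motions with a site correction applied at each grid point. The sketch below is a deliberately simplified stand-in, not the actual ShakeMap algorithm: it uses plain inverse-distance weighting and a single multiplicative amplification factor, and all station coordinates and PGA values are invented.

```python
def idw_pga(stations, site_xy, site_amp, power=2.0):
    """Inverse-distance-weighted estimate of peak ground acceleration (PGA, in g)
    at a grid point, scaled by a site amplification factor.
    stations: list of (x_km, y_km, pga_g) station observations."""
    num = den = 0.0
    for x, y, pga in stations:
        d2 = (x - site_xy[0]) ** 2 + (y - site_xy[1]) ** 2
        if d2 == 0.0:
            return pga * site_amp          # grid point collocated with a station
        w = 1.0 / d2 ** (power / 2.0)      # weight ~ 1 / distance**power
        num += w * pga
        den += w
    return (num / den) * site_amp

# Three hypothetical stations and one soft-soil grid point (amp = 1.4)
obs = [(0.0, 0.0, 0.30), (10.0, 0.0, 0.10), (0.0, 10.0, 0.12)]
print(round(idw_pga(obs, (2.0, 2.0), site_amp=1.4), 3))
```

Production systems replace the distance weighting with ground-motion-model-informed interpolation and use frequency-dependent amplification, but the structure — interpolate observations, then correct for site geology — is the same.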

  10. Electromagnetic Energy Released in the Subduction (Benioff) Zone in Weeks Previous to Earthquake Occurrence in Central Peru and the Estimation of Earthquake Magnitudes.

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Centa, V. A.; Bleier, T.

    2017-12-01

    During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction (Benioff) zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in central Peru, using data collected during a two-year period. Additional work has been done, and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Data collected from several events in other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The process has been extended in time and in geographic coverage. Only pulses associated with the occurrence of earthquakes in an area strongly associated with subduction-zone seismic events are taken into account, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to those parameters. The results shown, including the animated data video, constitute further work towards estimating the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone. The method is providing clearer evidence that electromagnetic precursors do convey physical and useful information prior to the advent of a seismic event.

  11. Deep focus earthquakes in the laboratory

    NASA Astrophysics Data System (ADS)

    Schubnel, Alexandre; Brunet, Fabrice; Hilairet, Nadège; Gasc, Julien; Wang, Yanbin; Green, Harry W., II

    2014-05-01

    While the existence of deep earthquakes has been known since the 1920's, the essential mechanical process responsible for them is still poorly understood and remains one of the outstanding unsolved problems of geophysics and rock mechanics. Indeed, deep-focus earthquakes occur in an environment fundamentally different from that of shallow (<100 km) earthquakes. As pressure and temperature increase with depth, intra-crystalline plasticity starts to dominate the deformation regime, so that rocks yield by plastic flow rather than by brittle fracturing. Olivine phase transitions have provided an attractive alternative mechanism for deep-focus earthquakes: the Earth's mantle transition zone (410-700 km) is the locus of two successive polymorphic transitions of olivine. Such a scenario, however, runs into the conceptual barrier of initiating failure in a pressure (P) and temperature (T) regime where deviatoric stress relaxation is expected to be achieved through plastic flow. Here, we performed laboratory deformation experiments on germanium olivine (Mg2GeO4) under differential stress at high pressure (P = 2-5 GPa) and within a narrow temperature range (T = 1000-1250 K). We find that fractures nucleate at the onset of the olivine-to-spinel transition. These fractures propagate dynamically (i.e., at a non-negligible fraction of the shear wave velocity), so that intense acoustic emissions are generated. Similar to deep-focus earthquakes, these acoustic emissions arise from pure shear sources and obey the Gutenberg-Richter law without following Omori's law. Microstructural observations show that dynamic weakening likely involves superplasticity of the nanocrystalline spinel reaction product at seismic strain rates. Although in our experiments the absolute stress value remains high compared with stresses expected within the cold core of subducted slabs, the observed stress drops are broadly consistent with those calculated for deep earthquakes.

  12. The U.S. Geological Survey's Earthquake Summary Posters: A GIS-based Education and Communication Product for Presenting Consolidated Post-Earthquake Information

    NASA Astrophysics Data System (ADS)

    Tarr, A.; Benz, H.; Earle, P.; Wald, D. J.

    2003-12-01

    Earthquake Summary Posters (ESP's), a new product of the U.S. Geological Survey's Earthquake Program, are produced at the National Earthquake Information Center (NEIC) in Golden. The posters consist of rapidly generated, GIS-based maps made following significant earthquakes worldwide (typically M>7.0, or events of significant media/public interest). ESP's consolidate, in an attractive map format, a large-scale epicentral map, several auxiliary regional overviews (showing tectonic and geographical setting, seismic history, seismic hazard, and earthquake effects), depth sections (as appropriate), a table of regional earthquakes, and a summary of the regional seismic history and tectonics. The immediate availability of the latter text summaries has been facilitated by the Rapid, Accurate Tectonic Summaries (RATS) produced at NEIC and posted on the web following significant events. The rapid production of ESP's has been aided by generating, during the past two years, regional templates for tectonic areas around the world by organizing the necessary spatially referenced data for the map base and the thematic layers that overlay the base. These GIS databases enable scripted Arc Macro Language (AML) production of routine elements of the maps (for example, background seismicity, tectonic features, and probabilistic hazard maps). However, other elements of the maps are earthquake-specific and are produced manually to reflect new data, earthquake effects, and special characteristics. By the end of this year, approximately 85% of the Earth's seismic zones will be covered by templates for generating future ESP's. During the past year, 13 posters were completed, comparable to the yearly average expected for significant earthquakes. Each year, all ESP's will be published on a CD in PDF format as an Open-File Report. In addition, each is linked to the special event earthquake pages on the USGS Earthquake Program web site (http://earthquake.usgs.gov).

  13. Generating functions and stability study of multivariate self-excited epidemic processes

    NASA Astrophysics Data System (ADS)

    Saichev, A. I.; Sornette, D.

    2011-09-01

    We present a stability study of the class of multivariate self-excited Hawkes point processes, which can model natural and social systems, including earthquakes, epileptic seizures and the dynamics of neuron assemblies, bursts of exchanges in social communities, interactions between Internet bloggers, bank network fragility and cascading failures, national sovereign default contagion, and so on. We present the general theory of multivariate generating functions to derive the number of events over all generations of various types that are triggered by a mother event of a given type. We obtain the stability domains of various systems as a function of the topological structure of the mutual excitations across different event types. We find that mutual triggering tends to provide a significant extension of the stability (or subcritical) domain compared with the case where event types are decoupled, that is, when an event of a given type can only trigger events of the same type.
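    For multivariate branching processes of this kind, the standard subcriticality condition is that the mean branching matrix — entry (i, j) giving the expected number of type-i events directly triggered by one type-j event — has spectral radius below one. A minimal pure-Python sketch of that check, with an invented two-type matrix (the values are illustrative, not taken from the paper):

```python
def spectral_radius(n, iters=200):
    """Largest eigenvalue magnitude of a square matrix, by power iteration.
    Adequate for the nonnegative branching matrices used here."""
    dim = len(n)
    v = [1.0] * dim
    lam = 0.0
    for _ in range(iters):
        w = [sum(n[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical 2-type branching matrix with strong mutual triggering
n_coupled = [[0.5, 0.4],
             [0.4, 0.5]]
print(spectral_radius(n_coupled) < 1.0)   # subcritical if radius < 1
```

Note that each type here triggers 0.9 events on average in total, yet the process is still subcritical because the spectral radius (0.9 for this symmetric matrix) stays below one.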

  14. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    NASA Astrophysics Data System (ADS)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events, and the fully dynamic solver SPECFEM3D (Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and lower variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations of rupture area, average slip, and combined area of asperities versus moment magnitude.

  15. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of processes leading up to large earthquakes. The book Advances in Earthquake Prediction: Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over both long and short terms. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties; however, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings.

  16. Kinematic rupture process of the 2014 Chile Mw 8.1 earthquake constrained by strong-motion, GPS static offsets and teleseismic data

    NASA Astrophysics Data System (ADS)

    Liu, Chengli; Zheng, Yong; Wang, Rongjiang; Xiong, Xiong

    2015-08-01

    On 2014 April 1, a magnitude Mw 8.1 interplate thrust earthquake ruptured a densely instrumented region of the Iquique seismic gap in northern Chile. The abundant data sets near and around the rupture zone provide a unique opportunity to study the detailed source process of this megathrust earthquake. We retrieved the spatial and temporal distributions of slip during the main shock and one strong aftershock through a joint inversion of teleseismic records, GPS offsets and strong motion data. The main shock rupture initiated at a focal depth of about 25 km and propagated around the hypocentre. The peak slip amplitude in the model is ˜6.5 m, located southeast of the hypocentre. The major slip patch is located around the hypocentre, spanning ˜150 km along dip and ˜160 km along strike. The associated static stress drop is ˜3 MPa. Most of the seismic moment was released within 150 s. The total seismic moment of our preferred model is 1.72 × 10^21 N m, equivalent to Mw 8.1. For the strong aftershock on 2014 April 3, the slip mainly occurred in a relatively compact area, with the major slip area surrounding the hypocentre and a peak amplitude of ˜2.5 m. There is a secondary slip patch located downdip from the hypocentre with a peak slip of ˜2.1 m. The total seismic moment is about 3.9 × 10^20 N m, equivalent to Mw 7.7. Between the rupture areas of the main shock and the 2007 November 14 Mw 7.7 Antofagasta, Chile earthquake lies an earthquake-vacant zone about 150 km long. If no large earthquake or significant aseismic creep occurs in this zone, it has great potential to generate strong earthquakes with magnitudes larger than Mw 7.0 in the future.
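    The quoted moment magnitudes follow from the seismic moments via the standard Hanks-Kanamori relation, Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m. A quick check against the abstract's numbers:

```python
import math

def moment_to_mw(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

print(round(moment_to_mw(1.72e21), 1))  # main shock moment quoted in the abstract
print(round(moment_to_mw(3.9e20), 1))   # aftershock moment quoted in the abstract
```

Both values round to the magnitudes stated in the abstract (Mw 8.1 and Mw 7.7).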

  17. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose using a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of nearest-neighbor searches. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We conclude that a large database provides more accurate predictions of ground motion parameters, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Organizing the database with a KD tree reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
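    A KD tree speeds up nearest-neighbor queries by recursively splitting the feature space on alternating axes, so most of the database can be pruned during a search. The minimal sketch below shows the build and query steps; the 2-D feature vectors are invented placeholders, not the Gutenberg Algorithm's actual filter-bank features.

```python
def build_kdtree(points, depth=0):
    """Recursively build a KD tree over k-dimensional points (tuples)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                      # median point becomes this node
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Return the stored point with minimum squared Euclidean distance to target."""
    if node is None:
        return best
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    close, away = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(close, target, depth + 1, best)
    if diff ** 2 < dist2(best):                 # search sphere crosses the split plane
        best = nearest(away, target, depth + 1, best)
    return best

# Hypothetical 2-D feature vectors (e.g. normalized filter-bank amplitudes)
db = [(0.1, 0.9), (0.4, 0.2), (0.8, 0.7), (0.3, 0.5), (0.9, 0.1)]
tree = build_kdtree(db)
print(nearest(tree, (0.35, 0.45)))
```

Real EEW feature vectors are much higher-dimensional, where KD trees lose some of their advantage; the 85% speedup reported above is an empirical result for the authors' database, not a general guarantee.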

  18. On near-source earthquake triggering

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2009-01-01

    When one earthquake triggers others nearby, what connects them? Two processes are observed: static stress change from fault offset and dynamic stress changes from passing seismic waves. In the near-source region (r ≤ 50 km for M ≥ 5 sources) both processes may be operating, and since both mechanisms are expected to raise earthquake rates, it is difficult to isolate them. We thus compare explosions with earthquakes because only earthquakes cause significant static stress changes. We find that large explosions at the Nevada Test Site do not trigger earthquakes at rates comparable to similar magnitude earthquakes. Surface waves are associated with regional and long-range dynamic triggering, but we note that surface waves with low enough frequency to penetrate to depths where most aftershocks of the 1992 M = 5.7 Little Skull Mountain main shock occurred (~12 km) would not have developed significant amplitude within a 50-km radius. We therefore focus on the best candidate phases to cause local dynamic triggering, direct waves that pass through observed near-source aftershock clusters. We examine these phases, which arrived at the nearest (200-270 km) broadband station before the surface wave train and could thus be isolated for study. Direct comparison of spectral amplitudes of presurface wave arrivals shows that M ≥ 5 explosions and earthquakes deliver the same peak dynamic stresses into the near-source crust. We conclude that a static stress change model can readily explain observed aftershock patterns, whereas it is difficult to attribute near-source triggering to a dynamic process because of the dearth of aftershocks near large explosions.

  19. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.

  20. Orientation damage in the Christchurch cemeteries generated during the Christchurch earthquakes of 2010

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel; Perez-Lopez, Raul; Rodrigez-Pascua, Miguel Angel; Martin-Velazquez, Silvia

    2014-05-01

    Intensity scales determine the damage caused by an earthquake. However, a new methodology takes into account not only the amount of damage but also the type of damage, termed "Earthquake Archaeological Effects" (EAE's), and its orientation (e.g., displaced masonry blocks, impact marks, conjugate fractures, fallen and oriented columns, dipping broken corners, etc.). It focuses not only on the amount of damage but also on its orientation, giving information about the ground motion during the earthquake. On 22 February 2011, an earthquake of magnitude 6.2 struck Christchurch (New Zealand), causing 185 casualties and becoming the second-deadliest natural disaster in New Zealand's history. Due to the magnitude of the catastrophe, the city centre (CBD) was closed, and the most damaged buildings were sealed off and later demolished. For this reason it was not possible to take samples or make observations in the most damaged areas. However, the cemeteries were not closed, and a year later they still remained intact, since the financial means for recovery were devoted to reconstructing the city's infrastructure and housing. This peculiarity of the cemeteries made it possible to measure the earthquake effects. Damage orientation was measured on the tombs, crosses and headstones of the cemeteries (mainly on fallen objects such as fallen crosses, obelisks, displaced tombstones, etc.). 140 measurements were taken in the most important cemeteries (Barbadoes, Addington, Pebleton, Woodston, Broomley and Linwood), covering much of the city area. The procedure involved two main phases: (a) inventory and identification of damage, and (b) analysis of the damage orientations. The orientation was calculated for each element, plotted on a map, and summarized statistically in rose diagrams. The orientation dispersion is high in some cemeteries, but S-N and E-W damage orientations are observed. However, multiple seismogenic faults were responsible for earthquakes and damage in Christchurch during the year following the initial 2010 earthquake, complicating the attribution of the measured orientations to a single source.
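    Damage azimuths like those measured here are axial data (0° and 180° describe the same undirected line), so the conventional way to average them is to double the angles, take a vector mean, and halve the result. A small sketch with invented azimuth values, not the study's 140 measurements:

```python
import math

def mean_orientation(deg_axial):
    """Mean of axial orientation data (0-180 deg) by angle doubling,
    the standard treatment for undirected lines such as damage azimuths."""
    c = sum(math.cos(math.radians(2.0 * a)) for a in deg_axial)
    s = sum(math.sin(math.radians(2.0 * a)) for a in deg_axial)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0

# Hypothetical azimuths clustered around E-W
print(round(mean_orientation([85.0, 95.0, 100.0]), 1))
```

Doubling the angles is what makes 178° and 2° average to roughly 0° rather than 90°, which a naive arithmetic mean would give.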

  1. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  2. Diverse rupture processes in the 2015 Peru deep earthquake doublet.

    PubMed

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo; Zhan, Zhongwen; Duputel, Zacharie

    2016-06-01

    Earthquakes in deeply subducted oceanic lithosphere can involve either brittle or dissipative ruptures. On 24 November 2015, two deep (606 and 622 km) magnitude 7.5 and 7.6 earthquakes occurred 316 s and 55 km apart. The first event (E1) was a brittle rupture with a sequence of comparable-size subevents extending unilaterally ~50 km southward with a rupture speed of ~4.5 km/s. This earthquake triggered several aftershocks to the north along with the other major event (E2), which had 40% larger seismic moment and the same duration (~20 s), but much smaller rupture area and lower rupture speed than E1, indicating a more dissipative rupture. A minor energy release ~12 s after E1 near the E2 hypocenter, possibly initiated by the S wave from E1, and a clear aftershock ~165 s after E1 also near the E2 hypocenter, suggest that E2 was likely dynamically triggered. Differences in deep earthquake rupture behavior are commonly attributed to variations in thermal state between subduction zones. However, the marked difference in rupture behavior of the nearby Peru doublet events suggests that local variations of stress state and material properties significantly contribute to diverse behavior of deep earthquakes.

  3. Low magnitude earthquakes generating significant subsidence: the Lunigiana case study

    NASA Astrophysics Data System (ADS)

    Samsonov, S. V.; Polcari, M.; Melini, D.; Cannelli, V.; Moro, M.; Bignami, C.; Saroli, M.; Vannoli, P.; Stramondo, S.

    2013-12-01

    We applied the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique to investigate and measure surface displacements due to the ML 5.2 earthquake of June 21, 2013, which occurred in the Apuan Alps (NW Italy) at a depth of about 5 km. The Centroid Moment Tensor (CMT) solution from INGV indicates an almost pure normal-fault mechanism. Two differential interferograms showing the coseismic displacement were generated using X-band and C-band data, respectively. The X-band interferogram was obtained from a COSMO-SkyMed ascending pair (azimuth -7.9° and incidence angle 40°) with a time interval of one day (June 21 - June 22) and a 139 m spatial baseline, covering an area of about 40x40 km around the epicenter. The topographic phase component was removed using the 90 m SRTM DEM. The C-band interferogram was computed from two RADARSAT-2 Standard-3 (S3) images, characterized by 24-day temporal and 69 m spatial baselines, acquired on June 18 and July 12, 2013 on an ascending orbit (azimuth -10.8°) with an incidence angle of 34°, covering a 100x100 km area around the epicenter. The topographic phase component was removed using the 30 m ASTER DEM. Adaptive filtering, phase unwrapping with the Minimum Cost Flow (MCF) algorithm, and orbital refinement were also applied to both interferograms. We modeled the observed SAR deformation fields using the Okada analytical formulation within a nonlinear inversion scheme, and found them to be consistent with a fault plane dipping towards the NW at an angle of about 45°. In spite of the small magnitude, this earthquake produced a surface subsidence of about 1.5 cm in the Line-Of-Sight (LOS) direction, corresponding to about 3 cm along the vertical axis, which can be observed in both interferograms and is consistent with the normal-fault mechanism.
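    The relation between LOS and vertical displacement depends on the radar incidence angle. The sketch below is only the first-order geometric projection for purely vertical motion; it ignores the horizontal component, which is why the full Okada inversion quoted above recovers ~3 cm of subsidence from a ~1.5 cm LOS signal rather than the simple cosine-scaled value. The displacement value used is illustrative.

```python
import math

def vertical_to_los(d_vert_cm, incidence_deg):
    """Project a purely vertical displacement onto the radar line of sight.
    LOS = vertical * cos(incidence); horizontal motion is neglected, so this
    is only a first-order consistency check, not a full inversion."""
    return d_vert_cm * math.cos(math.radians(incidence_deg))

# With the X-band pair's 40 deg incidence, 2 cm of pure subsidence would
# map to roughly 1.5 cm of LOS displacement:
print(round(vertical_to_los(2.0, 40.0), 2))
```

The inverse projection (dividing LOS by the cosine) is the usual quick-look estimate of vertical motion before any fault modeling is done.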

  4. Causal mechanisms of seismo-EM phenomena during the 1965–1967 Matsushiro earthquake swarm

    PubMed Central

    Enomoto, Yuji; Yamabe, Tsuneaki; Okumura, Nobuo

    2017-01-01

    The 1965–1967 Matsushiro earthquake swarm in central Japan exhibited two unique characteristics. The first was hydro-mechanical crustal rupture resulting from degassing, volume expansion of CO2/water, and crack opening within the critically stressed crust under a strike-slip stress regime. The second was, despite the low total seismic energy, the occurrence of complex seismo-electromagnetic (seismo-EM) phenomena: an increase in geomagnetic intensity, unusual earthquake lights (EQLs), and atmospheric electric field (AEF) variations. Although the basic rupture process of this earthquake swarm is reasonably well understood in terms of hydro-mechanical crustal rupture, the associated seismo-EM processes remain largely unexplained. Here, we describe a series of seismo-EM mechanisms involved in the hydro-mechanical rupture process, as observed by coupling the electric interaction of rock rupture with CO2 gas and the dielectric-barrier discharge of the modelled fields in laboratory experiments. We found that CO2 gases passing through the newly created fracture surface of the rock were electrified to generate pressure-impressed currents/electric dipoles, which could induce a magnetic field following the Biot-Savart law, decrease the atmospheric electric field, and generate dielectric-barrier-discharge lightning affected by the coupling between seismic and meteorological activities. PMID:28322263

  5. Pre-Earthquake Unipolar Electromagnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Freund, F.

    2013-12-01

    Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks in the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. 
Freund (2010), Estimating the seismotelluric current
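
    The coupled drift-diffusion picture described in this record can be illustrated with a minimal 1D sketch: a pulse of charge carriers drifts in a fixed electric field while diffusing, and the current density follows from the drift and diffusion terms. All parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Minimal 1D drift-diffusion sketch: a carrier pulse drifts in a fixed field E
# while diffusing. All parameter values are illustrative assumptions.
q, mu, D, E = 1.602e-19, 1e-4, 2.6e-6, 100.0  # charge, mobility, diffusivity, field (SI)

nx, L = 200, 1.0
dx = L / nx
dt = 0.2                                      # satisfies CFL and diffusion limits here

x = np.linspace(0.0, L, nx)
p = np.exp(-((x - 0.3) / 0.05) ** 2)          # initial carrier concentration pulse
total0 = p.sum()

for _ in range(200):
    drift = mu * E * p[:-1]                   # upwind drift flux (E > 0)
    diff = -D * (p[1:] - p[:-1]) / dx         # Fickian diffusion flux
    flux = np.concatenate(([0.0], drift + diff, [0.0]))  # zero-flux walls
    p = p - dt * (flux[1:] - flux[:-1]) / dx  # conservative update

J = q * (mu * E * p - D * np.gradient(p, dx)) # current density profile [A/m^2]
```

    The conservative flux form keeps total carrier number fixed while the pulse translates and broadens, which is the behavior a transient current-pulse source needs to reproduce.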

  6. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great 1960 Chile earthquake, and one similar to Japan in the region of the 2011 Tohoku earthquake. We then introduce into the same models the classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformation of the Chile and Japan earthquakes. To follow the deformation process in detail during the entire seismic cycle and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to between minutes and 5 years during the postseismic and interseismic periods. We show that for the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as one hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with geodetic observations over the day-to-4-year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can best be distinguished.

  7. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems, tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art, fully dynamic set of equations describing all relevant physical processes in an earthquake fault system is likely not useful, since it comes with a large number of degrees of freedom, poorly constrained model parameters, and a huge computational effort. Quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability, and aim at providing a link between basic physical concepts and the statistics of seismicity. Within this framework we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of the generated synthetic earthquake catalogs with respect to simplification (e.g., simple two-fault cases) as well as complication (e.g., hidden faults, geometric complexity, heterogeneities of constitutive parameters).
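
    A generic toy illustration of how quasi-static fault simulators turn simple physics into synthetic catalogs (this is a standard stress-redistribution sketch, not the LRE model): cells on a fault load slowly, fail at heterogeneous thresholds, and pass part of their stress drop to neighbours, producing cascades of variable size.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # fault cells
strength = rng.uniform(1.0, 2.0, n)      # heterogeneous failure thresholds
stress = np.zeros(n)
catalog = []                             # synthetic catalog: (step, event size)

for step in range(50000):
    stress += 1e-3                       # slow tectonic loading
    failed = stress >= strength
    size = 0
    while failed.any():                  # cascade: failures may trigger neighbours
        idx = np.flatnonzero(failed)
        size += idx.size
        drop = stress[idx]
        stress[idx] = 0.0                # full stress drop on failed cells
        stress[(idx + 1) % n] += 0.25 * drop   # partial transfer to neighbours
        stress[(idx - 1) % n] += 0.25 * drop   # (half the drop is dissipated)
        failed = stress >= strength
    if size:
        catalog.append((step, size))

sizes = np.array([s for _, s in catalog])
```

    Because half of each stress drop is dissipated, every cascade terminates; the surviving transfer still produces occasional multi-cell events, giving a catalog whose size statistics can be compared against observed seismicity.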

  8. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process, and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of
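
    The stationary-Poisson assumption mentioned in this record is easy to probe: if occurrence is Poissonian, interevent times are exponential. A minimal numpy sketch using a Kolmogorov-Smirnov distance against a fitted exponential (note that estimating the rate from the same data makes standard KS critical values only approximate, a detail hedged here for illustration):

```python
import numpy as np

def ks_exp(times):
    """KS distance between interevent times and a fitted exponential."""
    dt = np.sort(np.diff(np.sort(times)))
    cdf = 1.0 - np.exp(-dt / dt.mean())          # exponential CDF, fitted rate
    n = dt.size
    hi = np.arange(1, n + 1) / n - cdf           # ECDF excursions above the model...
    lo = cdf - np.arange(0, n) / n               # ...and below it
    return max(hi.max(), lo.max())

rng = np.random.default_rng(1)
poisson_cat = np.cumsum(rng.exponential(10.0, 500))      # stationary Poisson catalog
cluster_cat = np.sort(np.concatenate(                    # strongly clustered catalog
    [t0 + rng.exponential(0.1, 20) for t0 in rng.uniform(0.0, 5000.0, 25)]))
```

    The Poisson catalog yields a small distance (gaps are consistent with exponential), while the clustered catalog, whose gap distribution is strongly bimodal, yields a large one.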

  9. Post-earthquake coastal evolution and recovery of an embayed beach in central-southern Chile

    NASA Astrophysics Data System (ADS)

    Martínez, Carolina; Rojas, Daniel; Quezada, Matías; Quezada, Jorge; Oliva, Ricardo

    2015-12-01

    Earthquakes and tsunamis are significant factors for change along active margin shores, and influence coastal evolution. The Chilean coast was affected in 2010 by a subduction earthquake of magnitude Mw 8.8 and by a trans-Pacific tsunami, which caused severe geomorphological changes and damaged homes. Following these events, the magnitude of the changes affecting Chile's central-southern coast (37°S) and the role of subduction earthquakes in coastal evolution on a historical scale were investigated. At Lebu Bay (an embayed beach), data were generated on variations in time and space along the shoreline, on topographic and bathymetric changes in the bay, and on littoral morphodynamic processes. Logarithmic and parabolic models were applied to the shoreline, along with map overlays, in order to determine changes. Shoreline processes were analyzed using statistics for waves, tides and sediment transport under pre- and post-tsunami conditions. An average accretion rate of 2.80 m/year (1984-2010) was established for the shoreline, with a strong trend towards accretion in the last 30 years. A parabolic function best represented the general form of the shoreline, although the presence of a river in the concave zone affected the fit in this sector. Two factors controlled historical changes on the beach: one of anthropogenic origin, in addition to the earthquake and tsunami of February 27th, 2010. The post-earthquake recovery was fast, and the beach is currently in a stable condition despite the interseismic subsidence that preceded the event. This coastal system showed high resilience to coastal geomorphological changes induced by high-impact natural disturbances; the opposite occurred, however, for changes induced by anthropogenic disturbances.

  10. Correlation between elastic energy density and deep earthquakes distribution

    NASA Astrophysics Data System (ADS)

    Gunawardana, P. M.; Morra, G.

    2017-05-01

    The mechanism at the origin of earthquakes below 30 km remains elusive, as these events cannot be explained by brittle frictional processes. In this work we focus on the global distribution of earthquake frequency vs. depth, from ∼50 km to 670 km depth. We develop a numerical model of self-driven subduction by solving the non-homogeneous Stokes equation using the particle-in-cell method in combination with a conservative finite-difference scheme, here solved for the first time using Python and NumPy only. We show that most of the elastic energy is stored in the slab core and that it is strongly correlated with the earthquake frequency-depth distribution for a wide range of lithosphere and lithosphere-core viscosities. Based on our results, we suggest that 1) slab bending at the bottom of the upper mantle causes the peak of the earthquake frequency-depth distribution observed at mantle transition zone depths; and 2) the presence of a highly viscous stiff core inside the lithosphere generates an elastic energy distribution that better fits the exponential decay observed at intermediate depths.
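
    The particle-in-cell machinery this record implements in Python/NumPy rests on scattering particle-carried properties (e.g. viscosity, density) to grid nodes with bilinear weights. A self-contained sketch of that interpolation step (a generic implementation, not the authors' code):

```python
import numpy as np

def particles_to_grid(px, py, pval, nx, ny):
    """Bilinear (area-weighted) scatter of particle values onto an
    (ny, nx) node grid covering the unit square."""
    i = np.minimum((px * (nx - 1)).astype(int), nx - 2)   # cell indices, clipped
    j = np.minimum((py * (ny - 1)).astype(int), ny - 2)
    fx = px * (nx - 1) - i                                # fractional offsets
    fy = py * (ny - 1) - j
    val = np.zeros((ny, nx))
    wts = np.zeros((ny, nx))
    for di, dj, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy), (1, 1, fx * fy)]:
        np.add.at(val, (j + dj, i + di), w * pval)        # accumulate weighted values
        np.add.at(wts, (j + dj, i + di), w)               # accumulate weights
    return val / np.maximum(wts, 1e-300)

rng = np.random.default_rng(4)
px, py = rng.random(20000), rng.random(20000)
grid = particles_to_grid(px, py, np.full(20000, 3.0), 8, 8)
```

    `np.add.at` is used instead of fancy-index `+=` because many particles map to the same node and the unbuffered form accumulates all contributions correctly.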

  11. HF radar detection of infrasonic waves generated in the ionosphere by the 28 March 2005 Sumatra earthquake

    NASA Astrophysics Data System (ADS)

    Bourdillon, Alain; Occhipinti, Giovanni; Molinié, Jean-Philippe; Rannou, Véronique

    2014-03-01

    Surface waves generated by earthquakes create atmospheric waves that are detectable in the ionosphere using radio wave techniques: HF Doppler sounding, GPS and altimeter TEC measurements, as well as radar measurements. We present observations performed with the over-the-horizon (OTH) radar NOSTRADAMUS after the very strong earthquake (M=8.6) that occurred in Sumatra on March 28, 2005. An original method based on the analysis of the RTD (Range-Time-Doppler) image is proposed to identify the multi-chromatic ionospheric signature of the Rayleigh wave. The method has the advantage of preserving information on range variation and time evolution, and it provides comprehensive results as well as easy identification of the waves. A Burg algorithm of order 1 is used to compute the Doppler shift of the radar signal, with a sensitivity as good as that obtained with higher orders. The multi-chromatic observation of the ionospheric signature of the Rayleigh wave allows us to extract information consistent with the Rayleigh-wave dispersion curve: we observe two components of the Rayleigh waves with estimated group velocities of 3.8 km/s and 3.6 km/s, associated with 28 mHz (T~36 s) and 6.1 mHz (T~164 s) waves, respectively. Spectral analysis of the RTD image also reveals several oscillations at frequencies between 3 and 8 mHz that are clearly associated with the transfer of energy from the solid Earth to the atmosphere, as described by normal-mode theory for a complete planet with an atmosphere. Oscillations at frequencies above 8 mHz are also observed in the spectrum, but with smaller amplitudes. Particular attention is paid to normal modes 0S29 and 0S37, which are strongly involved in the coupling process.
As the proposed method is not frequency-specific, it could be used for the detection of ionospheric perturbations induced not only by earthquakes, but also by other natural phenomena, such as volcanic explosions and
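
    The order-1 Burg estimate used in this record can be sketched as follows: for a complex radar return, the order-1 Burg reflection coefficient is k1 = -2c / Σ(|z_n|² + |z_{n-1}|²) with c = Σ z_n·conj(z_{n-1}), and the phase of the resulting pole, i.e. the phase of c, gives the dominant Doppler shift directly. The synthetic signal and its parameters below are illustrative assumptions.

```python
import numpy as np

def burg1_doppler(z, fs):
    """Doppler shift [Hz] from the order-1 Burg model of a complex series z:
    the pole angle equals the phase of the lag-1 term c = sum(z_n * conj(z_{n-1}))."""
    c = np.sum(z[1:] * np.conj(z[:-1]))
    return fs * np.angle(c) / (2.0 * np.pi)

fs = 50.0                                  # sampling rate [Hz] (assumed)
t = np.arange(2048) / fs
rng = np.random.default_rng(2)
z = np.exp(2j * np.pi * 0.8 * t)           # 0.8 Hz "Doppler line"
z = z + 0.02 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048))
doppler = burg1_doppler(z, fs)
```

    For a single dominant line this first-order estimate is essentially a phase-of-autocorrelation (pulse-pair) measurement, which is why it matches higher-order fits in sensitivity while staying cheap enough for per-pixel use on an RTD image.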

  12. Continuity of the West Napa–Franklin fault zone inferred from guided waves generated by earthquakes following the 24 August 2014 Mw 6.0 South Napa earthquake

    USGS Publications Warehouse

    Catchings, Rufus D.; Goldman, Mark R.; Li, Yong-Gang; Chan, Joanne

    2016-01-01

    We measure peak ground velocities from fault‐zone guided waves (FZGWs) generated by on‐fault earthquakes associated with the 24 August 2014 Mw 6.0 South Napa earthquake. The data were recorded on three arrays deployed across the fault, north and south of the 2014 surface rupture. The observed FZGWs indicate that the West Napa fault zone (WNFZ) and the Franklin fault (FF) are continuous in the subsurface for at least 75 km. Previously published potential‐field data indicate that the WNFZ extends northward to the Maacama fault (MF), and previous geologic mapping indicates that the FF extends southward to the Calaveras fault (CF); this suggests a total length of at least 110 km for the WNFZ–FF. Because the WNFZ–FF appears contiguous with the MF and CF, these faults apparently form a continuous Calaveras–Franklin–WNFZ–Maacama (CFWM) fault that is second only in length (∼300 km) to the San Andreas fault in the San Francisco Bay area. The long distances over which we observe FZGWs, coupled with their high amplitudes (2–10 times those of the S waves), suggest that strong shaking from large earthquakes on any part of the CFWM fault may cause far‐field amplified fault‐zone shaking. We interpret guided waves and seismicity cross sections to indicate multiple upper crustal splays of the WNFZ–FF, including a northward extension of the Southampton fault, which may cause strong shaking in the Napa Valley and the Vallejo area. Based on travel times from each earthquake to each recording array, we estimate average P‐, S‐, and guided‐wave velocities within the WNFZ–FF (4.8–5.7, 2.2–3.2, and 1.1–2.8 km/s, respectively), with FZGW velocities ranging from 58% to 93% of the average S‐wave velocities.

  13. Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone

    NASA Astrophysics Data System (ADS)

    Xue, L.; Li, H.; Brodsky, E. E.; Wang, H.; Pei, J.

    2012-12-01

    Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (˜200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ˜30°, and the average amplitude response is 6×10⁻⁷ strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response give an average storage coefficient S of 2×10⁻⁴ and an average transmissivity T of 6×10⁻⁷ m². The hydraulic diffusivity D = T/S is then 3×10⁻³ m²/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, it means that hydrological processes should have an effect on the earthquake rupture process. This measurement is made through continuous monitoring, and we can track the evolution of hydraulic properties
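
    The phase and amplitude response at the M2 tide can be extracted by least-squares harmonic fitting. A sketch on synthetic data with a known lag (the amplitude, lag, and noise level are illustrative values, not the Wenchuan observations):

```python
import numpy as np

f_m2 = 1.0 / (12.4206 * 3600.0)               # M2 frequency [Hz] (12.42 h period)
t = np.arange(0.0, 30 * 86400.0, 600.0)       # 30 days of 10-minute water-level samples
true_amp, true_lag = 0.02, np.deg2rad(30.0)   # 2 cm amplitude, 30 deg phase lag (assumed)
rng = np.random.default_rng(5)
h = true_amp * np.cos(2 * np.pi * f_m2 * t - true_lag) \
    + 0.002 * rng.standard_normal(t.size)     # synthetic record with noise

# least-squares fit of a*cos + b*sin at the M2 frequency:
# a*cos(wt) + b*sin(wt) = A*cos(wt - phi) with A = hypot(a, b), phi = atan2(b, a)
G = np.column_stack([np.cos(2 * np.pi * f_m2 * t), np.sin(2 * np.pi * f_m2 * t)])
a, b = np.linalg.lstsq(G, h, rcond=None)[0]
amp = np.hypot(a, b)                          # recovered amplitude [m]
lag = np.degrees(np.arctan2(b, a))            # recovered phase lag [deg]
```

    With a month of 10-minute samples the harmonic fit recovers the 30° lag to a small fraction of a degree, which is why the tidal-response method can track transmissivity and storage continuously through an aftershock sequence.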

  14. Charles Darwin's earthquake reports

    NASA Astrophysics Data System (ADS)

    Galiev, Shamil

    2010-05-01

    problems which began to be discussed only recently. Earthquakes often precede volcanic eruptions. According to Darwin, earthquake-induced shock may be a common mechanism for the simultaneous eruption of volcanoes separated by long distances. In particular, Darwin wrote that ‘… the elevation of many hundred square miles of territory near Concepcion is part of the same phenomenon, with that splashing up, if I may so call it, of volcanic matter through the orifices in the Cordillera at the moment of the shock;…'. According to Darwin, the crust is a system in which fractured zones and zones of seismic and volcanic activity interact. Darwin formulated the task of considering together the processes now studied as seismology and volcanology. However, the difficulties are such that the study of interactions between earthquakes and volcanoes began only recently, and his work on this had relatively little impact on the development of the geosciences. In this report, we discuss how the latest data on seismic and volcanic events support Darwin's observations and ideas about the 1835 Chilean earthquake. Material from researchspace.auckland.ac.nz/handle/2292/4474 is used. We show how modern mechanical tests from impact engineering and simple experiments with weakly cohesive materials also support his observations and ideas. On the other hand, we have developed a mathematical theory of earthquake-induced catastrophic wave phenomena. This theory allows us to explain the most important aspects of Darwin's earthquake reports. This is achieved by simplifying the fundamental governing equations of the problems considered into strongly nonlinear wave equations. Solutions of these equations are constructed with the help of analytic and numerical techniques. The solutions can model different strongly nonlinear wave phenomena that arise in a variety of physical contexts. A comparison with relevant experimental observations is also presented.

  15. Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise

    NASA Astrophysics Data System (ADS)

    Scott, Chelsea Phipps

    geometry and kinematics following the application of atmospheric corrections to an event spanned by real InSAR data, the 1992 M5.6 Little Skull Mountain, Nevada, earthquake. Finally, I discuss how the derived workflow could be applied to other tectonic problems, such as solving for interseismic strain accumulation rates in a subduction zone environment. I also study the evolution of the crustal stress field in the South American plate following two recent great earthquakes along the Nazca-South America subduction zone. I show that the 2010 Mw 8.8 Maule, Chile, earthquake very likely triggered several moderate magnitude earthquakes in the Andean volcanic arc and backarc. This suggests that great earthquakes modulate the crustal stress field outside of the immediate aftershock zone and that far-field faults may pose a heightened hazard following large subduction earthquakes. The 2014 Mw 8.1 Pisagua, Chile, earthquake reopened ancient surface cracks that have been preserved in the hyperarid forearc setting of northern Chile for thousands of earthquake cycles. The orientation of cracks reopened in this event reflects the static and likely dynamic stresses generated by the recent earthquake. Coseismic cracks serve as a reliable marker of permanent earthquake deformation and plate boundary behavior persistent over the million-year timescale. This work on great earthquakes suggests that InSAR observations can play a crucial role in furthering our understanding of the crustal mechanics that drive seismic cycle processes in subduction zones.

  16. Large earthquake rupture process variations on the Middle America megathrust

    NASA Astrophysics Data System (ADS)

    Ye, Lingling; Lay, Thorne; Kanamori, Hiroo

    2013-11-01

    The megathrust fault between the underthrusting Cocos plate and overriding Caribbean plate recently experienced three large ruptures: the August 27, 2012 (Mw 7.3) El Salvador; September 5, 2012 (Mw 7.6) Costa Rica; and November 7, 2012 (Mw 7.4) Guatemala earthquakes. All three events involve shallow-dipping thrust faulting on the plate boundary, but they had variable rupture processes. The El Salvador earthquake ruptured from about 4 to 20 km depth, with a relatively large centroid time of ˜19 s, low seismic moment-scaled energy release, and a depleted teleseismic short-period source spectrum similar to that of the September 2, 1992 (Mw 7.6) Nicaragua tsunami earthquake that ruptured the adjacent shallow portion of the plate boundary. The Costa Rica and Guatemala earthquakes had large slip in the depth range 15 to 30 km, and more typical teleseismic source spectra. Regional seismic recordings have higher short-period energy levels for the Costa Rica event relative to the El Salvador event, consistent with the teleseismic observations. A broadband regional waveform template correlation analysis is applied to categorize the focal mechanisms for larger aftershocks of the three events. Modeling of regional wave spectral ratios for clustered events with similar mechanisms indicates that interplate thrust events have corner frequencies, normalized by a reference model, that increase down-dip from anomalously low values near the Middle America trench. Relatively high corner frequencies are found for thrust events near Costa Rica; thus, variations along strike of the trench may also be important. Geodetic observations indicate trench-parallel motion of a forearc sliver extending from Costa Rica to Guatemala, and low seismic coupling on the megathrust has been inferred from a lack of boundary-perpendicular strain accumulation. The slip distributions and seismic radiation from the large regional thrust events indicate relatively strong seismic coupling near Nicoya, Costa

  17. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
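
    A minimal numpy-only sketch of the ARMA idea described in this record: simulate a pure AR(2) "ground-motion-like" series and recover its coefficients with Yule-Walker estimation (the report used maximum-likelihood fits on real acceleration data; the coefficients here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.array([1.2, -0.8])            # true AR(2) coefficients (stable: complex roots)
n = 20000
x = np.zeros(n)
e = rng.standard_normal(n)             # white-noise innovations
for k in range(2, n):
    x[k] = phi[0] * x[k-1] + phi[1] * x[k-2] + e[k]

def yule_walker(x, p):
    """AR(p) coefficients from the sample autocovariances (Yule-Walker equations)."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    return np.linalg.solve(R, r[1:])

phi_hat = yule_walker(x, 2)
```

    With 20,000 samples the Yule-Walker estimates land within a few hundredths of the true coefficients; the fitted model can then be driven by fresh white noise to simulate new ground-motion-like records, which is the use case the report proposes.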

  18. Permeability, storage and hydraulic diffusivity controlled by earthquakes

    NASA Astrophysics Data System (ADS)

    Brodsky, E. E.; Fulton, P. M.; Xue, L.

    2016-12-01

    Earthquakes can increase permeability in fractured rocks. In the far field, such permeability increases are attributed to seismic waves and can last for months after the initial earthquake. Laboratory studies suggest that unclogging of fractures by the transient flow driven by seismic waves is a viable mechanism. These dynamic permeability increases may contribute to permeability enhancement in the seismicity clouds accompanying hydraulic fracturing. Permeability enhancement by seismic waves could potentially be engineered, and the experiments suggest the process will be most effective at a preferred frequency. We have recently observed similar processes inside active fault zones after major earthquakes. A borehole observatory in the fault that generated the M9.0 2011 Tohoku earthquake reveals a sequence of temperature pulses during the secondary aftershock sequence of an M7.3 aftershock. The pulses are attributed to fluid advection by flow through a zone of transiently increased permeability. Directly after the M7.3 earthquake, the newly damaged fault zone is highly susceptible to further permeability enhancement, but it ultimately heals within a month and is no longer as sensitive. The observation suggests that the newly damaged fault zone is more prone to fluid pulsing than would be expected based on the long-term permeability structure. Even longer-term healing is seen inside the fault zone of the 2008 M7.9 Wenchuan earthquake. The competition between damage and healing (or clogging and unclogging) results in dynamically controlled permeability, storage and hydraulic diffusivity. Recent measurements of in situ fault zone architecture at the 1-10 m scale suggest that active fault zones often have hydraulic diffusivities near 10⁻² m²/s. This uniformity holds even within the damage zone of the San Andreas fault, where permeability and storage increases balance each other to achieve this value of diffusivity over a 400 m wide region.
We speculate that fault zones

  19. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006] as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate sized earthquakes of the Gole Larghe fault zone in the Italian Alps where the dynamic shear strength is well-constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. 
More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow
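
    The Madariaga-type relation assumed in this record's analysis connects corner frequency to stress drop through an inferred source radius: r = k·β/fc, and stress drop = 7·M0/(16·r³). The k value and the example numbers below are standard textbook choices, assumed for illustration rather than taken from the paper's calibration.

```python
def madariaga_stress_drop(m0, fc, beta, k=0.21):
    """Stress drop [Pa] from seismic moment m0 [N m], corner frequency fc [Hz],
    and shear wavespeed beta [m/s]. r = k*beta/fc is the inferred source radius;
    k = 0.21 is the commonly used Madariaga S-wave factor (assumed here)."""
    r = k * beta / fc
    return 7.0 * m0 / (16.0 * r ** 3)

# an intermediate-size event: M0 = 1e17 N m (Mw ~ 5.3), fc = 0.4 Hz, beta = 3.5 km/s
dsig = madariaga_stress_drop(1e17, 0.4, 3500.0)
```

    These inputs give a stress drop of roughly 7 MPa, in the few-MPa range that the abstract cites for typical seismologically measured static stress drops; the strong fc³ dependence is why modest corner-frequency errors dominate the uncertainty.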

  20. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    PubMed

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  1. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation

    PubMed Central

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-01-01

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897

  2. Eclogitization of the Subducted Oceanic Crust and Its Implications for the Mechanism of Slow Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Xinyang; Zhao, Dapeng; Suzuki, Haruhiko; Li, Jiabiao; Ruan, Aiguo

    2017-12-01

    The generating mechanism and process of slow earthquakes can help us better understand the seismogenic process and the petrological evolution of the subduction system, but they remain poorly understood. In this work we present robust P and S wave tomography and Poisson's ratio images of the subducting Philippine Sea Plate beneath the Kii peninsula in Southwest Japan. Our results clearly reveal the spatial extent and variation of a low-velocity, high-Poisson's-ratio layer which is interpreted as the remnant of the subducted oceanic crust. The low-velocity layer disappears at depths >50 km, which is attributed to crustal eclogitization and consumption of fluids. The crustal eclogitization and destruction of the impermeable seal play a key role in the generation of slow earthquakes. The Moho depth of the overlying plate is an important factor affecting the depth range of slow earthquakes in warm subduction zones, due to the transition of interface permeability from low to high there. The possible mechanism of the deep slow earthquakes is rupture and shear slip of the dehydrated oceanic crust at the transition zone, in response to the crustal eclogitization and the temporal stress/strain field. A potential cause of the slow-event gap beneath easternmost Shikoku and the Kii channel is the premature rupture of the subducted oceanic crust due to the large tensional force.

  3. U.S. Tsunami Information technology (TIM) Modernization: Performance Assessment of Tsunamigenic Earthquake Discrimination System

    NASA Astrophysics Data System (ADS)

    Hagerty, M. T.; Lomax, A.; Hellman, S. B.; Whitmore, P.; Weinstein, S.; Hirshorn, B. F.; Knight, W. R.

    2015-12-01

    Tsunami warning centers must rapidly decide whether an earthquake is likely to generate a destructive tsunami in order to issue a tsunami warning quickly after a large event. For very large events (Mw > 8 or so), magnitude and location alone are sufficient to warrant an alert. However, for events of smaller magnitude (e.g., Mw ~ 7.5), particularly for so-called "tsunami earthquakes", magnitude alone is insufficient to issue an alert, and other measurements must be rapidly made and used to assess tsunamigenic potential. The Tsunami Information technology Modernization (TIM) is a National Oceanic and Atmospheric Administration (NOAA) project to update and standardize the earthquake and tsunami monitoring systems currently employed at the U.S. Tsunami Warning Centers in Ewa Beach, Hawaii (PTWC) and Palmer, Alaska (NTWC). We (ISTI) are responsible for implementing the seismic monitoring components in this new system, including real-time seismic data collection and seismic processing. The seismic data processor includes a variety of methods aimed at real-time discrimination of tsunamigenic events, including: Mwp, Me, slowness (Theta), W-phase, mantle magnitude (Mm), array processing and finite-fault inversion. In addition, it contains the ability to designate earthquake scenarios and play the resulting synthetic seismograms through the processing system. Thus, it is also a convenient tool that integrates research and monitoring and may be used to calibrate and tune the real-time monitoring system. Here we show results of the automated processing system for a large dataset of subduction zone earthquakes containing recent tsunami earthquakes, and we examine the accuracy of the various discrimination methods and discuss issues related to their successful real-time application.
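
    One of the discriminants listed in this record, the slowness parameter Theta, is simply the logarithm of the radiated-energy-to-moment ratio, Theta = log10(E_R / M_0). The −5.7 cutoff below is a commonly cited value for flagging energy-deficient "tsunami earthquakes" and is assumed here for illustration, as are the example numbers.

```python
import numpy as np

def theta(radiated_energy, seismic_moment):
    """Slowness parameter Theta = log10(E_R / M_0), both inputs in SI units."""
    return np.log10(radiated_energy / seismic_moment)

def looks_slow(th, cutoff=-5.7):
    """Flag a candidate 'tsunami earthquake' (cutoff value is an assumption)."""
    return th <= cutoff

# an ordinary interplate event vs. an energy-deficient, 'slow' rupture (illustrative)
th_ordinary = theta(1.3e15, 1.0e20)    # E_R/M_0 ~ 10^-4.9
th_slow = theta(1.0e14, 1.0e20)        # E_R/M_0 = 10^-6
```

    Because both E_R and M_0 can be measured teleseismically within minutes, a ratio-based discriminant like this is well suited to the real-time decision problem described above.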

  4. The exposure of Sydney (Australia) to earthquake-generated tsunamis, storms and sea level rise: a probabilistic multi-hazard approach

    PubMed Central

    Dall'Osso, F.; Dominey-Howes, D.; Moore, C.; Summerhayes, S.; Withycombe, G.

    2014-01-01

    Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney. PMID:25492514
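The annual probabilities used in such scenarios (1:100, 1:1,000, 1:10,000) can be converted into the probability of at least one event over a planning horizon, under the simplifying assumption of independent years. A small illustrative sketch (our numbers, not figures from the study):

```python
def prob_in_window(annual_prob, years):
    """Probability of at least one event in `years`, assuming each year is
    an independent trial with the given annual exceedance probability."""
    return 1.0 - (1.0 - annual_prob) ** years

# A "1:100" tsunami over a 50-year planning horizon (illustrative):
print(round(prob_in_window(1 / 100, 50), 3))   # 0.395
```

This kind of conversion is why nominally rare events still matter for infrastructure with multi-decade lifetimes.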

  5. The exposure of Sydney (Australia) to earthquake-generated tsunamis, storms and sea level rise: a probabilistic multi-hazard approach.

    PubMed

    Dall'Osso, F; Dominey-Howes, D; Moore, C; Summerhayes, S; Withycombe, G

    2014-12-10

    Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney.

  6. Study of the characteristics of seismic signals generated by natural and cultural phenomena. [such as earthquakes, sonic booms, and nuclear explosions]

    NASA Technical Reports Server (NTRS)

    Goforth, T. T.; Rasmussen, R. K.

    1974-01-01

    Seismic data recorded at the Tonto Forest Seismological Observatory in Arizona and the Uinta Basin Seismological Observatory in Utah were used to compare the frequency of occurrence, severity, and spectral content of ground motions resulting from earthquakes and other natural and man-made sources with the motions generated by sonic booms. A search of data recorded at the two observatories yielded a classification of over 180,000 earthquake phase arrivals on the basis of frequency of occurrence versus maximum ground velocity. The majority of the large ground velocities were produced by seismic surface waves from moderate to large earthquakes in the western United States, particularly along the Pacific Coast of the United States and northern Mexico. A visual analysis of raw film seismogram data over a 3-year period indicates that local and regional seismic events, including quarry blasts, are frequent in occurrence but do not produce ground motions at the observatories comparable to either the large western United States earthquakes or to sonic booms. Seismic data from the Nevada Test Site nuclear blasts were used to derive magnitude-distance-sonic boom overpressure relations.

  7. Precise Relative Earthquake Depth Determination Using Array Processing Techniques

    NASA Astrophysics Data System (ADS)

    Florez, M. A.; Prieto, G. A.

    2014-12-01

    The mechanism for intermediate-depth and deep earthquakes is still under debate. The temperatures and pressures are above the point where ordinary fractures ought to occur. Key to constraining this mechanism is the precise determination of hypocentral depth. It is well known that using depth phases allows for significant improvement in event depth determination; however, routinely and systematically picking such phases for teleseismic or regional arrivals is problematic due to poor signal-to-noise ratios around the pP and sP phases. To overcome this limitation we have taken advantage of the additional information carried by seismic arrays. We have used beamforming and velocity spectral analysis techniques to precisely measure pP-P and sP-P differential travel times. These techniques are further extended to achieve subsample accuracy and to allow for events where the signal-to-noise ratio is close to or even less than 1.0. The individual estimates obtained at different subarrays for a pair of earthquakes can be combined using a double-difference technique in order to precisely map seismicity in regions where it is tightly clustered. We illustrate these methods using data from the recent M 7.9 Alaska earthquake and its aftershocks, as well as data from the Bucaramanga nest in northern South America, arguably the densest and most active intermediate-depth earthquake nest in the world.
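Subsample differential-time measurement of the sort described here is commonly done by interpolating the cross-correlation peak. A minimal sketch (our synthetic wavelet, not the authors' array data) using three-point parabolic refinement:

```python
import numpy as np

def subsample_delay(x, y, dt):
    """Delay of y relative to x, refined below one sample by fitting a
    parabola through the cross-correlation peak and its two neighbors."""
    c = np.correlate(y, x, mode="full")
    k = int(np.argmax(c))
    lag = float(k - (len(x) - 1))          # integer-sample lag of the peak
    if 0 < k < len(c) - 1:                 # three-point parabolic refinement
        denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
        if denom != 0.0:
            lag += 0.5 * (c[k - 1] - c[k + 1]) / denom
    return lag * dt

# Synthetic test: two identical wavelets offset by 12.3 samples (0.123 s):
dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.exp(-((t - 5.0) / 0.3) ** 2)
y = np.exp(-((t - 5.123) / 0.3) ** 2)
delay = subsample_delay(x, y, dt)
print(round(delay, 3))   # ≈ 0.123, recovering the fractional-sample offset
```

The same estimator applied to pP and P beams would yield the pP-P differential time used for depth determination.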

  8. The variation of the ground electric field associated with the Mei-Nung earthquake on Feb. 6, 2016

    NASA Astrophysics Data System (ADS)

    Bing-Chih Chen, Alfred; Yeh, Er-Chun; Chuang, Chia-Wen

    2017-04-01

    Recent studies show that a strong coupling exists between the lithosphere and atmosphere, extending up to the ionosphere. Natural phenomena at the ground surface, such as ocean variations, volcanic and seismic activity (including earthquakes), and lightning, can generate immediate impacts in the ionosphere through electrodynamic processes. The electric field near the ground is one of the quantities with potential for exploring this coupling process, especially when driven by earthquakes. Unfortunately, thunderstorms, dust storms, and human activities also affect the electric field measured at the ground. To investigate the feasibility of a network for monitoring variations of the ground electric field driven by lightning and earthquakes, a field mill has been deployed on the NCKU campus since Dec. 2015; it recorded the moment magnitude 6.4 earthquake that struck 28 km away on 6 Feb. 2016. The recorded ground electric field decreased steadily beginning 1.5 days before the earthquake and then returned gradually to its normal level. Moreover, this feature cannot be identified in any other period of the field test. A detailed analysis is reported in this presentation.

  9. Response of a 14-story Anchorage, Alaska, building in 2002 to two close earthquakes and two distant Denali fault earthquakes

    USGS Publications Warehouse

    Celebi, M.

    2004-01-01

    The recorded responses of an Anchorage, Alaska, building during four significant earthquakes that occurred in 2002 are studied. Two earthquakes, including the 3 November 2002 M7.9 Denali fault earthquake, with epicenters approximately 275 km from the building, generated long trains of long-period (>1 s) surface waves. The other two smaller earthquakes occurred at subcrustal depths practically beneath Anchorage and produced higher frequency motions. These two pairs of earthquakes have different impacts on the response of the building. Higher modes are more pronounced in the building response during the smaller nearby events. The building responses indicate that the close coupling of translational and torsional modes causes a significant beating effect. It is also possible that there is some resonance occurring due to the site frequency being close to the structural frequency. Identification of dynamic characteristics and behavior of buildings can provide important lessons for future earthquake-resistant designs and retrofit of existing buildings. © 2004, Earthquake Engineering Research Institute.
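The beating effect arises when two modes with close frequencies superpose, producing an amplitude-modulated response. A toy sketch with assumed frequencies (illustrative values, not the building's measured modes):

```python
import numpy as np

# Two modes with close frequencies superpose into a "beating" response:
f1, f2 = 0.50, 0.55                    # Hz (e.g., translational vs. torsional)
t = np.linspace(0.0, 60.0, 6001)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): a carrier at the mean
# frequency inside an envelope that repeats with the beat period 1/|f2 - f1|:
beat_period = 1.0 / abs(f2 - f1)
print(round(beat_period, 3))           # 20.0 s between envelope maxima
```

The closer the two modal frequencies, the longer the beat period and the more pronounced the slow amplitude swell a building occupant would feel.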

  10. Earthquake and submarine landslide tsunamis: how can we tell the difference? (Invited)

    NASA Astrophysics Data System (ADS)

    Tappin, D. R.; Grilli, S. T.; Harris, J.; Geller, R. J.; Masterlark, T.; Kirby, J. T.; Ma, G.; Shi, F.

    2013-12-01

    Several major recent events have shown the tsunami hazard from submarine mass failures (SMF), i.e., submarine landslides. In 1992 a small, earthquake-triggered landslide generated a tsunami over 25 meters high on Flores Island. In 1998 another small earthquake triggered a sediment slump that generated a tsunami up to 15 meters high, devastating the local coast of Papua New Guinea and killing 2,200 people. It was this event that led to the recognition of the importance of marine geophysical data in mapping the architecture of seabed sediment failures that could then be used in modeling and validating the tsunami-generating mechanism. Seabed mapping of the 2004 Indian Ocean earthquake rupture zone demonstrated, however, that large, if not great, earthquakes do not necessarily cause major seabed failures, but that along some convergent margins frequent earthquakes result in smaller sediment failures that are not tsunamigenic. Older events, such as Messina, 1908, Makran, 1945, Alaska, 1946, and Java, 2006, all have the characteristics of SMF tsunamis, but for these a SMF source has not been proven. When the 2011 tsunami struck Japan, it was generally assumed that it was directly generated by the earthquake. The earthquake has some unusual characteristics, such as a shallow rupture that is somewhat slow, but is not a 'tsunami earthquake.' A number of simulations of the tsunami based on an earthquake source have been published, but in general the best results are obtained by adjusting fault rupture models with tsunami wave gauge or other data; to the extent that they can model the recorded tsunami data, this demonstrates self-consistency rather than validation. Here we consider some of the existing source models of the 2011 Japan event and present new tsunami simulations based on a combination of an earthquake source and an SMF mapped from offshore data. We show that the multi-source tsunami agrees well with available tide gauge data and field observations and the wave data from

  11. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562
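A minimal slider-block-style cellular automaton of the kind surveyed here can be sketched as follows; this is the Olami-Feder-Christensen variant with parameters of our choosing, not any specific model from the paper:

```python
import numpy as np

def ofc_avalanches(n=32, alpha=0.2, steps=2000, seed=0):
    """Minimal Olami-Feder-Christensen slider-block cellular automaton.

    Each cell carries a 'stress'. The grid is driven until the most
    stressed cell reaches the failure threshold (1.0); a failing cell
    drops to zero and passes a fraction alpha of its stress to each of
    its four neighbors, possibly triggering an avalanche ('earthquake').
    Returns the avalanche sizes; their distribution is roughly power-law.
    """
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.0, 1.0, (n, n))
    sizes = []
    for _ in range(steps):
        s += 1.0 - s.max()                  # drive the weakest link to failure
        unstable = list(zip(*np.where(s >= 1.0)))
        size = 0
        while unstable:
            i, j = unstable.pop()
            if s[i, j] < 1.0:               # stale entry, already relaxed
                continue
            size += 1
            release = s[i, j]
            s[i, j] = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:   # open (dissipative) boundaries
                    s[a, b] += alpha * release
                    if s[a, b] >= 1.0:
                        unstable.append((a, b))
        sizes.append(size)
    return sizes

sizes = ofc_avalanches()
print(len(sizes), max(sizes))   # 2000 drive steps; largest avalanche spans many blocks
```

With alpha < 0.25 the redistribution is non-conservative, which is what drives the model toward the self-organized critical state mentioned in the abstract.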

  12. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of
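The 33-percent, 30-year forecast can be expressed as an equivalent annual rate under a simple Poisson (memoryless) assumption; this is a back-of-the-envelope sketch, not the Working Group's actual time-dependent methodology:

```python
import math

# Convert the 33-percent, 30-year forecast to an equivalent annual rate
# (Poisson sketch; the Working Group's model is more sophisticated):
p30 = 0.33
rate = -math.log(1.0 - p30) / 30.0    # equivalent annual occurrence rate
p_annual = 1.0 - math.exp(-rate)      # probability of an event in any one year
print(round(rate, 4), round(p_annual, 4))   # ≈ 0.0133 per year
```

Even a roughly 1.3-percent annual probability compounds substantially over the lifetime of buildings and infrastructure, which motivates scenario planning of this kind.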

  13. Geological process of the slow earthquakes -A hypothesis from an ancient plate boundary fault rock

    NASA Astrophysics Data System (ADS)

    Kitamura, Y.; Kimura, G.; Kawabata, K.

    2012-12-01

    We present an integrated model of the deformation along the subduction plate boundary from the trench to the seismogenic zone. Years of field-based research in the Shimanto Belt accretionary complex, southwest Japan, have yielded groundbreaking discoveries on plate boundary processes, for example, the first finding of pseudotachylyte in an accretionary prism (Ikesawa et al., 2003). Our aim here is to unveil the geological aspects of slow earthquakes and the related plate boundary processes. The tectonic mélanges studied in the Shimanto Belt are regarded as fossils of a subduction plate boundary fault zone. We traced material from different depths along the subduction channel using samples from on-land outcrops and ocean drilling cores. As a result, a series of progressive deformations down to the down-dip limit of the seismogenic zone was revealed. Detailed geological survey and structural analyses enabled us to separate superimposed deformation events during subduction. The material involved in the plate boundary deformation is mainly an alternation of sand and mud. Because the two lithologies differ in competency and are subjected to a simple-shear stress field, sandstones break apart within the flowing mudstones. We distinguished several stages of these deformations in the sandstones and recognized a progressive increase in deformation intensity with increasing underthrusting. It is also known that the studied Mugi mélange bears pseudotachylyte in its upper bounding fault. Our conclusion illustrates that the subduction channel around the depth of the seismogenic zone forms a thick plate boundary fault zone with a clear segregation in deformation style: fast, episodic slip at the upper boundary fault and slow, continuous deformation within the zone. The former corresponds to plate boundary earthquakes and the latter to slow earthquakes.
We further examined numerically whether this plate boundary fault rock is capable of releasing seismic moment enough to

  14. Numerical study of tsunami generated by multiple submarine slope failures in Resurrection Bay, Alaska, during the MW 9.2 1964 earthquake

    USGS Publications Warehouse

    Suleimani, E.; Hansen, R.; Haeussler, Peter J.

    2009-01-01

    We use a viscous slide model of Jiang and LeBlond (1994) coupled with nonlinear shallow water equations to study tsunami waves in Resurrection Bay, in south-central Alaska. The town of Seward, located at the head of Resurrection Bay, was hit hard by both tectonic and local landslide-generated tsunami waves during the MW 9.2 1964 earthquake with an epicenter located about 150 km northeast of Seward. Recent studies have estimated the total volume of underwater slide material that moved in Resurrection Bay during the earthquake to be about 211 million m3. Resurrection Bay is a glacial fjord with large tidal ranges and sediments accumulating on steep underwater slopes at a high rate. Also, it is located in a seismically active region above the Aleutian megathrust. All these factors make the town vulnerable to locally generated waves produced by underwater slope failures. Therefore it is crucial to assess the tsunami hazard related to local landslide-generated tsunamis in Resurrection Bay in order to conduct comprehensive tsunami inundation mapping at Seward. We use numerical modeling to recreate the landslides and tsunami waves of the 1964 earthquake to test the hypothesis that the local tsunami in Resurrection Bay has been produced by a number of different slope failures. We find that numerical results are in good agreement with the observational data, and the model could be employed to evaluate landslide tsunami hazard in Alaska fjords for the purposes of tsunami hazard mitigation. © Birkhäuser Verlag, Basel 2009.

  15. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    USGS Publications Warehouse

    Hellweg, M.; Boatwright, J.

    1999-01-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.

  16. Laboratory investigations of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Kaiwen

    In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake rupture process is a complicated phenomenon involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, Polycarbonate) held together by uniaxial compression. As a unique element of the experimental design, a controlled exploding-wire technique provides the triggering mechanism for laboratory earthquakes. Three important components of real earthquakes (i.e., a pre-existing fault, tectonic loading, and a triggering mechanism) correspond to, and are simulated by, frictional contact, uniaxial compression, and the exploding-wire technique. Dynamic rupture processes are visualized using the photoelastic method and are recorded with a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity than other existing experimental methods. Using this experimental approach, we have investigated several problems: the dynamics of earthquake faulting along homogeneous faults separating identical materials, along inhomogeneous faults separating materials with different wave speeds, and along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, the sub-Rayleigh to supershear rupture transition, the crack-like to pulse-like rupture transition, the self-healing (Heaton) pulse, and rupture directionality.

  17. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
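The BPT density, distribution, and hazard functions can be written directly in terms of μ and α (the corresponding inverse-Gaussian shape parameter is λ = μ/α²). A sketch with an illustrative mean recurrence time, which also reproduces the quoted ~2/μ hazard level:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean mu and
    aperiodicity alpha; the inverse-Gaussian shape parameter is mu/alpha**2."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha):
    """Inverse-Gaussian CDF written in the (mu, alpha) parameterization."""
    lam = mu / alpha**2
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # std normal CDF
    a = math.sqrt(lam / t)
    return phi(a * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-a * (t / mu + 1.0))

def bpt_hazard(t, mu, alpha):
    """Instantaneous failure rate of survivors, f(t) / (1 - F(t))."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

mu = 100.0                # mean recurrence interval (illustrative, in years)
h = bpt_hazard(mu, mu, 0.5)
print(round(h * mu, 2))   # ≈ 1.97, close to the quoted 2/mu level at t = mu
```

Evaluating the hazard at larger t shows it stays near 2/μ, the quasi-stationary level 1/(2μα²) for α = 0.5.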

  18. Tsunamigenic earthquake simulations using experimentally derived friction laws

    NASA Astrophysics Data System (ADS)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  19. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been in operation and used to issue warnings to schools. In 2015 the system began providing warnings to the public in Taiwan via television and cell phones. Online performance indicates that the average reporting times of the eBEAR system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, eBEAR provides more warning time than the current EEW system (3.2 s and 5.5 s more for inland and offshore earthquakes, respectively). However, offshore earthquakes were usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an earthquake early warning sequence, only a few stations are available; this poor station coverage may explain why offshore earthquakes are difficult to locate accurately. Geiger's inversion procedure for earthquake location requires an initial hypocenter and origin time. For the initial hypocenter, we defined a set of test locations in the offshore area instead of using the average of the locations of the triggered stations. We ran 20 location programs concurrently, each applying Geiger's method from a different pre-defined initial position. We assume that if a program's pre-defined initial position is close to the true earthquake location, its processing time during the Geiger iteration should be shorter than that of the other programs. The results show that using pre-defined trial hypocenters in the inversion procedure improves the location accuracy for offshore earthquakes. In particular, in the initial stage of an EEW alert, locating earthquakes with only 3 to 5 stations may give poor results because of poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
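The multiple-trial-hypocenter idea can be sketched with a toy Geiger (Gauss-Newton) locator on synthetic data; the network geometry, velocity, and selection rule below are our assumptions for illustration, not the eBEAR configuration:

```python
import numpy as np

def locate(stations, t_obs, trial, v=6.0, iters=50, tol=1e-6):
    """Geiger's method (Gauss-Newton) for epicenter and origin time in a
    toy uniform-velocity model. Returns (solution, iterations used)."""
    x = np.array([trial[0], trial[1], 0.0])          # [x_km, y_km, origin_time_s]
    for n in range(1, iters + 1):
        d = np.maximum(np.hypot(stations[:, 0] - x[0], stations[:, 1] - x[1]), 1e-6)
        res = t_obs - (x[2] + d / v)                 # arrival-time residuals
        G = np.column_stack([(x[0] - stations[:, 0]) / (v * d),
                             (x[1] - stations[:, 1]) / (v * d),
                             np.ones(len(d))])       # travel-time partial derivatives
        dx = np.linalg.lstsq(G, res, rcond=None)[0]
        x += dx
        if np.linalg.norm(dx[:2]) < tol:
            break
    return x, n

# Synthetic offshore event at (80, 40) km observed by a one-sided network:
stations = np.array([[0.0, 0.0], [10.0, 30.0], [25.0, 5.0], [5.0, 60.0]])
t_obs = np.hypot(stations[:, 0] - 80.0, stations[:, 1] - 40.0) / 6.0

# Run several pre-defined trial epicenters; keep the fastest-converging run,
# mirroring the heuristic that a good starting point converges quickest:
trials = [(20.0, 20.0), (60.0, 60.0), (100.0, 20.0), (90.0, 50.0)]
sol, n_it = min((locate(stations, t_obs, g) for g in trials), key=lambda r: r[1])
print(np.round(sol[:2], 2))   # recovers an epicenter near (80, 40)
```

Running the trials concurrently, as the abstract describes, turns the choice of starting point from a single guess into a race won by the best-placed trial.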

  20. Stress development in heterogenetic lithosphere: Insights into earthquake processes in the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Zhan, Yan; Hou, Guiting; Kusky, Timothy; Gregg, Patricia M.

    2016-03-01

    The New Madrid Seismic Zone (NMSZ) in the Midwestern United States was the site of several major M 6.8-8 earthquakes in 1811-1812, and remains seismically active. Although this region has been investigated extensively, the ultimate controls on earthquake initiation and the duration of the seismicity remain unclear. In this study, we develop a finite element model for the Central United States to conduct a series of numerical experiments with the goal of determining the impact of heterogeneity in the upper crust, the lower crust, and the mantle on earthquake nucleation and rupture processes. Regional seismic tomography data (CITE) are utilized to infer the viscosity structure of the lithosphere which provide an important input to the numerical models. Results indicate that when differential stresses build in the Central United States, the stresses accumulating beneath the Reelfoot Rift in the NMSZ are highly concentrated, whereas the stresses below the geologically similar Midcontinent Rift System are comparatively low. The numerical observations coincide with the observed distribution of seismicity throughout the region. By comparing the numerical results with three reference models, we argue that an extensive mantle low velocity zone beneath the NMSZ produces differential stress localization in the layers above. Furthermore, the relatively strong crust in this region, exhibited by high seismic velocities, enables the elevated stress to extend to the base of the ancient rift system, reactivating fossil rifting faults and therefore triggering earthquakes. These results show that, if boundary displacements are significant, the NMSZ is able to localize tectonic stresses, which may be released when faults close to failure are triggered by external processes such as melting of the Laurentide ice sheet or rapid river incision.

  1. Links Between Earthquake Characteristics and Subducting Plate Heterogeneity in the 2016 Pedernales Ecuador Earthquake Rupture Zone

    NASA Astrophysics Data System (ADS)

    Bai, L.; Mori, J. J.

    2016-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, located in the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma Arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  2. Prospective Validation of Pre-earthquake Atmospheric Signals and Their Potential for Short–term Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Lee, Lou; Liu, Tiger; Kafatos, Menas

    2015-04-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is: "Are such pre-earthquake atmospheric/ionospheric signals significant, and could they be useful for early warning of large earthquakes?" To check the predictive potential of atmospheric pre-earthquake signals, we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature, and seismicity patterns) that were found to be associated with earthquakes. The scientific rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually called short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies with potential for M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous STIR behavior before all of these events, with false negatives close to zero. The false alarm ratio for false positives is less than 25%. The initial prospective testing for STIR shows a systematic appearance of anomalies 1-30 days in advance of M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies, one to several days prior to the largest earthquakes That feature could be

  3. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow the study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).
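
    The frequency-magnitude relationship the models reproduce is the Gutenberg-Richter law, log10 N(>=M) = a - b*M. A minimal sketch of drawing synthetic magnitudes consistent with it; the b-value of 1.0 and completeness magnitude of 4.0 are illustrative assumptions, not values from the study:

```python
import math
import random

def sample_gr_magnitudes(n, m_min=4.0, b=1.0, seed=42):
    """Draw n magnitudes from a Gutenberg-Richter distribution.

    N(>=M) ~ 10**(a - b*M) implies that magnitudes above the completeness
    level m_min are exponentially distributed with rate beta = b * ln(10);
    inverse-transform sampling gives each draw in one line.
    """
    rng = random.Random(seed)
    beta = b * math.log(10)
    return [m_min - math.log(1.0 - rng.random()) / beta for _ in range(n)]

mags = sample_gr_magnitudes(10000)

# Recover the b-value with the Aki maximum-likelihood estimator:
b_est = 1.0 / (math.log(10) * (sum(mags) / len(mags) - 4.0))
```

    With ~10,000 samples the estimator recovers the input b-value to within a few percent, which is the sanity check such synthetic catalogs are usually put through.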

  4. Combined effects of tectonic and landslide-generated Tsunami Runup at Seward, Alaska during the Mw 9.2 1964 earthquake

    USGS Publications Warehouse

    Suleimani, E.; Nicolsky, D.J.; Haeussler, Peter J.; Hansen, R.

    2011-01-01

    We apply a recently developed and validated numerical model of tsunami propagation and runup to study the inundation of Resurrection Bay and the town of Seward by the 1964 Alaska tsunami. Seward was hit by both tectonic and landslide-generated tsunami waves during the Mw 9.2 1964 megathrust earthquake. The earthquake triggered a series of submarine mass failures around the fjord, which resulted in the sliding of part of the coastline into the water, along with the loss of the port facilities. These submarine mass failures generated local waves in the bay within 5 min of the beginning of strong ground motion. Recent studies estimate the total volume of underwater slide material that moved in Resurrection Bay to be about 211 million m3 (Haeussler et al. in Submarine mass movements and their consequences, pp 269-278, 2007). The first tectonic tsunami wave arrived in Resurrection Bay about 30 min after the main shock and was about the same height as the local landslide-generated waves. Our previous numerical study, which focused only on the local landslide-generated waves in Resurrection Bay, demonstrated that they were produced by a number of different slope failures, and estimated the relative contributions of different submarine slide complexes to the tsunami amplitudes (Suleimani et al. in Pure Appl Geophys 166:131-152, 2009). This work extends the previous study by calculating tsunami inundation in Resurrection Bay caused by the combined impact of landslide-generated waves and the tectonic tsunami, and comparing the composite inundation area with observations. To simulate landslide tsunami runup in Seward, we use a viscous slide model of Jiang and LeBlond (J Phys Oceanogr 24(3):559-572, 1994) coupled with nonlinear shallow water equations. The input data set includes a high-resolution multibeam bathymetry and LIDAR topography grid of Resurrection Bay, and an initial thickness of slide material based on pre- and post-earthquake bathymetry difference maps. For

  5. Incorporation of Multiple Datasets in Earthquake Source Inversions: Case Study for the 2015 Illapel Earthquake

    NASA Astrophysics Data System (ADS)

    Williamson, A.; Cummins, P. R.; Newman, A. V.; Benavente, R. F.

    2016-12-01

    The 2015 Illapel, Chile earthquake was recorded by a wide range of seismic, geodetic and oceanographic instruments. The USGS-assigned magnitude 8.3 earthquake produced a tsunami that was recorded trans-oceanically at both tide gauges and deep-water tsunami pressure sensors. The event also generated surface deformation along the Chilean coast that was recovered through ascending and descending paths of the Sentinel-1A satellite. Additionally, seismic waves were recorded across various global seismic networks. While the determination of the rupture source through seismic and geodetic means is now commonplace, and the Illapel event has been studied extensively in this fashion, the use of tsunami datasets in the inversion process, rather than purely as a forward validation of models, is less common. In this study, we evaluate the use of both near- and far-field tsunami pressure gauges in the source inversion process, examining their contribution to joint seismic and geodetic inversions, as well as the contribution of dispersive and elastic loading parameters to the numerical tsunami propagation. We determine that the inclusion of near-field tsunami pressure gauges assists in resolving the degree of slip in the near-trench environment, where purely geodetic inversions lose most resolvability. The inclusion of a far-field dataset has the potential to add further confidence to tsunami inversions, albeit at a high computational cost. When applied to the Illapel earthquake, this added near-trench resolvability leads to a better estimation of tsunami arrival times at near-field gauges and contributes to understanding the wide variation in tsunamigenic slip present along the highly active Peru-Chile trench.

  6. From Geodesy to Tectonics: Observing Earthquake Processes from Space (Augustus Love Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Parsons, Barry

    2017-04-01

    A suite of powerful satellite-based techniques has been developed over the past two decades allowing us to measure and interpret variations in the deformation around active continental faults occurring in earthquakes, before the earthquakes as strain accumulates, and immediately following them. The techniques include radar interferometry and the measurement of vertical and horizontal surface displacements using very high-resolution (VHR) satellite imagery. They provide near-field measurements of earthquake deformation, facilitating the association with the corresponding active faults and their topographic expression. The techniques also enable pre- and post-seismic deformation to be determined and hence allow the response of the fault and surrounding medium to changes in stress to be investigated. The talk illustrates both the techniques and the applications with examples from recent earthquakes. These include the 2013 Balochistan earthquake, a predominantly strike-slip event, that occurred on the arcuate Hoshab fault in the eastern Makran linking an area of mainly left-lateral shear in the east to one of shortening in the west. The difficulty of reconciling predominantly strike-slip motion with this shortening has led to a wide range of unconventional kinematic and dynamic models. Using pre- and post-seismic VHR satellite imagery, we are able to determine a 3-dimensional deformation field for the earthquake; Sentinel-1 interferometry shows an increase in the rate of creep on a creeping section bounding the northern end of the rupture in response to the earthquake. In addition, we will look at the 1978 Tabas earthquake, for which no measurements of deformation were possible at the time. By combining pre-seismic 'spy' satellite images with modern imagery, and pre-seismic aerial stereo images with post-seismic satellite stereo images, we can determine vertical and horizontal displacements from the earthquake and subsequent post-seismic deformation. These observations

  7. Earthquake scenarios based on lessons from the past

    NASA Astrophysics Data System (ADS)

    Solakov, Dimcho; Simeonova, Stella; Aleksandrova, Irena; Popova, Iliana

    2010-05-01

    Earthquakes are the deadliest of the natural disasters affecting the human environment; indeed, catastrophic earthquakes have marked the whole of human history. Global seismic hazard and vulnerability to earthquakes are increasing steadily as urbanization and development occupy more areas that are prone to the effects of strong earthquakes. Additionally, the uncontrolled growth of mega-cities in highly seismic areas around the world is often associated with the construction of seismically unsafe buildings and infrastructure, and undertaken with insufficient knowledge of the regional seismicity and seismic hazard. The assessment of seismic hazard and the generation of earthquake scenarios are the first link in the prevention chain and the first step in the evaluation of seismic risk. The implementation of earthquake scenarios in policies for seismic risk reduction will allow a focus on the prevention of earthquake effects rather than on intervention following the disasters. The territory of Bulgaria (situated in the eastern part of the Balkan Peninsula) represents a typical example of a high-seismic-risk area. Over the centuries, Bulgaria has experienced strong earthquakes. At the beginning of the 20th century (from 1901 to 1928), five earthquakes with magnitude larger than or equal to MS=7.0 occurred in Bulgaria. However, no such large earthquakes have occurred in Bulgaria since 1928, which may lead non-professionals to underestimate the earthquake risk. The 1986 earthquake of magnitude MS=5.7, which occurred in central northern Bulgaria (near the town of Strazhitsa), is the strongest quake since 1928. Moreover, the seismicity of the neighboring countries, such as Greece, Turkey, the former Yugoslavia and Romania (especially the Vrancea, Romania, intermediate-depth earthquakes), influences the seismic hazard in Bulgaria. In the present study, deterministic scenarios (expressed in seismic intensity) for two Bulgarian cities (Rouse and Plovdiv) are presented. 
The work on

  8. Earthquake Source Mechanics

    NASA Astrophysics Data System (ADS)

    The past two decades have seen substantial progress in our understanding of the nature of the earthquake faulting process, but increasingly, the subject has become an interdisciplinary one. Thus, although the observation of radiated seismic waves remains the primary tool for studying earthquakes (and has been increasingly focused on extracting the physical processes occurring in the "source"), geological studies have also begun to play a more important role in understanding the faulting process. Additionally, defining the physical underpinning for these phenomena has become an important subject in experimental and theoretical rock mechanics. In recognition of this, a Maurice Ewing Symposium was held at Arden House, Harriman, N.Y. (the former home of the great American statesman Averell Harriman), May 20-23, 1985. The purpose of the meeting was to bring together the international community of experimentalists, theoreticians, and observationalists engaged in the study of various aspects of earthquake source mechanics. The conference was attended by more than 60 scientists from nine countries (France, Italy, Japan, Poland, China, the United Kingdom, the United States, the Soviet Union, and the Federal Republic of Germany).

  9. Infrasound Signal Characteristics from Small Earthquakes

    DTIC Science & Technology

    2011-09-01

    INFRASOUND SIGNAL CHARACTERISTICS FROM SMALL EARTHQUAKES Stephen J. Arrowsmith, J. Mark Hale, Relu Burlacu, Kristine L. Pankow, Brian W. Stump...ABSTRACT Physical insight into source properties that contribute to the generation of infrasound signals is critical to understanding the...m, with one element being co-located with a seismic station. One of the goals of this project is the recording of infrasound from earthquakes of

  10. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  11. The Road to Total Earthquake Safety

    NASA Astrophysics Data System (ADS)

    Frohlich, Cliff

    Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes: Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation at the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes: Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.

  12. Mega-earthquakes rupture flat megathrusts.

    PubMed

    Bletery, Quentin; Thomas, Amanda M; Rempel, Alan W; Karlstrom, Leif; Sladen, Anthony; De Barros, Louis

    2016-11-25

    The 2004 Sumatra-Andaman and 2011 Tohoku-Oki earthquakes highlighted gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution: A fast convergence rate and young buoyant lithosphere are not required to produce mega-earthquakes. We calculated the curvature along the major subduction zones of the world, showing that mega-earthquakes preferentially rupture flat (low-curvature) interfaces. A simplified analytic model demonstrates that heterogeneity in shear strength increases with curvature. Shear strength on flat megathrusts is more homogeneous, and hence more likely to be exceeded simultaneously over large areas, than on highly curved faults. Copyright © 2016, American Association for the Advancement of Science.

  13. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

    The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities in northern Honshu. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 40, doi:10.1002, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, doi:10.1093, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374:81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off-Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso

  14. Radiated energy and the rupture process of the Denali fault earthquake sequence of 2002 from broadband teleseismic body waves

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2004-01-01

    Displacement, velocity, and velocity-squared records of P and SH body waves recorded at teleseismic distances are analyzed to determine the rupture characteristics of the Denali fault, Alaska, earthquake of 3 November 2002 (MW 7.9, Me 8.1). Three episodes of rupture can be identified from broadband (~0.1-5.0 Hz) waveforms. The Denali fault earthquake started as a MW 7.3 thrust event. Subsequent right-lateral strike-slip rupture events with centroid depths of 9 km occurred about 22 and 49 sec later. The teleseismic P waves are dominated by energy at intermediate frequencies (0.1-1 Hz) radiated by the thrust event, while the SH waves are dominated by energy at lower frequencies (0.05-0.2 Hz) radiated by the strike-slip events. The strike-slip events exhibit strong directivity in the teleseismic SH waves. Correcting the recorded P-wave acceleration spectra for the effect of the free surface yields an estimate of 2.8 × 10^15 N m for the energy radiated by the thrust event. Correcting the recorded SH-wave acceleration spectra similarly yields an estimate of 3.3 × 10^16 N m for the energy radiated by the two strike-slip events. The average rupture velocity for the strike-slip rupture process is 1.1β-1.2β. The strike-slip events were located 90 and 188 km east of the epicenter. The rupture length over which significant or resolvable energy is radiated is, thus, far shorter than the 340-km fault length over which surface displacements were observed. However, the seismic moment released by these three events, 4 × 10^20 N m, was approximately half the seismic moment determined from very low-frequency analyses of the earthquake. The difference in seismic moment can be reasonably attributed to slip on fault segments that did not radiate significant or coherent seismic energy. These results suggest that very large and great strike-slip earthquakes can generate stress pulses that rapidly produce substantial slip with negligible stress drop and little discernible radiated

  15. Object-oriented microcomputer software for earthquake seismology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeger, G.C.

    1993-02-01

    A suite of graphically interactive applications for the retrieval, editing and modeling of earthquake seismograms has been developed using object-oriented programming methodology and the C++ language. Retriever is an application which allows the user to search for, browse, and extract seismic data from CD-ROMs produced by the National Earthquake Information Center (NEIC). The user can restrict the date, size, location and depth of desired earthquakes and extract selected data into a variety of common seismic file formats. Reformer is an application that allows the user to edit seismic data and data headers, and perform a variety of signal processing operations on that data. Synthesizer is a program for the generation and analysis of teleseismic P and SH synthetic seismograms. The program provides graphical manipulation of source parameters, crustal structures and seismograms, as well as near real-time response in generating synthetics for arbitrary flat-layered crustal structures. All three applications use class libraries developed for implementing geologic and seismic objects and views. Standard seismogram view objects and objects that encapsulate the reading and writing of different seismic data file formats are shared by all three applications. The focal mechanism views in Synthesizer are based on a generic stereonet view object. Interaction with the native graphical user interface is encapsulated in a class library in order to simplify the porting of the software to different operating systems and application programming interfaces. The software was developed on the Apple Macintosh and is being ported to UNIX/X-Window platforms.

  16. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
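
    Contingency-table verification of a binary forecast, as used above, reduces to a few ratios over the four cells; a minimal sketch with hypothetical cell counts (not the Chi-Chi values):

```python
def contingency_scores(a, b, c, d):
    """Scores from a 2x2 contingency table.

    a = hits (forecast yes, event yes), b = false alarms,
    c = misses, d = correct negatives.
    """
    hit_rate = a / (a + c)             # fraction of events that were forecast (ROC y-axis)
    false_alarm_rate = b / (b + d)     # fraction of non-events forecast (ROC x-axis)
    false_alarm_ratio = b / (a + b)    # fraction of forecasts that failed
    return hit_rate, false_alarm_rate, false_alarm_ratio

# Hypothetical counts for a gridded binary forecast of 1000 cells:
H, F, FAR = contingency_scores(a=8, b=20, c=2, d=970)
```

    A forecast has skill when the point (F, H) plots above the ROC diagonal, i.e. the hit rate exceeds the false alarm rate; sweeping the forecast threshold traces out the full ROC curve.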

  17. Report on the Aseismic Slip, Tremor, and Earthquakes Workshop

    USGS Publications Warehouse

    Gomberg, Joan; Roeloffs, Evelyn; Trehu, Anne; Dragert, Herb; Meertens, Charles

    2008-01-01

    This report summarizes the discussions and information presented during the workshop on Aseismic Slip, Tremor, and Earthquakes. Workshop goals included improving coordination among those conducting research related to these phenomena, assessing the implications for earthquake hazard assessment, and identifying ways to capitalize on the education and outreach opportunities these phenomena present. Research activities of focus included making, disseminating, and analyzing relevant measurements; the relationships among tremor, aseismic or 'slow-slip' events, and earthquakes; and discovering the underlying causative physical processes. More than 52 participants contributed to the workshop, held February 25-28, 2008 in Sidney, British Columbia. The workshop was sponsored by the U.S. Geological Survey, the National Science Foundation's EarthScope Program and UNAVCO Consortium, and the Geological Survey of Canada. This report has five parts. In the first part, we integrate the information exchanged at the workshop as it relates to advancing our understanding of earthquake generation and hazard. In the second part, we summarize the ideas and concerns discussed in the workshop working groups on Opportunities for Education and Outreach, Data and Instrumentation, User and Public Needs, and Research Coordination. The third part presents summaries of the oral presentations, grouped as they were at the workshop into the categories of phenomenology, underlying physical processes, and implications for earthquake hazards. The fourth part contains the meeting program, and the fifth part lists the workshop participants. References noted in parentheses refer to the authors of presentations made at the workshop; published references are noted in square brackets and listed in the Reference section. 
Appendix A contains abstracts of all participant presentations and posters, which also have been posted online, along with presentations and author contact

  18. Probabilities of Earthquake Occurrences along the Sumatra-Andaman Subduction Zone

    NASA Astrophysics Data System (ADS)

    Pailoplee, Santi

    2017-03-01

    Earthquake activity along the Sumatra-Andaman Subduction Zone (SASZ) was characterized using the derived frequency-magnitude distribution in terms of (i) the most probable maximum magnitudes, (ii) return periods and (iii) probabilities of earthquake occurrence. The northern segment of the SASZ, along the western coast of Myanmar to southern Nicobar, was found to be capable of generating an earthquake of magnitude 6.1-6.4 Mw in the next 30-50 years, whilst the southern segment, offshore of the northwestern and western parts of Sumatra (defined as a high-hazard region), had short recurrence intervals of 6-12 and 10-30 years for 6.0 and 7.0 Mw magnitude earthquakes, respectively, compared to the other regions. Throughout the area along the SASZ, there is a 70% to almost 100% probability that an earthquake of Mw up to 6.0 will be generated in the next 50 years, whilst the northern segment has less than a 50% chance of a 7.0 Mw earthquake in the next 50 years. Although Rangoon was identified as the lowest-hazard area among the major cities in the vicinity of the SASZ, there is a 90% chance of a 6.0 Mw earthquake there in the next 50 years. Therefore, an effective seismic hazard mitigation plan should be developed.
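
    Return periods translate into occurrence probabilities under the usual Poisson assumption, P = 1 - exp(-t/T); a one-line sketch (the 10-year recurrence below is just the mid-range of the 6-12 yr interval quoted for the southern segment, not a fitted value):

```python
import math

def poisson_prob(return_period_yr, window_yr):
    """Probability of at least one event in window_yr for a Poisson
    process with mean recurrence interval return_period_yr."""
    return 1.0 - math.exp(-window_yr / return_period_yr)

# A 10-year mean recurrence over a 50-year exposure window:
p = poisson_prob(10.0, 50.0)   # about 0.993, i.e. "almost 100%"
```

    The same formula run with the 30-50 yr recurrence of the northern segment gives the much lower 7.0 Mw probabilities quoted above.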

  19. Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan

    NASA Astrophysics Data System (ADS)

    Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H.

    2017-12-01

    An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which we hereafter call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges about 100 km northeast of the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached a peak amplitude (1.5-2.0 cm) of leading upward signals, followed by continuous oscillations [Fukao et al., 2016]. To model the tsunami source, or sea-surface displacement, we perform tsunami waveform simulations and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adopted in the tsunami simulation code JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting the source location around the caldera, we found that the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, the synthetic waves show no initial downward signal such as was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine the sizes and amplitudes of the main uplift and the subsidence ring. As a result, a model with a main uplift of around 1.0 m and a radius of 4 km, surrounded by a ring of small subsidence, shows good agreement between synthetic and observed waveforms. 
The results yield two implications for the deformation process that help us to understand

  20. Accounting for orphaned aftershocks in the earthquake background rate

    USGS Publications Warehouse

    Van Der Elst, Nicholas

    2017-01-01

    Aftershocks often occur within cascades of triggered seismicity in which each generation of aftershocks triggers an additional generation, and so on. The rate of earthquakes in any particular generation follows Omori's law, going approximately as 1/t. This function decays rapidly, but is heavy-tailed, and aftershock sequences may persist for long times at a rate that is difficult to discriminate from background. It is likely that some apparently spontaneous earthquakes in the observational catalogue are orphaned aftershocks of long-past main shocks. To assess the relative proportion of orphaned aftershocks in the apparent background rate, I develop an extension of the ETAS model that explicitly includes the expected contribution of orphaned aftershocks to the apparent background rate. Applying this model to California, I find that the apparent background rate can be almost entirely attributed to orphaned aftershocks, depending on the assumed duration of an aftershock sequence. This implies an earthquake cascade with a branching ratio (the average number of directly triggered aftershocks per main shock) of nearly unity. In physical terms, this implies that very few earthquakes are completely isolated from the perturbing effects of other earthquakes within the fault system. Accounting for orphaned aftershocks in the ETAS model gives more accurate estimates of the true background rate, and more realistic expectations for long-term seismicity patterns.
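
    Omori's 1/t decay and its heavy tail can be made concrete; a sketch with illustrative parameters (the productivity K and offset c below are not fitted values from the study):

```python
import math

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: aftershock rate K / (c + t)**p, in events per day
    at time t days after the main shock."""
    return K / (c + t) ** p

def expected_aftershocks(t1, t2, K=100.0, c=0.1):
    """Expected event count over [t1, t2] days: the integral of the rate
    for the common p = 1 case, K * ln((c + t2) / (c + t1))."""
    return K * math.log((c + t2) / (c + t1))

# The rate collapses quickly, yet years 1-10 still accumulate events:
late = expected_aftershocks(365.0, 3650.0)
```

    Because each decade of elapsed time contributes a comparable count when p is near 1, a long-past main shock keeps emitting a trickle of events indistinguishable from background, which is exactly how aftershocks become "orphaned".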

  2. Triggering of repeating earthquakes in central California

    USGS Publications Warehouse

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
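    The Monte Carlo comparison can be sketched as a permutation test: count the observed number of shortened recurrence intervals following perturbations, then compare it against the distribution obtained by randomizing which intervals were perturbed. The data below are synthetic, and the "shorter than the median" statistic is a simplified stand-in for the authors' definition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical repeater recurrence intervals (days) and flags marking which
# intervals followed a dynamic perturbation above some stress threshold.
intervals = rng.lognormal(mean=5.0, sigma=0.5, size=400)
perturbed = rng.random(400) < 0.25
median_all = np.median(intervals)

# Observed statistic: number of perturbed intervals shorter than the median.
observed = np.sum(intervals[perturbed] < median_all)

# Null distribution: randomize which intervals are labeled "perturbed".
n_trials = 2000
null = np.array([
    np.sum(intervals[rng.permutation(perturbed)] < median_all)
    for _ in range(n_trials)
])
p_value = np.mean(null >= observed)
print(observed, p_value)
```

    A small p-value would indicate more shortened intervals after perturbations than expected by chance; with the random synthetic data above, no signal is expected.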

  3. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  4. e-Science on Earthquake Disaster Mitigation by EUAsiaGrid

    NASA Astrophysics Data System (ADS)

    Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong

    2010-05-01

    Although earthquakes are not predictable at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center, and seismic wave propagation analysis over the gLite e-Science infrastructure, we can develop much better knowledge of the impact and vulnerability associated with potential earthquake hazards. This application also demonstrates the e-Science way to investigate unknown earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails consolidation and sharing of seismology and geology knowledge. Capability building in seismic wave propagation analysis implies the predictability of potential hazard impacts. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines, and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies and to support seismology research by e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among 5 European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of seismograms at any sensor point for a specific epicenter. To ease access to all the services based on user workflows and to retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflow, services, and user communities will be implemented based on typical use cases. In the future, extension of the

  5. Earthquake experience interference effects in a modified Stroop task: an ERP study.

    PubMed

    Wei, Dongtao; Qiu, Jiang; Tu, Shen; Tian, Fang; Su, Yanhua; Luo, Yuejia

    2010-05-03

    The effects of a modified Stroop task on ERPs were investigated in 20 subjects who had experienced the Sichuan earthquake and a matched control group. The ERP data showed that incongruent stimuli elicited a more negative ERP deflection (N300-450) than congruent stimuli between 300 and 450 ms post-stimulus in the earthquake group but not in the control group; the N300-450 might reflect conflict monitoring (the color and meaning information do not match) in the early phase of perception identification, due to the earthquake group's sensitivity to external stimuli. Incongruent stimuli then elicited a more negative ERP deflection than congruent stimuli between 450 and 650 ms post-stimulus in both groups. Dipole source analysis showed that in the control group the N450-650 was mainly generated in the ACC, which might be related to monitoring and conflict resolution. In the earthquake group, however, the N450-650 was generated in the thalamus, which might be involved in inhibiting and compensating for the ACC during the conflict resolution process. 2010 Elsevier Ireland Ltd. All rights reserved.

  6. High-frequency seismic signals associated with glacial earthquakes in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2017-12-01

    Glacial earthquakes are magnitude 5 seismic events generated by iceberg calving at marine-terminating glaciers. They are characterized by teleseismically detectable signals at 35-150 seconds period that arise from the rotation and capsize of gigaton-sized icebergs (e.g., Ekström et al., 2003; Murray et al., 2015). Questions persist regarding the details of this calving process, including whether there are characteristic precursory events such as ice slumps or pervasive crevasse opening before an iceberg rotates away from the glacier. We investigate the high-frequency seismic signals produced before, during, and after glacial earthquakes. We analyze a set of 94 glacial earthquakes that occurred at three of Greenland's major glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier, from 2001 - 2013. We employ data from the GLISN network of broadband seismometers around Greenland and from short-term seismic deployments located close to the glaciers. These data are bandpass filtered to 3 - 10 Hz and trimmed to one-hour windows surrounding known glacial earthquakes. We observe elevated amplitudes of the 3 - 10 Hz signal for 500 - 1500 seconds spanning the time of each glacial earthquake. These durations are long compared to the 60 second glacial-earthquake source. In the majority of cases we observe an increase in the amplitude of the 3 - 10 Hz signal 200 - 600 seconds before the centroid time of the glacial earthquake and sustained high amplitudes for up to 800 seconds after. In some cases, high-amplitude energy in the 3 - 10 Hz band precedes elevated amplitudes in the 35 - 150 s band by 300 seconds. We explore possible causes for these high-frequency signals, and discuss implications for improving understanding of the glacial-earthquake source.
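    The band-limited detection workflow described above can be sketched with SciPy. The 50 Hz sampling rate, the synthetic one-hour trace, and the RMS-envelope detector are illustrative stand-ins for the GLISN data processing, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 50.0  # Hz, assumed sampling rate
# Synthetic 1-hour trace: background noise plus a 5 Hz burst standing in
# for the elevated high-frequency energy around a calving event.
t = np.arange(0, 3600.0, 1.0 / fs)
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(t.size)
burst = np.abs(t - 1800.0) < 300.0  # 600 s window around the "event"
trace[burst] += np.sin(2 * np.pi * 5.0 * t[burst])

# Zero-phase 3-10 Hz bandpass (4th-order Butterworth, SOS form)
sos = butter(4, [3.0, 10.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trace)

# Envelope proxy: RMS in 10 s windows to detect sustained elevated amplitude
win = int(10 * fs)
rms = np.sqrt(np.convolve(filtered**2, np.ones(win) / win, mode="same"))
print(rms[burst].mean(), rms[~burst].mean())  # elevated during the burst
```

    Comparing the RMS envelope inside and outside the window is a crude version of measuring how long the 3-10 Hz amplitude stays elevated around the glacial-earthquake centroid time.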

  7. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results from OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).

  8. Ground Motions Due to Earthquakes on Creeping Faults

    NASA Astrophysics Data System (ADS)

    Harris, R.; Abrahamson, N. A.

    2014-12-01

    We investigate the peak ground motions from the largest well-recorded earthquakes on creeping strike-slip faults in active-tectonic continental regions. Our goal is to evaluate whether the strong ground motions from earthquakes on creeping faults are smaller than the strong ground motions from earthquakes on locked faults. Smaller ground motions might be expected from earthquakes on creeping faults if the fault sections that strongly radiate energy are surrounded by patches of fault that predominantly absorb energy. For our study we used the ground motion data available in the PEER NGA-West2 database, and the ground motion prediction equations that were developed from the PEER NGA-West2 dataset. We analyzed data for the eleven largest well-recorded creeping-fault earthquakes, which ranged in magnitude from M5.0 to 6.5. Our findings are that these earthquakes produced peak ground motions that are statistically indistinguishable from the peak ground motions produced by similar-magnitude earthquakes on locked faults. These findings may be implemented in earthquake hazard estimates for moderate-size earthquakes in creeping-fault regions. Further investigation is necessary to determine if this result will also apply to larger earthquakes on creeping faults. Please also see: Harris, R.A., and N.A. Abrahamson (2014), Strong ground motions generated by earthquakes on creeping faults, Geophysical Research Letters, vol. 41, doi:10.1002/2014GL060228.

  9. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate decaying with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
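    The power-law smoothing step can be sketched as a kernel that spreads each simulated event's rate over the whole test region, decaying with epicentral distance r as (r + d)^(-q). The kernel constants d and q, the grid, and the synthetic "fault" are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def smoothed_rate_map(epicenters, grid_x, grid_y, d=5.0, q=1.5):
    """Spread each simulated event's rate over the whole region with an
    ETAS-style power-law kernel ~ (r + d)**(-q); distances in km."""
    X, Y = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(X)
    for ex, ey in epicenters:
        r = np.hypot(X - ex, Y - ey)
        rate += (r + d) ** (-q)
    return rate / rate.sum()  # normalize to a probability map

# Hypothetical simulated epicenters confined to a "fault" along x = 0
epis = [(0.0, y) for y in np.linspace(-50, 50, 11)]
gx = np.linspace(-100, 100, 81)
gy = np.linspace(-100, 100, 81)
rates = smoothed_rate_map(epis, gx, gy)
print(rates.sum())  # ~1.0 after normalization
```

    Unlike nearest-neighbor mapping, this kernel assigns nonzero rate to off-fault cells, which is what lets the smoothed map score observed off-fault epicenters in the ROC comparison.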

  10. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
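    The step from a W-phase magnitude to a fault model can be illustrated with the standard Mw-M0 relation and a generic length scaling; the scaling constant, the 2:1 aspect ratio, and the rigidity values below are assumptions for illustration, not the relations or the depth-dependent rigidity actually used in the study.

```python
import math

def moment_from_mw(mw):
    """Seismic moment M0 (N*m) from Mw = (2/3)(log10 M0 - 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def fault_model(mw, rigidity):
    """Illustrative fault dimensions from a hypothetical scaling law, then
    average slip from M0 = mu * L * W * D (rigidity mu in Pa)."""
    m0 = moment_from_mw(mw)
    L = 1.3e-5 * m0 ** (1.0 / 3.0)           # km; hypothetical scaling constant
    W = L / 2.0                               # km; assumed 2:1 aspect ratio
    D = m0 / (rigidity * (L * 1e3) * (W * 1e3))  # average slip, m
    return L, W, D

# Lower rigidity near the trench implies larger slip for the same Mw,
# which is the key to modeling tsunami earthquakes like 1992 Nicaragua:
for mu in (1.0e10, 3.0e10):  # Pa; illustrative shallow vs deeper rigidity
    L, W, D = fault_model(7.7, mu)
    print(f"mu={mu:.0e} Pa -> L={L:.0f} km, W={W:.0f} km, slip={D:.1f} m")
```

    This shows why a depth-dependent rigidity matters: the same moment concentrated in low-rigidity shallow material yields several times more slip, and hence larger seafloor displacement and tsunami.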

  11. The effect of compliant prisms on subduction zone earthquakes and tsunamis

    NASA Astrophysics Data System (ADS)

    Lotto, Gabriel C.; Dunham, Eric M.; Jeppson, Tamara N.; Tobin, Harold J.

    2017-01-01

    Earthquakes generate tsunamis by coseismically deforming the seafloor, and that deformation is largely controlled by the shallow rupture process. Therefore, in order to better understand how earthquakes generate tsunamis, one must consider the material structure and frictional properties of the shallowest part of the subduction zone, where ruptures often encounter compliant sedimentary prisms. Compliant prisms have been associated with enhanced shallow slip, seafloor deformation, and tsunami heights, particularly in the context of tsunami earthquakes. To rigorously quantify the role compliant prisms play in generating tsunamis, we perform a series of numerical simulations that directly couple dynamic rupture on a dipping thrust fault to the elastodynamic response of the Earth and the acoustic response of the ocean. Gravity is included in our simulations in the context of a linearized Eulerian description of the ocean, which allows us to model tsunami generation and propagation, including dispersion and related nonhydrostatic effects. Our simulations span a three-dimensional parameter space of prism size, prism compliance, and sub-prism friction - specifically, the rate-and-state parameter b - a that determines velocity-weakening or velocity-strengthening behavior. We find that compliant prisms generally slow rupture velocity and, for larger prisms, generate tsunamis more efficiently than subduction zones without prisms. In most but not all cases, larger, more compliant prisms cause greater amounts of shallow slip and larger tsunamis. Furthermore, shallow friction is also quite important in determining overall slip; increasing sub-prism b - a enhances slip everywhere along the fault. Counterintuitively, we find that in simulations with large prisms and velocity-strengthening friction at the base of the prism, increasing prism compliance reduces rather than enhances shallow slip and tsunami wave height.
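    The role of the rate-and-state parameter b - a can be illustrated with the standard steady-state friction law: velocity-weakening patches (b - a > 0) lose strength as slip rate increases, while velocity-strengthening patches (b - a < 0) gain it. The reference values below are generic laboratory numbers, not those used in the simulations.

```python
import math

def steady_state_friction(v, mu0=0.6, v0=1e-6, a=0.010, b=0.014):
    """Steady-state rate-and-state friction:
    mu_ss = mu0 + (a - b) * ln(v / v0), with slip rate v in m/s."""
    return mu0 + (a - b) * math.log(v / v0)

# Velocity-weakening (b > a): friction drops at coseismic slip rates (~1 m/s)
weakening = steady_state_friction(1.0, a=0.010, b=0.014)
# Velocity-strengthening (b < a): friction rises instead, resisting rupture
strengthening = steady_state_friction(1.0, a=0.014, b=0.010)
print(weakening, strengthening)  # below and above mu0 = 0.6, respectively
```

    In the simulations above, increasing sub-prism b - a pushes the shallow fault toward the weakening regime, which is why it enhances slip along the entire fault.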

  12. Facilitation of intermediate-depth earthquakes by eclogitization-related stresses and H2O

    NASA Astrophysics Data System (ADS)

    Nakajima, J.; Uchida, N.; Hasegawa, A.; Shiina, T.; Hacker, B. R.; Kirby, S. H.

    2012-12-01

    Generation of intermediate-depth earthquakes is an ongoing enigma because high lithostatic pressures render ordinary dry frictional failure unlikely. A popular hypothesis to solve this conundrum is fluid-related embrittlement (e.g., Kirby et al., 1996; Preston et al., 2003), which is known to work even for dehydration reactions with negative volume change (Jung et al., 2004). One consequence of a reaction with negative volume change is the formation of a paired stress field as a result of strain compatibility across the reaction front (Hacker, 1996; Kirby et al., 1996). Here we analyze waveforms of a tiny seismic cluster in the lower crust of the downgoing Pacific plate at a depth of 155 km and propose new evidence in favor of this mechanism: tensional earthquakes lying 1 km above compressional earthquakes, and earthquakes with highly similar waveforms lying on well-defined planes with complementary rupture areas. The tensional stress is interpreted to be caused by the dimensional mismatch between crust transformed to eclogite and underlying untransformed crust, and the earthquakes are interpreted to be facilitated by fluid produced by eclogitization. These observations provide seismic evidence for the dual roles of volume-change-related stresses and fluid-related embrittlement as viable processes for nucleating earthquakes in downgoing oceanic lithosphere.

  13. Validation of the Earthquake Archaeological Effects methodology by studying the San Clemente cemetery damages generated during the Lorca earthquake of 2011

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel; Martín-Velazquez, Silvia; Rodrigez-Pascua, Miguel Angel; Pérez-López, Raul; Silva, Pablo

    2014-05-01

    Intensity scales quantify the damage caused by an earthquake. A newer methodology, however, takes into account not only the damage but also the type of damage, "Earthquake Archaeological Effects" (EAEs), and its orientation (e.g. displaced masonry blocks, conjugated fractures, fallen and oriented columns, impact marks, dipping broken corners, etc.) (Rodriguez-Pascua et al., 2011; Giner-Robles et al., 2012). Its main contribution is that it focuses not only on the amount of damage but also on its orientation, giving information about the ground motion during the earthquake. These orientations and instrumental data can therefore be correlated with historical earthquakes. In 2011 an earthquake of magnitude Mw 5.2 took place in Lorca (SE Spain), causing 9 casualties and 460 million Euros in repairs. The study of the EAEs was carried out throughout the whole city (Giner-Robles et al., 2012). The present study aimed to a.- validate the EAE methodology by applying it to a single small site, specifically the cemetery of San Clemente in Lorca, and b.- constrain the range of orientation for each EAE. This cemetery was selected because its damage orientation data can be correlated with the available instrumental information, and also because of: a.- its wide variety of architectural styles (neogothic, neobaroque, neoarabian), b.- its status as a site of Cultural Interest (BIC), and c.- its different building materials (brick, limestone, marble). The procedure involved two main phases: a.- inventory and identification of damage (EAEs) from pictures, and b.- analysis of the damage orientations. The orientation was calculated for each EAE and plotted in maps. The results show a NW-SE damage orientation. This orientation is consistent with that recorded by the accelerometer in Lorca (N160°E) and with that obtained from the analysis of EAEs for the whole town of Lorca (N130°E) (Giner-Robles et al., 2012). Due to the existence of an accelerometer, we know the orientation of the peak ground acceleration

  14. Seismic rupture process of the 2010 Haiti Earthquake (Mw7.0) inferred from seismic and SAR data

    NASA Astrophysics Data System (ADS)

    Santos, Rúben; Caldeira, Bento; Borges, José; Bezzeghoud, Mourad

    2013-04-01

    On January 12th 2010 at 21:53, the Port-au-Prince, Haiti region was struck by an Mw7 earthquake, the second deadliest in history. The last significant seismic events in the region occurred in November 1751 and June 1770 [1]. Geodetic and geological studies prior to the 2010 earthquake [2] had warned of the potential for destructive seismic events in that region, and this event confirmed those warnings. Some aspects of the source of this earthquake remain contentious: there is no agreement on the rupture mechanism or on which fault generated it [3]. In order to better understand the complexity of this rupture, we combined several techniques and data of different natures. We used teleseismic body-wave and Synthetic Aperture Radar (SAR) data, based on the following methodology: 1) analysis of the rupture process directivity [4] to determine the velocity and direction of rupture; 2) teleseismic body-wave inversion to obtain the spatiotemporal fault slip distribution and a detailed rupture model; 3) near-field surface deformation modeling using the calculated seismic rupture model, compared with the deformation field measured with SAR data from the Advanced Land Observing Satellite - Phased Array L-band SAR (ALOS-PALSAR) sensor. The combined application of seismic and geodetic data reveals a complex rupture that spread during approximately 12 s, mainly from WNW to ESE, with an average velocity of 2.5 km/s on a north-dipping fault plane. Two main asperities are obtained: the first (and largest) occurs within the first ~5 s and extends for approximately 6 km around the hypocenter; the second, which occurs in the remaining 6 s, covers a near-surface rectangular strip about 12 km long by 3 km wide. The first asperity is compatible with left-lateral strike-slip motion with a small reverse component; the mechanism of the second asperity is predominantly reverse. The obtained rupture process allows modeling a coseismic deformation

  15. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  16. Earthquake clouds and physical mechanism of their formation.

    NASA Astrophysics Data System (ADS)

    Doda, L.; Pulinets, S.

    2006-12-01

    The recently developed Lithosphere-Atmosphere-Ionosphere (LAI) coupling model makes it possible to explain some previously unexplained phenomena observed around the time of strong earthquakes. One of them is the formation of clouds of a special shape, usually appearing as thin linear structures. It was discovered that these clouds are associated with active tectonic faults or with tectonic plate borders. They repeat the fault shape but are usually rotated relative to the fault position. Their formation is explained by the anomalous vertical electric field generated in the vicinity of the active tectonic structure due to air ionization produced by increased radon emanation. Through the hydration process, the newly formed ions do not recombine but instead grow with time as water molecules attach to them. Simultaneously they move upward, driven by the anomalous electric field, and drift in the crossed ExB fields. At higher altitudes the large ion clusters become centers of condensation and cloud formation. Examples from recent major earthquakes (Sumatra 2004, Kashmir 2005, Java 2006) are presented. The size of the cloud and its angle of rotation relative to the fault position permit an estimate of the magnitude of the impending earthquake.

  17. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma, following the Prague sequence in 2011. A network of four local broadband seismometers deployed within a day of the Mw 4.4 event, along with six Oklahoma NetQuakes stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near-vertical, NE-SW striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 meters thickness on either side of the best-fitting fault surface. We use an equivalency class algorithm to identify clusters of repeating events, defined as event pairs with median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While individual clusters are tightly confined, commonly within ~80 m x 200 m, aftershocks occur in 3 distinct ~2 km x 2 km patches along the fault. Our analysis suggests that with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and Southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
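    The equivalency-class grouping can be sketched with union-find: any event pair whose median three-component correlation exceeds 0.97 is merged into one repeating-event cluster, and clusters grow transitively. The 5-event correlation matrix below is hypothetical, and this is only one way to implement the equivalency-class idea of Aster & Scott (1993).

```python
def find(parent, i):
    # Path-compressing find for union-find
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_repeaters(corr, threshold=0.97):
    """Group events into equivalency classes: events i and j join the same
    cluster when corr[i][j] > threshold (median 3-component correlation)."""
    n = len(corr)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i][j] > threshold:
                parent[find(parent, i)] = find(parent, j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(parent, i), []).append(i)
    return [sorted(c) for c in clusters.values()]

# Hypothetical symmetric correlation matrix for 5 events
corr = [
    [1.00, 0.98, 0.40, 0.30, 0.20],
    [0.98, 1.00, 0.35, 0.25, 0.15],
    [0.40, 0.35, 1.00, 0.99, 0.98],
    [0.30, 0.25, 0.99, 1.00, 0.97],
    [0.20, 0.15, 0.98, 0.97, 1.00],
]
print(cluster_repeaters(corr))  # doublet [0, 1] and triplet [2, 3, 4]
```

    Note that events 3 and 4 correlate at exactly 0.97 (below the strict threshold) yet still land in one cluster through their shared similarity to event 2, which is the transitive behavior an equivalency-class algorithm provides.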

  18. Determination of source process and the tsunami simulation of the 2013 Santa Cruz earthquake

    NASA Astrophysics Data System (ADS)

    Park, S. C.; Lee, J. W.; Park, E.; Kim, S.

    2014-12-01

    In order to understand the characteristics of large tsunamigenic earthquakes, we analyzed the earthquake source process of the 2013 Santa Cruz earthquake and simulated the resulting tsunami. We first estimated a fault length of about 200 km using the 3-day aftershock distribution and a source duration of about 110 seconds using the duration of high-frequency energy radiation (Hara, 2007). Moment magnitude was estimated to be 8.0 using the formula of Hara (2007). From the results of 200 km fault length and 110 seconds source duration, we used an initial rupture velocity of 1.8 km/s for teleseismic waveform inversions. Teleseismic body-wave inversion was carried out using the inversion package of Kikuchi and Kanamori (1991). Teleseismic P waveform data from 14 stations were used, and a band-pass filter of 0.005 ~ 1 Hz was applied. Our best-fit solution indicated that the earthquake occurred on a northwesterly striking (strike = 305°), shallowly dipping (dip = 13°) fault plane. Focal depth was determined to be 23 km, indicating a shallow event. A moment magnitude of 7.8 was obtained, somewhat smaller than the value estimated above and that of a previous study (Lay et al., 2013). A large-slip area was seen around the hypocenter. Using the slip distribution obtained by teleseismic waveform inversion, we calculated the surface deformations using the formulas of Okada (1985) and used them as the initial sea-surface displacement of the tsunami. Tsunami simulation was then carried out using the Cornell Multi-grid Coupled Tsunami Model (COMCOT) code and 1 min-grid topographic data for water depth from the General Bathymetric Chart of the Oceans (GEBCO). According to the tsunami simulation, most of the tsunami energy propagated to the southwest and northeast, perpendicular to the fault strike. DART buoy data were used to verify our simulation. In the presentation, we will discuss more details on the results of source process and tsunami simulation and compare them

  19. A Coupled Earthquake-Tsunami Simulation Framework Applied to the Sumatra 2004 Event

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Bader, Michael; Behrens, Jörn; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Wollherr, Stephanie; van Zelst, Iris

    2017-04-01

    Large earthquakes along subduction zone interfaces have generated destructive tsunamis near Chile in 1960, Sumatra in 2004, and northeast Japan in 2011. In order to better understand these extreme events, we have developed tools for physics-based, coupled earthquake-tsunami simulations. This simulation framework is applied to the 2004 Indian Ocean M 9.1-9.3 earthquake and tsunami, a devastating event that resulted in the loss of more than 230,000 lives. The earthquake rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh with the software SeisSol. Advantages of this approach include accurate representation of complex fault and sea floor geometries and a parallelized and efficient workflow in high-performance computing environments. Accurate and efficient representation of the tsunami evolution and inundation at the coast is achieved with an adaptive mesh discretizing the shallow water equations with a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme. With the application of the framework to this historic event, we aim to better understand the involved mechanisms between the dynamic earthquake within the earth's crust, the resulting tsunami wave within the ocean, and the final coastal inundation process. Earthquake model results are constrained by GPS surface displacements and tsunami model results are compared with buoy and inundation data. This research is part of the ASCETE Project, "Advanced Simulation of Coupled Earthquake and Tsunami Events", funded by the Volkswagen Foundation.

  20. A coccidioidomycosis outbreak following the Northridge, Calif, earthquake

    USGS Publications Warehouse

    Schneider, E.; Hajjeh, R.A.; Spiegel, R.A.; Jibson, R.W.; Harp, E.L.; Marshall, G.A.; Gunn, R.A.; McNeil, M.M.; Pinner, R.W.; Baron, R.C.; Burger, R.C.; Hutwagner, L.C.; Crump, C.; Kaufman, L.; Reef, S.E.; Feldman, G.M.; Pappagianis, D.; Werner, S.B.

    1997-01-01

    Objective. - To describe a coccidioidomycosis outbreak in Ventura County following the January 1994 earthquake, centered in Northridge, Calif, and to identify factors that increased the risk for acquiring acute coccidioidomycosis infection. Design. - Epidemic investigation, population-based skin test survey, and case-control study. Setting. - Ventura County, California. Results. - In Ventura County, between January 24 and March 15, 1994, 203 outbreak-associated coccidioidomycosis cases, including 3 fatalities, were identified (attack rate [AR], 30 cases per 100 000 population). The majority of cases (56%) and the highest AR (114 per 100 000 population) occurred in the town of Simi Valley, a community located at the base of a mountain range that experienced numerous landslides associated with the earthquake. Disease onset for cases peaked 2 weeks after the earthquake. The AR was 2.8 times greater for persons 40 years of age and older than for younger persons (relative risk, 2.8; 95% confidence interval [CI], 2.1-3.7; P<.001). Environmental data indicated that large dust clouds, generated by landslides following the earthquake and strong aftershocks in the Santa Susana Mountains north of Simi Valley, were dispersed into nearby valleys by northeast winds. Simi Valley case-control study data indicated that physically being in a dust cloud (odds ratio, 3.0; 95% CI, 1.6-5.4; P<.001) and time spent in a dust cloud (P<.001) significantly increased the risk for being diagnosed with acute coccidioidomycosis. Conclusions. - Both the location and timing of cases strongly suggest that the coccidioidomycosis outbreak in Ventura County was caused when arthrospores were spread in dust clouds generated by the earthquake. This is the first report of a coccidioidomycosis outbreak following an earthquake. Public and physician awareness, especially in endemic areas following similar dust cloud-generating events, may result in prevention and early recognition of acute
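
The relative-risk arithmetic reported above (RR 2.8; 95% CI 2.1-3.7) follows the standard Katz log-based interval. A sketch with hypothetical case counts chosen only to illustrate the calculation; they are not the study's raw data:

```python
import math

def relative_risk(a, n1, b, n0, z=1.96):
    """Relative risk of exposed vs unexposed with a Katz log-based 95% CI.

    a: cases among n1 exposed persons; b: cases among n0 unexposed persons.
    """
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)   # standard error of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts: 120 cases among 200,000 older residents vs 83 cases
# among 390,000 younger residents reproduce an RR near 2.8
rr, lo, hi = relative_risk(120, 200_000, 83, 390_000)
```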

  1. Social Media as Seismic Networks for the Earthquake Damage Assessment

    NASA Astrophysics Data System (ADS)

    Meletti, C.; Cresci, S.; La Polla, M. N.; Marchetti, A.; Tesconi, M.

    2014-12-01

    The growing popularity of online platforms based on user-generated content is gradually creating a digital world that mirrors the physical world. In the paradigm of crowdsensing, the crowd becomes a distributed network of sensors that allows us to understand real-life events at a quasi-real-time rate. The SoS-Social Sensing project [http://socialsensing.it/] exploits opportunistic crowdsensing, involving users in the sensing process in a minimal way, for social media emergency management, in order to obtain a fast but still reliable estimate of the scale of the emergency. We first designed and implemented a decision support system for the detection and damage assessment of earthquakes. Our system exploits the messages shared in real time on Twitter. In the detection phase, data mining and natural language processing techniques are first adopted to select meaningful and comprehensive sets of tweets. We then applied a burst detection algorithm to promptly identify outbreaking seismic events. Using georeferenced tweets and reported locality names, a rough epicentral determination is also possible. The results, compared to official Italian INGV reports, show that the system is able to detect, within seconds, events of magnitude around 3.5 with a precision of 75% and a recall of 81.82%. We then focused our attention on the damage assessment phase and investigated the possibility of exploiting social media data to estimate earthquake intensity. We designed a set of predictive linear models and evaluated their ability to map the intensity of worldwide earthquakes. The models build on a dataset of almost 5 million tweets used to compute our earthquake features, and on data for more than 7,000 globally distributed earthquakes, acquired semi-automatically from the USGS and serving as ground truth. We extracted 45 distinct features falling into four categories: profile, tweet, time, and linguistic. We ran diagnostic tests and
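
The reported precision and recall follow directly from the confusion counts of the detector. A sketch with hypothetical counts chosen to reproduce the 75% / 81.82% figures above:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP): fraction of detections that are real events.
    Recall = TP/(TP+FN): fraction of real events that are detected."""
    return tp / (tp + fp), tp / (tp + fn)

# Invented counts: 27 true detections, 9 false alarms, 6 missed events
# give precision 0.75 and recall ~0.8182
p, r = precision_recall(27, 9, 6)
```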

  2. Make an Earthquake: Ground Shaking!

    ERIC Educational Resources Information Center

    Savasci, Funda

    2011-01-01

    The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…

  3. Broadband Rupture Process of the 2001 Kunlun Fault (Mw 7.8) Earthquake

    NASA Astrophysics Data System (ADS)

    Antolik, M.; Abercrombie, R.; Ekstrom, G.

    2003-04-01

    We model the source process of the 14 November 2001 Kunlun fault earthquake using broadband body waves from the Global Digital Seismographic Network (P, SH) and both point-source and distributed-slip techniques. The point-source technique is a non-linear iterative inversion that solves for focal mechanism, moment rate function, depth, and rupture directivity. The P waves reveal a complex rupture process for the first 30 s, with smooth unilateral rupture toward the east along the Kunlun fault accounting for the remainder of the 120-s-long rupture. The focal mechanism obtained for the main portion of the rupture (strike = 96°, dip = 83°, rake = -8°) is consistent with both the Harvard CMT solution and observations of the surface rupture. The seismic moment is 5.29×10^20 N·m and the average rupture velocity is ~3.5 km/s. However, the initial portion of the P waves cannot be fit at all with this mechanism. A strong pulse visible in the first 20 s can only be matched with an oblique-slip subevent (Mw ~ 6.8-7.0) involving a substantial normal-faulting component, but the nodal planes of this mechanism are not well constrained. The first-motion polarities of the P waves clearly require a strike-slip mechanism with an orientation similar to that of the Kunlun fault. Field observations of the surface rupture (Xu et al., SRL, 73, No. 6) reveal a 26-km-long strike-slip rupture at the far western end (90.5°E) with a 45-km-long gap and extensional step-over between this rupture and the main Kunlun fault rupture. We hypothesize that the initial fault break occurred on this segment, with release of the normal-faulting energy as a continuous rupture through the extensional step enabling transfer of the slip to the main Kunlun fault. This process is similar to that which occurred during the 2002 Denali fault (Mw 7.9) earthquake sequence, except that 11 days elapsed between the October 23 (Mw 6.7) foreshock and the initial break of the Denali earthquake along a thrust fault.
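
The conversion from the inverted seismic moment to moment magnitude follows the standard Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m. A one-line check against the abstract's values (5.29×10^20 N·m gives Mw ≈ 7.75, consistent with the reported 7.8-class event):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Kanamori moment magnitude: Mw = (2/3) * (log10(M0) - 9.1), M0 in N·m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

mw = moment_magnitude(5.29e20)   # seismic moment from the abstract
```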

  4. Rupture Speed and Dynamic Frictional Processes for the 1995 ML4.1 Shacheng, Hebei, China, Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Liu, B.; Shi, B.

    2010-12-01

    An ML 4.1 earthquake occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9 ≤ ML ≤ 4.0 (Chen et al., 2005). According to ZÚÑIGA (1993), for the 1995 ML 4.1 Shacheng earthquake sequence the main shock corresponds to undershoot, while the aftershocks should match overshoot. This suggests that the dynamic rupture processes of the overshoot aftershocks could be related to crack (sub-fault) extension inside the main fault. After the main shock, local stress concentration inside the fault may play a dominant role in sustaining crack extension. Therefore, the main energy dissipation mechanism should be the aftershock fracturing process associated with crack extension. Following the variational principle of Kanamori and Rivera (2004), we derived a minimum radiation energy criterion (MREC): (E_S/M_0')_min ≥ [3M_0/(ε π μ R³)](v/β)³, where E_S and M_0' are the radiated energy and seismic moment obtained from observation, μ is the rigidity of the fault, ε = M_0'/M_0 with M_0 the seismic moment, R is the rupture size on the fault, and v and β are the rupture speed and S-wave speed. From mode II and mode III crack extension models, we attempt to derive a uniform expression for calculating the seismic radiation efficiency η_G, which can be used to restrict the upper-limit efficiency and avoid the non-physical situation of a radiation efficiency larger than 1. In the ML 4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of the S-wave speed β according to the MREC, close to the Rayleigh-wave speed, while the rupture speeds of the remaining 28 aftershocks ranged from 0.05β to 0.55β. Using the mode II and III crack extension models, the main-shock rupture speed was 0.9β and most aftershock rupture speeds were no more than 0.35β. In addition, for most aftershocks the radiation efficiencies were less than 10%, indicating a low seismic efficiency, whereas the radiation efficiency

  5. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.
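
A standard statistic for quantifying whether small events correlate with a periodic (e.g. tidal) stress, not necessarily the exact method used by the authors, is the Schuster test: each event contributes a unit phasor at its tidal phase, and the p-value exp(-R²/N) measures how unlikely the resulting phase clustering is under a uniform null. A sketch with synthetic event times:

```python
import math
import random

def schuster_p(event_times, period):
    """Schuster test p-value for the null that event phases are uniform.

    Sums a unit phasor per event at phase 2*pi*t/period; p = exp(-R^2/N).
    Small p means events cluster at a preferred phase of the periodic stress.
    """
    phases = [2 * math.pi * (t % period) / period for t in event_times]
    c = sum(math.cos(ph) for ph in phases)
    s = sum(math.sin(ph) for ph in phases)
    n = len(event_times)
    return math.exp(-(c * c + s * s) / n)

random.seed(0)
period = 12.42 * 3600.0                          # M2 tidal period, seconds
uniform_times = [random.uniform(0, 30 * 86400) for _ in range(500)]
p_uniform = schuster_p(uniform_times, period)    # no correlation: p not small
```

Tracking such a p-value through time is one way to operationalize the "intervals of significant correlation" that the model above associates with elevated fault susceptibility.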

  6. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
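
The core idea of combining diverse, independent predictions can be illustrated with precision-weighted Gaussian fusion, a simplification of the article's full Bayesian treatment; the means and variances below are hypothetical:

```python
def combine_gaussians(means, variances):
    """Fuse independent Gaussian predictions of the same quantity by
    precision (inverse-variance) weighting; returns fused mean and variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / wsum
    return mean, 1.0 / wsum

# Invented example: point-source, finite-fault, and direct ground-motion
# algorithms each predict log10(PGA) at a site with different uncertainty
mean, var = combine_gaussians([-1.2, -0.9, -1.0], [0.09, 0.04, 0.16])
```

The fused variance is always smaller than the best single prediction's, which is the quantitative sense in which combining EEW algorithms improves the forecast.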

  7. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  8. A fragmentation model of earthquake-like behavior in internet access activity

    NASA Astrophysics Data System (ADS)

    Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.

    We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and of the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. The dataset is thus similar to earthquake data except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is general enough to apply to any physical or human dynamics that is limited by finite resources such as space, energy, time, or opportunity.
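
Gutenberg-Richter-type exponents like those invoked here are usually estimated with the Aki maximum-likelihood b-value formula rather than by fitting a histogram. A sketch on a synthetic exponential catalog (the catalog and parameters are invented):

```python
import math
import random

def b_value_mle(mags, m_min, dm=0.0):
    """Aki (1965) maximum-likelihood b-value; dm is the magnitude bin width
    (Utsu's correction), zero for continuous magnitudes."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Synthetic catalog drawn from a Gutenberg-Richter (exponential) distribution
random.seed(1)
b_true, m_min = 1.0, 2.0
beta = b_true * math.log(10.0)
mags = [m_min + random.expovariate(beta) for _ in range(20000)]
b_hat = b_value_mle(mags, m_min)   # should recover b close to 1.0
```

Dual-scaled distributions like those in the abstract would show up as two distinct slopes when this estimator is applied above different lower cutoffs.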

  9. Defining "Acceptable Risk" for Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Tucker, B.

    2001-05-01

    The greatest and most rapidly growing earthquake risk for mortality is in developing countries. Furthermore, the earthquake risk management actions of the last 50 years have reduced the average lethality of earthquakes in earthquake-threatened industrialized countries (separate from the trend of increasing fiscal cost of earthquakes there). Despite these clear trends, every new earthquake in developing countries is described in the media as a "wake-up" call announcing the risk these countries face. GeoHazards International (GHI) works at both the community and the policy levels to try to reduce earthquake risk. GHI reduces death and injury by helping vulnerable communities recognize their risk and the methods to manage it: raising awareness of the risk, building local institutions to manage that risk, and strengthening schools to protect and train the community's future generations. At the policy level, GHI, in collaboration with research partners, is examining whether "acceptance" of these large risks by people in these countries and by international aid and development organizations explains the lack of activity in reducing them. The goal of this pilot project, the Global Earthquake Safety Initiative (GESI), is to develop and evaluate a means of measuring the risk and the effectiveness of risk mitigation actions in the world's largest, most vulnerable cities: in short, to develop an earthquake risk index. One application of this index is to compare the risk and the risk mitigation efforts of comparable cities. By this means, Lima, for example, can compare the risk of its citizens dying due to earthquakes with that of citizens in Santiago and Guayaquil. The authorities of Delhi and Islamabad can compare the relative earthquake risk of their school children. The index can be used to measure the effectiveness of alternative mitigation projects, to set goals for mitigation projects, and to plot progress toward meeting those goals.
The preliminary

  10. USGS Tweet Earthquake Dispatch (@USGSted): Using Twitter for Earthquake Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Liu, S. B.; Bouchard, B.; Bowden, D. C.; Guy, M.; Earle, P.

    2012-12-01

    The U.S. Geological Survey (USGS) is investigating how online social networking services like Twitter—a microblogging service for sending and reading public text-based messages of up to 140 characters—can augment USGS earthquake response products and the delivery of hazard information. The USGS Tweet Earthquake Dispatch (TED) system is using Twitter not only to broadcast seismically-verified earthquake alerts via the @USGSted and @USGSbigquakes Twitter accounts, but also to rapidly detect widely felt seismic events through a real-time detection system. The detector algorithm scans for significant increases in tweets containing the word "earthquake" or its equivalent in other languages and sends internal alerts with the detection time, tweet text, and the location of the city where most of the tweets originated. It has been running in real-time for 7 months and finds, on average, two or three felt events per day with a false detection rate of less than 10%. The detections have reasonable coverage of populated areas globally. The number of detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The main benefit of the tweet-based detections is speed, with most detections occurring between 19 seconds and 2 minutes from the origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. Going beyond the initial detection, the USGS is developing data mining techniques to continuously archive and analyze relevant tweets for additional details about the detected events. The information generated about an event is displayed on a web-based map designed using HTML5 for the mobile environment, which can be valuable when the user is not able to access a
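
The detector described above scans for significant increases in the tweet rate. A toy version that flags intervals whose count exceeds a multiple of the trailing average (the window, threshold, and counts below are invented for illustration, not TED's actual parameters):

```python
from collections import deque

def burst_detector(counts, window=60, threshold=5.0, min_rate=1.0):
    """Flag a burst when the per-interval tweet count exceeds `threshold`
    times the trailing-window average (a crude STA/LTA-style detector)."""
    history = deque(maxlen=window)
    bursts = []
    for i, c in enumerate(counts):
        if history:
            baseline = max(sum(history) / len(history), min_rate)
        else:
            baseline = min_rate
        if c > threshold * baseline:
            bursts.append(i)
        history.append(c)
    return bursts

# Quiet background of ~1 "earthquake" tweet per interval, then a felt event
counts = [1, 0, 2, 1, 1, 0, 1, 1, 40, 55, 30, 8, 3, 1]
onsets = burst_detector(counts)   # indices where the rate spikes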

  11. Earthquake prediction: the interaction of public policy and science.

    PubMed Central

    Jones, L M

    1996-01-01

    Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, causally related to the physical processes governing failure on a fault, to improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than merely that it is causally related to the earthquake. PMID:11607656

  12. Post-earthquake building safety assessments for the Canterbury Earthquakes

    USGS Publications Warehouse

    Marshall, J.; Barnes, J.; Gould, N.; Jaiswal, K.; Lizundia, B.; Swanson, David A.; Turner, F.

    2012-01-01

    This paper explores the post-earthquake building assessment program that was utilized in Christchurch, New Zealand following the Canterbury sequence of earthquakes, which began with the magnitude (Mw) 7.1 Darfield event in September 2010. The aftershocks, or triggered events, two of which exceeded Mw 6.0, continued with events in February and June 2011 that caused the greatest amount of damage. More than 70,000 building safety assessments were completed following the February event. The timeline and assessment procedures are discussed, including the use of rapid response teams, the selection of indicator buildings to monitor damage following aftershocks, risk assessments for demolition of red-tagged buildings, the use of task forces to address management of the heavily damaged downtown area, and the process of demolition. From the post-event safety assessment program conducted throughout the Canterbury sequence, many important lessons can be learned that will benefit future responses to natural hazards with the potential to damage structures.

  13. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

    It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to general audiences who may not be well versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using static "snapshot" images to represent earthquake data is becoming obsolete; the favored medium for explaining complex wave propagation inside the solid Earth and interactions among earthquakes is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sounds, the latter known as 'audification' or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, and firecrackers. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake, ranging from near-field strong-motion recordings that demonstrate the complex source process of the mainshock and early aftershocks to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and research frontiers in earthquake seismology (tremor, dynamic triggering, etc.).
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not
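
At its core, audification amounts to writing the seismic samples out at a much higher sample rate, so that sub-audible seismic frequencies shift into the audible band. A minimal sketch using only the Python standard library (the package described in the abstract uses MATLAB and QuickTime Pro; the trace, rates, and file name here are invented):

```python
import math
import struct
import wave

def audify(samples, seismic_rate_hz, speedup, path):
    """Write samples as a 16-bit mono WAV played `speedup` times faster,
    shifting seismic frequencies up by the same factor into the audible band."""
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                       # 16-bit samples
        w.setframerate(int(seismic_rate_hz * speedup))
        frames = b"".join(
            struct.pack("<h", int(32767 * s / peak)) for s in samples)
        w.writeframes(frames)

# Synthetic 1 Hz "seismogram" sampled at 100 Hz; a 400x speedup turns the
# 1 Hz ground motion into a 400 Hz audible tone
trace = [math.sin(2 * math.pi * 1.0 * t / 100.0) for t in range(3000)]
audify(trace, 100.0, 400, "quake.wav")
```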

  14. Comparison of aftershock sequences between 1975 Haicheng earthquake and 1976 Tangshan earthquake

    NASA Astrophysics Data System (ADS)

    Liu, B.

    2017-12-01

    The 1975 ML 7.3 Haicheng earthquake and the 1976 ML 7.8 Tangshan earthquake occurred in the same tectonic unit. There are significant differences in the spatial-temporal distribution, number of aftershocks, and duration of the aftershock sequences that followed these two main shocks. Aftershocks can be triggered by the change in regional seismicity caused by the Coulomb stress perturbation from the main shock. Based on the rate- and state-dependent friction law, we quantitatively estimated the possible aftershock duration in combination with seismicity data, and compared the results from different approaches. The results indicate that the aftershock duration for the Tangshan main shock is several times that for the Haicheng main shock. This can be explained by the significant relationship between aftershock duration and the earthquake nucleation history, normal stress, and shear-stress loading rate on the fault. The most obvious difference in nucleation history between the two main shocks is the foreshocks: the 1975 Haicheng earthquake had clear and long-lasting foreshocks, while the 1976 Tangshan earthquake did not. Abundant foreshocks may therefore indicate a long and active nucleation process that changed (weakened) the rocks in the source region, leading to a shorter aftershock sequence because stress in weak rocks decays faster.
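
In the rate-and-state framework invoked above, a common rule of thumb (Dieterich, 1994) sets the characteristic aftershock duration to t_a ≈ Aσ / τ̇, the constitutive parameter times normal stress divided by the background stressing rate. A sketch with hypothetical parameter values, not the study's estimates:

```python
def aftershock_duration_years(a_sigma_mpa, stressing_rate_mpa_per_yr):
    """Dieterich (1994) characteristic aftershock duration t_a = A*sigma / tau_dot.

    a_sigma_mpa: rate-state parameter A times effective normal stress (MPa);
    stressing_rate_mpa_per_yr: background tectonic shear stressing rate.
    """
    return a_sigma_mpa / stressing_rate_mpa_per_yr

# Invented values: A*sigma = 0.5 MPa over a 0.005 MPa/yr loading rate
# gives a ~100-year aftershock duration
t_a = aftershock_duration_years(0.5, 0.005)
```

The formula makes the abstract's argument concrete: a weakened (lower Aσ) source region or a faster loading rate both shorten the predicted aftershock sequence.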

  15. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed Central

    Kanamori, H

    1996-01-01

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding. PMID:11607657

  16. Initiation process of earthquakes and its implications for seismic hazard reduction strategy.

    PubMed

    Kanamori, H

    1996-04-30

    For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.

  17. The surface latent heat flux anomalies related to major earthquake

    NASA Astrophysics Data System (ADS)

    Jing, Feng; Shen, Xuhui; Kang, Chunli; Xiong, Pan; Hong, Shunying

    2011-12-01

    SLHF (surface latent heat flux) is an atmospheric parameter describing the heat released by phase changes; it depends on meteorological parameters such as surface temperature, relative humidity, and wind speed, and differs sharply between the ocean surface and the land surface. Recently, many studies of SLHF anomalies prior to earthquakes have been carried out. It has been shown that enhanced energy exchange between the coastal surface and the atmosphere prior to earthquakes can increase the rate of water-heat exchange, which leads to an obvious increase in SLHF. In this paper, two earthquakes in 2010 (the Haiti earthquake and the earthquake southwest of Sumatra, Indonesia) are analyzed using SLHF data and an STD (standard deviation) threshold method. The results show that SLHF anomalies may occur for interplate or intraplate earthquakes and for coastal or island earthquakes. The SLHF anomalies usually appear 5-6 days prior to an earthquake and disappear quickly after the event. This anomaly evolution to some extent reflects the dynamic energy changes of earthquake preparation: weak-strong-weak-disappeared.
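
An STD threshold method of the kind described reduces to flagging days whose SLHF exceeds the climatological mean by k standard deviations. A sketch with invented daily values and climatology, for illustration only:

```python
def std_anomalies(series, climatology_mean, climatology_std, k=2.0):
    """Return indices of days whose value exceeds mean + k * std,
    i.e. candidate pre-earthquake SLHF anomalies."""
    return [i for i, v in enumerate(series)
            if v > climatology_mean + k * climatology_std]

# Hypothetical daily SLHF values (W/m^2) against a climatology of
# mean 120, std 15: the 2-sigma threshold is 150
slhf = [118, 125, 121, 163, 171, 130, 119]
days = std_anomalies(slhf, 120.0, 15.0)
```

In practice the climatology would be computed per pixel and per calendar day from multi-year background data, so that seasonal variation is not flagged as anomalous.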

  18. Discussion of New Approaches to Medium-Short-Term Earthquake Forecast in Practice of The Earthquake Prediction in Yunnan

    NASA Astrophysics Data System (ADS)

    Hong, F.

    2017-12-01

    After reviewing years of practice of earthquake prediction in the Yunnan area, it is widely considered that fixed-point earthquake precursory anomalies mainly reflect field information. An increase in the amplitude and number of precursory anomalies can help determine the occurrence time of earthquakes; however, it is difficult to obtain a spatial relationship between earthquakes and precursory anomalies, so we can hardly predict the locations of earthquakes using precursory anomalies alone. Past practice has shown that seismic activity is superior to precursory anomalies for predicting earthquake locations, since increased seismicity was observed before 80% of M ≥ 6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies have also turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the M 6.5 Ludian earthquake in 2014 were shorter and smaller than those derived from the region of enhanced seismicity. Based on this past work, the author believes that the medium-short-term earthquake forecast level, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deployed observation arrays that capture the dynamic process of physical property changes in the enhancement region of medium to small earthquakes.

  19. Estimation of Surface Deformation due to Pasni Earthquake Using SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Ali, M.; Shahzad, M. I.; Nazeer, M.; Kazmi, J. H.

    2018-04-01

    Earthquakes cause ground deformation in sedimented surface areas like Pasni, and such earthquake-induced ground displacements can seriously damage building structures. On 7 February 2017, a magnitude 6.3 earthquake struck near Pasni. We successfully mapped widespread ground displacements for the Pasni earthquake using InSAR analysis of Sentinel-1 C-band data. Maps of the surface displacement field resulting from the earthquake were generated from Sentinel-1 Interferometric Wide Swath data acquired between 9 December 2016 and 28 February 2017. The interferogram reveals the area of deformation, and the comparison of interferometric vertical displacement over different time periods is treated as evidence of earthquake-induced deformation. Profile graphs across the interferogram were created to estimate the range and trend of vertical displacement. Pasni lies in an area strongly affected by the earthquake. The major surface deformation areas are divided into zones according to the significance of the deformation. The average displacement in Pasni is estimated at about 250 mm. Most of the Pasni area was uplifted by the earthquake, with maximum uplift of about 1200 mm; some areas subsided, such as those near the shoreline, with maximum subsidence estimated at about 1500 mm. Pasni already faces increasing seawater intrusion under the prevailing climatic change, and land deformation due to a strong earthquake can augment its vulnerability.
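    For orientation, the relation between unwrapped interferometric phase and line-of-sight displacement that underlies such displacement maps can be sketched as below; the C-band wavelength value and the sign convention are assumptions of this illustration, not details from the study.

```python
import numpy as np

# Convert unwrapped interferometric phase (radians) to line-of-sight
# (LOS) displacement for a C-band sensor such as Sentinel-1
# (wavelength ~5.55 cm). Sign conventions vary by processor; here
# positive means motion toward the satellite.
WAVELENGTH_M = 0.0555

def phase_to_los(phase_rad):
    return -(WAVELENGTH_M / (4 * np.pi)) * phase_rad

# One full fringe (2*pi of phase) corresponds to half a wavelength
# of LOS motion.
fringe_mm = abs(phase_to_los(2 * np.pi)) * 1000
print(round(fringe_mm, 1))  # about 27.8 mm per fringe
```

    Vertical displacement is then recovered from the LOS value using the local incidence angle, which is why ascending/descending geometry matters when interpreting uplift versus subsidence.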

  20. Earthquakes in Fiordland, Southern Chile: Initiation and Development of a Magmatic Process

    NASA Astrophysics Data System (ADS)

    Barrientos, S.; Service, N. S.

    2007-05-01

    Several geophysical monitoring efforts are being conducted in Chile with the objective of disaster mitigation. Long-term, permanent monitoring along the country has resulted in the recognition and delineation of new seismogenic sources. Here we report on the seismo-volcanic crisis currently taking place in the region close to the triple junction (Nazca, Antarctica and South America plates) in southern Chile, at around latitude 45°S. On January 22, 2007, an intensity V-VI (MMI) earthquake shook the cities of Puerto Aysén, Puerto Chacabuco and Coyhaique. This magnitude 5 event was the first of a series of earthquakes that have taken place in the region for nearly a month and a half (until the end of February, when this abstract was written). The closest station to the source area, part of the GEOSCOPE network and located in Coyhaique about 80 km from the epicenters, reveals seismic activity about 3 hours before the first event. Immediately after the first event, more than 20 events per hour were detected and recorded by this station, a rate that decreased with time except in the intervals following larger events. More than six events with magnitude 5 or greater have been recorded. Five seismic stations were installed around the epicentral area between 27 and 29 January and are currently operational; after processing some of the recorded events, a sixth station was installed as close as possible to the source of the seismic activity. Preliminary analysis of the recorded seismicity reveals a concentration of hypocenters at 5 to 10 km depth along an 8-km, NNE-SSW-trending vertical plane crossing the Aysén fiord. Harmonic tremor has also been detected. This seismic activity is interpreted as the result of a magmatic process in progress, which will most likely culminate in the generation of a new underwater volcanic edifice. Because the seismic activity fully extends across the Ays

  1. Performance of Irikura's Recipe Rupture Model Generator in Earthquake Ground Motion Simulations as Implemented in the Graves and Pitarka Hybrid Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitarka, A.

    We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motion for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level between the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s bandwidth that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.

  2. How fault geometry controls earthquake magnitude

    NASA Astrophysics Data System (ADS)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  3. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.

  4. A new approach to the analysis of the strongest, upper-magnitude earthquakes in subduction zones and the catastrophic tsunamis they induce, with examples from catastrophic events of the 21st century

    NASA Astrophysics Data System (ADS)

    Garagash, I. A.; Lobkovsky, L. I.; Mazova, R. Kh.

    2012-04-01

    The generation of the strongest earthquakes, with magnitudes at the upper limit (near or above 9), and of the catastrophic tsunamis they induce is studied on the basis of a new approach to the generation process occurring in subduction zones. The need for such studies is underlined by the recent catastrophic underwater earthquake of 11 March 2011 close to the northeast Japan coastline and the catastrophic tsunami that followed, which led to vast loss of life and colossal damage in Japan. Of essential importance here is that the strength of the earthquake (magnitude M = 9), which induced a tsunami with runup heights on the beach of up to 10 meters, was unexpected by all specialists. Our model of the interaction of oceanic lithosphere with island-arc blocks in subduction zones, which accounts for incomplete stress release during the seismic process and the further accumulation of elastic energy, can explain the occurrence of the strongest mega-earthquakes, such as the catastrophic earthquake with its source in the Japan deep-sea trench in March 2011. In our model, wide possibilities for numerical simulation of the dynamic behavior of an underwater seismic source are provided by a kinematic source model, together with a numerical program developed by the authors for calculating tsunami generation by dynamic and kinematic seismic sources. The method makes it possible to take into account the contribution of residual tectonic stress in lithospheric plates, which increases the earthquake energy and is usually not taken into account to date.

  5. The use of the Finite Element method for the earthquakes modelling in different geodynamic environments

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Tizzani, Pietro

    2016-04-01

    Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquakes, incorporating structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical-mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy), and the Mw 7.8 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse, and thrust faulting, respectively. To simulate the kinematics of the analyzed natural phenomena we assume linear elastic behavior of the involved media under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero). The finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), the deformation model reaches a stable equilibrium; (ii) the co-seismic stage simulates the released stresses through a distributed slip along the active fault. To constrain the model solutions, we exploit DInSAR deformation velocity maps retrieved from satellite data acquired by sensors of older and newer generations, such as ENVISAT, RADARSAT-2 and SENTINEL-1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models and then compare them with the recorded ground deformation fields in order to select the best boundary settings and parameters. 
Finally

  6. A combined source and site-effect study of ground motions generated by an earthquake in Port au Prince (Haiti)

    NASA Astrophysics Data System (ADS)

    St Fleur, Sadrac; Courboulex, Francoise; Bertrand, Etienne; Deschamps, Anne; Mercier de Lepinay, Bernard; Prepetit, Claude; Hough, Suzan

    2013-04-01

    We present the preliminary results of a study aimed at understanding how combinations of source and site effects can generate extreme ground motions in the city of Port au Prince. We used recordings of several tens of earthquakes with magnitude larger than 3.0 at 3 to 14 stations from three networks: 3 stations of the Canadian broadband network (RNCan), 2 stations of the French educational network (SaE), and 9 stations of the accelerometric network (Bureau des Mines et de l'Energie of Port au Prince and U.S. Geological Survey). To estimate site effects beneath each station, we applied classical spectral ratio methods: the H/V (horizontal/vertical) method was first used to select a reference station, which was then used in a site/reference method. Because a true reference station was not available, we used successively station HCEA, then station PAPH, then an average of 3 stations. In the frequency range studied (0.5-20 Hz), we found site-to-reference ratios of 3 to 8. However, these values show large variability depending on the earthquake recordings, which may indicate that the observed amplification depends not only on the local site effect but also on the source. We then used the same earthquake recordings as Empirical Green's Functions (EGF) to simulate the ground motions generated by a virtual earthquake, using a stochastic EGF summation method. We simulated a magnitude Mw=6.8 earthquake using, in turn, 2 smaller events that occurred on the Leogane fault as EGFs. The results obtained using the two events are surprisingly different: with the first EGF we obtained almost the same ground motion values at each station in Port au Prince, whereas with the second EGF the results show large differences. 
The large variability obtained in the results indicates that a particular combination of site and
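    The H/V spectral ratio step mentioned above can be illustrated with a toy implementation; the smoothing scheme and the synthetic 2 Hz resonance below are assumptions for demonstration, not details from the study.

```python
import numpy as np

def hv_ratio(north, east, vertical, dt, smooth=5):
    """Toy horizontal-to-vertical (H/V, Nakamura-style) spectral ratio
    of a 3-component record, used to pick resonance frequencies."""
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, dt)
    # amplitude spectra of each component
    N = np.abs(np.fft.rfft(north))
    E = np.abs(np.fft.rfft(east))
    V = np.abs(np.fft.rfft(vertical))
    H = np.sqrt((N**2 + E**2) / 2.0)      # quadratic-mean horizontal
    kernel = np.ones(smooth) / smooth     # light spectral smoothing
    H = np.convolve(H, kernel, mode="same")
    V = np.convolve(V, kernel, mode="same")
    return freqs, H / np.maximum(V, 1e-12)

# Synthetic record: noise plus a 2 Hz resonance on the horizontals only.
rng = np.random.default_rng(1)
dt, n = 0.01, 4096
t = np.arange(n) * dt
north = rng.standard_normal(n) + 8 * np.sin(2 * np.pi * 2.0 * t)
east = rng.standard_normal(n) + 8 * np.cos(2 * np.pi * 2.0 * t)
vert = rng.standard_normal(n)
freqs, hv = hv_ratio(north, east, vert, dt)
peak_f = freqs[np.argmax(hv)]
print(peak_f)  # close to 2 Hz
```

    A site/reference ratio works the same way, except the denominator is the spectrum of the same component recorded at the reference station.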

  7. Surface sediment remobilization triggered by earthquakes in the Nankai forearc region

    NASA Astrophysics Data System (ADS)

    Okutsu, N.; Ashi, J.; Yamaguchi, A.; Irino, T.; Ikehara, K.; Kanamatsu, T.; Suganuma, Y.; Murayama, M.

    2017-12-01

    Submarine landslides triggered by earthquakes generate turbidity currents (e.g. Piper et al., 1988; 1999). Several recent studies report that remobilization of surface sediment triggered by earthquakes can also generate turbidity currents, but studies proposing this process are still limited (e.g. Ikehara et al., 2016; McHugh et al., 2016; Moernaut et al., 2017). The purpose of this study is to examine such sedimentary processes in the Nankai forearc region, SW Japan, using sedimentary records. We collected a 46 cm-long multiple core (MC01) and a 6.7 m-long piston core (PC03) during the R/V Shinsei Maru KS-14-8 cruise from a small, ENE-WSW elongated basin between the accretionary prism and the forearc basin off Kumano. The basin has no direct sediment supply from a river-submarine canyon system and is confined, capturing almost all sediment supplied from outside. Core samples are mainly composed of silty clay or very fine sand. Cs-137 measurements on core MC01 show constantly high values in the upper 17 cm and no detection below. Moreover, because the sedimentary structure is similar to the fine-grained turbidites described by Stow and Shanmugam (1980), we interpret the upper 17 cm of MC01 as a muddy turbidite; grain size distribution and magnetic susceptibility also support this interpretation. Rapid sediment deposition after 1950 is inferred, and the most likely trigger is the 2004 off-Kii Peninsula earthquakes (Mw = 6.6-7.4). From the extent of the provenance area, estimated by paleocurrent analysis and the bathymetric map, and the thickness of the turbidite layer, we conclude that the surface 1 cm of slope sediments may have been remobilized by the 2004 earthquakes. Muddy turbidites are also identified in core PC03. The radiocarbon age gap of 170 years obtained

  8. Rupture process of the 2016 Mw 7.8 Ecuador earthquake from joint inversion of InSAR data and teleseismic P waveforms

    NASA Astrophysics Data System (ADS)

    Yi, Lei; Xu, Caijun; Wen, Yangmao; Zhang, Xu; Jiang, Guoyan

    2018-01-01

    The 2016 Ecuador earthquake ruptured the Ecuador-Colombia subduction interface, where several historic megathrust earthquakes have occurred. To determine a detailed rupture model, Interferometric Synthetic Aperture Radar (InSAR) images and teleseismic data sets were objectively weighted using a modified Akaike's Bayesian Information Criterion (ABIC) method and jointly inverted for the rupture process of the earthquake. In modeling the rupture process, a constrained waveform-length method was used instead of the traditional, subjectively selected waveform length, since the lengths of the inverted waveforms were strictly constrained by the rupture velocity and rise time (the slip duration). The optimal rupture velocity and rise time were estimated by grid search to be 2.0 km/s and 20 s, respectively. The inverted model shows that the event is dominated by thrust movement, with a released moment of 5.75 × 10^20 N m (Mw 7.77). The slip distribution extends southward along the Ecuadorian coastline in an elongated stripe at depths between 10 and 25 km. The slip model is composed of two asperities with slip exceeding 4 m. The source time function lasts approximately 80 s and separates into two segments corresponding to the two asperities. The small slip in the updip section of the fault plane produced only small tsunami waves, consistent with observations near the coast. We suggest that the rupture zone of the 2016 earthquake likely does not overlap that of the 1942 earthquake.
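    The grid search over rupture velocity and rise time can be sketched schematically as below; the waveform-misfit computation is replaced by a synthetic quadratic surface with its minimum planted at the preferred values quoted above, so the grids and misfit function are purely illustrative.

```python
import numpy as np

# Grid search over rupture velocity and rise time. A real application
# would evaluate a waveform-fit residual at each grid point; here a
# synthetic quadratic "misfit" stands in for it.
vr_grid = np.arange(1.5, 3.01, 0.1)   # rupture velocity, km/s
rise_grid = np.arange(5, 41, 5)       # rise time, s

def misfit(vr, rise):
    # stand-in residual with minimum at (2.0 km/s, 20 s)
    return (vr - 2.0) ** 2 + 0.01 * (rise - 20.0) ** 2

surface = np.array([[misfit(v, r) for r in rise_grid] for v in vr_grid])
i, j = np.unravel_index(np.argmin(surface), surface.shape)
print(vr_grid[i], rise_grid[j])  # minimum recovered at ~2.0 km/s, 20 s
```

    In the actual inversion each grid point also fixes the admissible waveform length, which is what makes the search "constrained" rather than subjective.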

  9. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence, which represent the frame of reference for a variety of statistical mechanical models, ranging from spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and to provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among the different physical mechanisms responsible for earthquake triggering. 
In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different
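    A minimal sketch of the branching-process occurrence rate discussed above, in the ETAS spirit: a constant background rate plus an Omori-Utsu contribution from each past event, weighted by its magnitude. All parameter values below are illustrative assumptions.

```python
import numpy as np

# ETAS-style conditional intensity: lambda(t) = mu + sum over past
# events of K * exp(alpha*(m - m0)) / (t - t_i + c)^p.
def etas_rate(t, event_times, event_mags, mu=0.1, K=0.05,
              c=0.01, p=1.1, alpha=1.0, m0=3.0):
    event_times = np.asarray(event_times, dtype=float)
    event_mags = np.asarray(event_mags, dtype=float)
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) \
           / (t - event_times[past] + c) ** p
    return mu + trig.sum()

# Rate just after a M5 shock at t=0 dwarfs the background rate,
# then decays roughly as 1/t**p back toward mu.
r_early = etas_rate(0.1, [0.0], [5.0])
r_late = etas_rate(10.0, [0.0], [5.0])
print(r_early > r_late > 0.1)  # True: aftershock burst, then Omori decay
```

    Forecasting with such a model amounts to integrating this rate forward in time, with the sum extended over the whole catalog.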

  10. Seismicity and seismic hazard in Sabah, East Malaysia from earthquake and geodetic data

    NASA Astrophysics Data System (ADS)

    Gilligan, A.; Rawlinson, N.; Tongkul, F.; Stephenson, R.

    2017-12-01

    While the levels of seismicity are low in most of Malaysia, the state of Sabah in northern Borneo has moderate levels of seismicity. Notable earthquakes in the region include the 1976 M6.2 Lahad Datu earthquake and the 2015 M6 Ranau earthquake. The recent Ranau earthquake resulted in the deaths of 18 people on Mt Kinabalu, an estimated 100 million RM (about US$23 million) of damage to buildings, roads, and infrastructure from shaking, and flooding, reduced water quality, and damage to farms from landslides. Over the last 40 years the population of Sabah has increased to over four times what it was in 1976, yet seismic hazard in Sabah remains poorly understood. Using seismic and geodetic data we hope to better quantify the hazards posed by earthquakes in Sabah, and thus help to minimize risk. In order to do this we need to know the locations of earthquakes, the types of earthquakes that occur, and the faults that are generating them. We use data from 15 MetMalaysia seismic stations currently operating in Sabah to develop a region-specific velocity model from receiver functions and a pre-existing surface wave model. We use this new velocity model to (re)locate earthquakes that occurred in Sabah from 2005-2016, including a large number of aftershocks from the 2015 Ranau earthquake. We use a probabilistic nonlinear earthquake location program to locate the earthquakes and then refine their relative locations using a double difference method. The recorded waveforms are further used to obtain moment tensor solutions for these earthquakes. Earthquake locations and moment tensor solutions are then compared with the locations of faults throughout Sabah. Faults are identified from high-resolution IFSAR images and subsequent fieldwork, with a particular focus on the Lahad Datu and Ranau areas. Used together, these seismic and geodetic data can help us to develop a new seismic hazard model for Sabah, as well as aiding in the delivery of outreach activities regarding seismic hazard

  11. Signatures of the seismic source in EMD-based characterization of the 1994 Northridge, California, earthquake recordings

    USGS Publications Warehouse

    Zhang, R.R.; Ma, S.; Hartzell, S.

    2003-01-01

    In this article we use empirical mode decomposition (EMD) to characterize the 1994 Northridge, California, earthquake records and investigate the signatures carried over from the source rupture process. Comparison of the current study results with existing source inverse solutions that use traditional data processing suggests that the EMD-based characterization contains information that sheds light on aspects of the earthquake rupture process. We first summarize the fundamentals of the EMD and illustrate its features through the analysis of a hypothetical and a real record. Typically, the Northridge strong-motion records are decomposed into eight or nine intrinsic mode functions (IMFs), each of which emphasizes a different oscillation mode with different amplitude and frequency content. The first IMF has the highest-frequency content; frequency content decreases with increasing IMF number. With the aid of a finite-fault inversion method, we then examine aspects of the source of the 1994 Northridge earthquake that are reflected in the second to fifth IMF components. This study shows that the second IMF is predominantly wave motion generated near the hypocenter, with high-frequency content that might be related to a large stress drop associated with the initiation of the earthquake. As one progresses from the second to the fifth IMF component, there is a general migration of the source region away from the hypocenter with associated longer-period signals as the rupture propagates. This study suggests that the different IMF components carry information on the earthquake rupture process that is expressed in their different frequency bands.
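    The sifting idea behind EMD can be illustrated with a dependency-free toy that uses linear rather than the customary cubic-spline envelopes; it shows how the first IMF isolates the highest-frequency oscillation, but it is a sketch of the principle, not the algorithm used in the study.

```python
import numpy as np

# One sifting loop: repeatedly subtract the mean of the upper and lower
# extrema envelopes until what remains oscillates about zero. Linear
# envelopes (np.interp) stand in for the usual cubic splines.
def sift_first_imf(x, n_sift=10):
    h = np.asarray(x, dtype=float).copy()
    idx = np.arange(len(h))
    for _ in range(n_sift):
        maxi = np.flatnonzero((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])) + 1
        mini = np.flatnonzero((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:])) + 1
        if len(maxi) < 2 or len(mini) < 2:
            break
        upper = np.interp(idx, maxi, h[maxi])
        lower = np.interp(idx, mini, h[mini])
        h = h - (upper + lower) / 2.0
    return h

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 40 * t)        # high-frequency mode
slow = 0.8 * np.sin(2 * np.pi * 3 * t)   # low-frequency mode
imf1 = sift_first_imf(fast + slow)
# The first IMF should track the fast component closely.
corr = np.corrcoef(imf1, fast)[0, 1]
print(round(corr, 3))
```

    Subtracting the extracted IMF and repeating the loop yields the successively lower-frequency IMFs described in the abstract.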

  12. Earthquake activity along the Himalayan orogenic belt

    NASA Astrophysics Data System (ADS)

    Bai, L.; Mori, J. J.

    2017-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences located in the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  13. The effects of earthquake measurement concepts and magnitude anchoring on individuals' perceptions of earthquake risk

    USGS Publications Warehouse

    Celsi, R.; Wolfinbarger, M.; Wald, D.

    2005-01-01

    The purpose of this research is to explore earthquake risk perceptions in California. Specifically, we examine the risk beliefs, feelings, and experiences of lay, professional, and expert individuals to explore how risk is perceived and how risk perceptions are formed relative to earthquakes. Our results indicate that individuals tend to perceptually underestimate the degree that earthquake (EQ) events may affect them. This occurs in large part because individuals' personal felt experience of EQ events is generally overestimated relative to experienced magnitudes. An important finding is that individuals engage in a process of "cognitive anchoring" of their felt EQ experience toward the reported earthquake magnitude. The anchoring effect is moderated by the degree that individuals comprehend EQ magnitude measurement and EQ attenuation. Overall, the results of this research provide us with a deeper understanding of EQ risk perceptions, especially as they relate to individuals' understanding of EQ measurement and attenuation concepts. © 2005, Earthquake Engineering Research Institute.

  14. Is earthquake rate in south Iceland modified by seasonal loading?

    NASA Astrophysics Data System (ADS)

    Jonsson, S.; Aoki, Y.; Drouin, V.

    2017-12-01

    Several temporally varying processes have the potential to modify the rate of earthquakes in the south Iceland seismic zone, one of the two most active seismic zones in Iceland. These include solid earth tides, seasonal meteorological effects and the influence of passing weather systems, and variations in snow and glacier loads. In this study we investigate the influence these processes may have on crustal stresses and stressing rates in the seismic zone and assess whether they appear to influence the earthquake rate. While historical earthquakes in south Iceland have preferentially occurred in early summer, this tendency is less clear for small earthquakes. The local earthquake catalogue (going back to 1991, magnitude of completeness < 1.0) does contain more earthquakes in summer than in winter. However, this pattern is strongly influenced by the aftershock sequences of the largest M6+ earthquakes, which occurred in June 2000 and May 2008. Neither standard Reasenberg declustering nor more involved, model-independent stochastic declustering algorithms are capable of fully eliminating the aftershocks from the catalogue. We therefore inspected the catalogue for the period before 2000, and it shows limited seasonal tendency in earthquake occurrence. Our preliminary results show no clear correlation between earthquake rates and the short-term stressing variations induced by solid earth tides or passing storms. Seasonal meteorological effects also appear to be too small to influence the earthquake activity. Snow and glacier load variations induce significant vertical motions in the area, with peak loading occurring in spring (April-May) and maximum unloading in fall (Sept.-Oct.). The early-summer occurrence of historical earthquakes therefore correlates with early unloading rather than with the peak unloading or unloading rate, which appears to indicate limited influence of this seasonal process on the earthquake activity.
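    A standard way to test a catalogue for the kind of seasonal modulation discussed above is the Schuster random-walk test; the sketch below applies it to synthetic event times, and both catalogues and the significance interpretation are illustrative.

```python
import numpy as np

# Schuster test: each event contributes a unit phasor at its seasonal
# phase; the p-value exp(-R^2/N) is small only if phases cluster,
# i.e. if events prefer a particular time of year.
def schuster_p(day_of_year, period=365.25):
    phase = 2 * np.pi * np.asarray(day_of_year, dtype=float) / period
    R2 = np.sum(np.cos(phase)) ** 2 + np.sum(np.sin(phase)) ** 2
    return float(np.exp(-R2 / len(phase)))

rng = np.random.default_rng(2)
uniform_days = rng.uniform(0, 365.25, 500)       # no seasonality
summer_days = rng.normal(160, 30, 500) % 365.25  # June clustering
p_uni = schuster_p(uniform_days)
p_sum = schuster_p(summer_days)
print(p_sum < 1e-6 and p_sum < p_uni)  # True: clustering is detected
```

    Undeclustered aftershock sequences violate the test's independence assumption, which is exactly why the declustering step discussed above matters.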

  15. Assessment of Vegetation Destruction Due to Wenchuan Earthquake and Its Recovery Process Using MODIS Data

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Xiao, X.

    2015-12-01

    With high temporal resolution and wide coverage, MODIS data are particularly useful for assessing vegetation destruction and recovery over large areas. In this study, MOD13Q1 data from the growing season (Mar. to Nov.) are used to calculate the maximum NDVI (NDVImax) of each year. We calculate each pixel's mean and standard deviation of the NDVImax values for the 8 years before the earthquake; if the pixel's 2008 NDVImax is more than two standard deviations below that mean, the pixel is flagged as vegetation-destructed. For each such pixel, candidate similar pixels of the same vegetation type are selected within 0.5 degrees of latitude, 100 meters of altitude, and 3 degrees of slope, and the NDVImax difference between the destructed pixel and each candidate is calculated. The 5 similar pixels with the smallest NDVImax differences over the 8 pre-earthquake years are selected as reference pixels. The mean NDVImax of these reference pixels after the earthquake is then used as the criterion for assessing the vegetation recovery process.
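    The per-pixel destruction rule described above reduces to a simple two-standard-deviation test; the NDVImax values below are invented for illustration.

```python
import numpy as np

# Flag a pixel as vegetation-destructed if its post-earthquake NDVImax
# falls more than two standard deviations below its pre-earthquake mean.
pre_quake = np.array([  # NDVImax for the 8 pre-earthquake years, one pixel
    0.82, 0.80, 0.83, 0.81, 0.84, 0.80, 0.82, 0.83])
ndvi_2008 = 0.45        # NDVImax in the earthquake year

mean, std = pre_quake.mean(), pre_quake.std(ddof=1)
destructed = ndvi_2008 < mean - 2 * std
print(destructed)  # True: 0.45 is far below the pre-quake baseline
```

    The recovery assessment then compares the flagged pixel's later NDVImax values against the mean of its undisturbed reference pixels year by year.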

  16. Global Positioning System data collection, processing, and analysis conducted by the U.S. Geological Survey Earthquake Hazards Program

    USGS Publications Warehouse

    Murray, Jessica R.; Svarc, Jerry L.

    2017-01-01

    The U.S. Geological Survey Earthquake Science Center collects and processes Global Positioning System (GPS) data throughout the western United States to measure crustal deformation related to earthquakes and tectonic processes as part of a long‐term program of research and monitoring. Here, we outline data collection procedures and present the GPS dataset built through repeated temporary deployments since 1992. This dataset consists of observations at ∼1950 locations. In addition, this article details our data processing and analysis procedures, which consist of the following. We process the raw data collected through temporary deployments, in addition to data from continuously operating western U.S. GPS stations operated by multiple agencies, using the GIPSY software package to obtain position time series. Subsequently, we align the positions to a common reference frame, determine the optimal parameters for a temporally correlated noise model, and apply this noise model when carrying out time‐series analysis to derive deformation measures, including constant interseismic velocities, coseismic offsets, and transient postseismic motion.
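    The time-series analysis step described above, deriving a constant interseismic velocity and a coseismic offset from a position time series, can be sketched as a least-squares fit of a line plus a step function. This is a generic illustration under simplified assumptions (no postseismic transient, no correlated-noise model), not the Earthquake Science Center's actual processing chain.

```python
def fit_velocity_and_offset(t, pos, t_eq):
    """Least-squares fit of pos(t) = a + v*t + c*H(t - t_eq): intercept a,
    interseismic velocity v, and coseismic offset c at earthquake time t_eq."""
    rows = [(1.0, ti, 1.0 if ti >= t_eq else 0.0) for ti in t]
    n = 3
    # Normal equations (G^T G) m = G^T d, solved by Gaussian elimination
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * d for r, d in zip(rows, pos)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] -= f * b[col]
    m = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        m[r] = (b[r] - sum(A[r][j] * m[j] for j in range(r + 1, n))) / A[r][r]
    a, v, c = m
    return v, c

# Synthetic positions: 10 mm/yr secular velocity plus a 50 mm coseismic step
t = [i / 100.0 for i in range(101)]
pos = [10.0 * ti + (50.0 if ti >= 0.5 else 0.0) for ti in t]
v, c = fit_velocity_and_offset(t, pos, 0.5)
```

    Real GPS analysis extends this design matrix with postseismic decay terms and weights the fit with a temporally correlated noise model, as the abstract describes.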

  17. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  18. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  19. Revisiting the 1872 Owens Valley, California, Earthquake

    USGS Publications Warehouse

    Hough, S.E.; Hutton, K.

    2008-01-01

    The 26 March 1872 Owens Valley earthquake is among the largest historical earthquakes in California. The felt area and maximum fault displacements have long been regarded as comparable to, if not greater than, those of the great San Andreas fault earthquakes of 1857 and 1906, but mapped surface ruptures of the latter two events were 2-3 times longer than that inferred for the 1872 rupture. The preferred magnitude estimate of the Owens Valley earthquake has thus been 7.4, based largely on the geological evidence. Reinterpreting macroseismic accounts of the Owens Valley earthquake, we infer generally lower intensity values than those estimated in earlier studies. Nonetheless, as recognized in the early twentieth century, the effects of this earthquake were still generally more dramatic at regional distances than the macroseismic effects from the 1906 earthquake, with light damage to masonry buildings at (nearest-fault) distances as large as 400 km. Macroseismic observations thus suggest a magnitude greater than that of the 1906 San Francisco earthquake, which appears to be at odds with geological observations. However, while the mapped rupture length of the Owens Valley earthquake is relatively low, the average slip was high. The surface rupture was also complex and extended over multiple fault segments. It was first mapped in detail over a century after the earthquake occurred, and recent evidence suggests it might have been longer than earlier studies indicated. Our preferred magnitude estimate is Mw 7.8-7.9, values that we show are consistent with the geological observations. The results of our study suggest that either the Owens Valley earthquake was larger than the 1906 San Francisco earthquake or that, by virtue of source properties and/or propagation effects, it produced systematically higher ground motions at regional distances. 
The latter possibility implies that some large earthquakes in California will generate significantly larger ground motions than San

  20. A comparison study of 2006 Java earthquake and other Tsunami earthquakes

    NASA Astrophysics Data System (ADS)

    Ji, C.; Shao, G.

    2006-12-01

    We revise the slip process of the July 17, 2006 Java earthquake by jointly inverting teleseismic body waves, long-period surface waves, and the broadband records at Christmas Island (XMIS), which is 220 km from the hypocenter and so far the closest observation of a tsunami earthquake. Compared with previous studies, our approach accounts for the amplitude variation of surface waves with source depth as well as the contribution of the ScS phase, which usually has amplitudes comparable to those of the direct S phase for such low-angle thrust earthquakes. The fault dip angle is also refined using Love waves observed along the fault strike direction. Our results indicate that the 2006 event initiated at a depth of around 12 km and ruptured unilaterally southeastward for 150 s at a speed of 1.0 km/s. The revised fault dip is only about 6 degrees, smaller than the Harvard CMT value (10.5 degrees) but consistent with that of the 1994 Java earthquake. The smaller fault dip results in a larger moment magnitude (Mw = 7.9) for a PREM earth, though this depends on the velocity structure used. After verifying the result with a 3D SEM forward simulation, we compare the inverted model with the revised slip models of the 1994 Java and 1992 Nicaragua earthquakes, derived using the same wavelet-based finite-fault inversion methodology.

  1. Earthquakes and faults in southern California (1970-2010)

    USGS Publications Warehouse

    Sleeter, Benjamin M.; Calzia, James P.; Walter, Stephen R.

    2012-01-01

    The map depicts both active and inactive faults and earthquakes of magnitude 1.5 to 7.3 in southern California (1970–2010). The bathymetry was generated from digital files from the California Department of Fish and Game, Marine Region, Coastal Bathymetry Project. Elevation data are from the U.S. Geological Survey National Elevation Database. The Landsat satellite image is from fourteen Landsat 5 Thematic Mapper scenes collected between 2009 and 2010. Fault data are reproduced with permission from 2006 California Geological Survey and U.S. Geological Survey data. The earthquake data are from the U.S. Geological Survey National Earthquake Information Center.

  2. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites, and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Test Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an

  3. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithospheric body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large, given the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties of the repetitive earthquakes (located close to each other) recorded in the Vrancea seismogenic subcrustal region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green's function deconvolution in order to constrain the source parameters as tightly as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling the tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.
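    The spectral ratio technique mentioned above rests on a simple idea: for two co-located events recorded at the same station, the path and site terms in the observed spectra are shared and cancel on division, leaving the ratio of the two source spectra. A minimal sketch with invented toy spectra (not Vrancea data):

```python
def spectral_ratio(spec_large, spec_small):
    """Element-wise ratio of two amplitude spectra from co-located events;
    shared path/site terms cancel, leaving the source-spectrum ratio."""
    return [a / b for a, b in zip(spec_large, spec_small)]

# Toy spectra: both events see the same path/site response, so the
# observed-spectrum ratio recovers the source-spectrum ratio
path = [0.9, 0.7, 0.4, 0.2]                 # shared attenuation/site response
src_large = [100.0, 80.0, 30.0, 8.0]        # larger event's source spectrum
src_small = [1.0, 1.0, 0.9, 0.6]            # smaller (EGF-like) event
obs_large = [s * p for s, p in zip(src_large, path)]
obs_small = [s * p for s, p in zip(src_small, path)]
ratio = spectral_ratio(obs_large, obs_small)
```

    Fitting a source model to such ratios, rather than to raw spectra, is what lets studies like this one constrain source parameters without an explicit attenuation correction.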

  4. Transient postseismic mantle relaxation following 2004 Sumatra earthquake: implications of seismic vulnerability in the Andaman-Nicobar region

    NASA Astrophysics Data System (ADS)

    Reddy, C. D.; Prajapati, S. K.; Sunil, P. S.; Arora, S. K.

    2012-02-01

    Throughout the world, the tsunami generation potential of some large under-sea earthquakes contributes significantly to regional seismic hazard, which gives rise to significant risk in near-shore provinces where human settlements exist in sizeable numbers, often referred to as coastal seismic risk. In this context, we show from the pertinent GPS data that the transient stresses generated by the viscoelastic relaxation process taking place in the mantle are capable of rupturing major faults by stress transfer from the mantle through the lower crust, including triggering additional rupture on other major faults. We also infer that postseismic relaxation at relatively large depths can push some fault segments to reactivation, causing failure sequences. As an illustration of these effects, we consider in detail the earthquake sequence comprising six events, starting from the main event of Mw = 7.5 on 10 August 2009 and tapering off to a small earthquake of Mw = 4.5 on 2 February 2011, over a period of eighteen months in the intensely seismic Andaman Islands between India and Myanmar. The persisting transient stresses, the spatio-temporal seismic pattern, the modeled Coulomb stress changes, and the southward migration of earthquake activity have increased the probability of moderate earthquakes recurring in the northern Andaman region, particularly closer to or somewhat south of Diglipur.

  5. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions can be very large, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probability of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples, again using Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.
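    Once a large sample of scenario intensities exists, the Monte Carlo exceedance-probability step reduces to counting the fraction of samples above a threshold. A minimal sketch, with a hypothetical lognormal intensity distribution standing in for the simulated ground-motion samples:

```python
import random

def prob_exceedance(samples, threshold):
    """Monte Carlo estimate of P(intensity > threshold) from scenario samples."""
    return sum(1 for x in samples if x > threshold) / len(samples)

random.seed(0)
# Hypothetical intensity samples (arbitrary units); the median of this
# lognormal is 1.0, so P(X > 1.0) should come out near 0.5
samples = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]
p = prob_exceedance(samples, 1.0)
```

    The PEDEC computation follows the same counting pattern after mapping each intensity sample through a vulnerability function into a loss sample.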

  6. Cumulative co-seismic fault damage and feedbacks on earthquake rupture

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Aben, F. M.; Ostermeijer, G.; Rockwell, T. K.; Doan, M. L.

    2017-12-01

    The importance of the damage zone in the faulting and earthquake process is widely recognized, but our understanding of how damage zones are created, what their properties are, and how they feed back into the seismic cycle remains remarkably limited. Firstly, damaged rocks have reduced elastic moduli, cohesion and yield strength, which can cause attenuation and potentially non-linear wave propagation effects during ruptures. Secondly, damaged fault rocks are generally more permeable than intact rocks, and hence play a key role in the migration of fluids in and around fault zones over the seismic cycle. Finally, the dynamic generation of damage as the earthquake propagates can itself influence the dynamics of rupture propagation, by increasing the amount of energy dissipation, decreasing the rupture velocity, modifying the size of the earthquake, changing the efficiency of weakening mechanisms such as thermal pressurisation of pore fluids, and even generating seismic waves itself. All of these effects imply that a feedback exists between the damage imparted immediately after rupture propagation, at the early stages of fault slip, and the effects of that damage on subsequent rupture dynamics. In recent years, much debate has been sparked by the identification of so-called 'pulverized rocks' described on various crustal-scale faults, a type of intensely damaged fault rock which has undergone minimal shear strain, and the occurrence of which has been linked to damage induced by transient high strain-rate stress perturbations during earthquake rupture. Damage induced by such transient stresses, whether compressional or tensional, likely constitutes heterogeneous modulation of the remote stresses that will impart significant changes on the strength, elastic and fluid flow properties of a fault zone immediately after rupture propagation, at the early stage of fault slip.
In this contribution, we will demonstrate laboratory and field examples of two dynamic mechanisms

  7. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are monitored efficiently and continuously. 
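    Automatic detection systems of this kind are commonly built around a short-term/long-term average (STA/LTA) trigger: a burst of signal energy raises the short-term average well above the long-term background, and a threshold crossing flags a detection. The sketch below is a generic illustration of that idea, not the actual algorithm of the USGS system described in the abstract.

```python
def sta_lta(signal, n_sta, n_lta):
    """Short-term/long-term average ratio of absolute amplitudes, computed
    at each sample once the long-term window is full."""
    ratios = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta else 0.0)
    return ratios

# Toy trace: low-amplitude noise with a 10-sample event in the middle
noise = [0.1] * 50
event = [1.0] * 10
ratios = sta_lta(noise + event + noise, n_sta=5, n_lta=30)
triggered = max(ratios) > 3.0   # threshold crossing flags a detection
```

    Production detectors add per-station thresholds, coincidence logic across the network, and the station-health checks the abstract mentions.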

  8. Satellite Geodetic Constraints On Earthquake Processes: Implications of the 1999 Turkish Earthquakes for Fault Mechanics and Seismic Hazards on the San Andreas Fault

    NASA Technical Reports Server (NTRS)

    Reilinger, Robert

    2005-01-01

    Our principal activities during the initial phase of this project include: 1) continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET); 2) establishing three North Anatolian fault crossing profiles (10 sites/profile) at locations that experienced major surface-fault earthquakes at different times in the past, to examine strain accumulation as a function of time in the earthquake cycle (2004); 3) repeat observations of selected sites in the fault-crossing profiles (2005); 4) repeat surveys of the Marmara GPS network to continue monitoring postseismic deformation; 5) refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area; 6) continuing development of models of afterslip and distributed viscoelastic deformation for the earthquake cycle. We are keeping in close contact with MIT colleagues (Brad Hager and Eric Hetland), who are developing models for southern California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat micro-gravity measurements at the MAGNET sites and have provided us with estimates of gravity change during the period 2003-2005.

  9. Antarctic icequakes triggered by the 2010 Maule earthquake in Chile

    NASA Astrophysics Data System (ADS)

    Peng, Zhigang; Walter, Jacob I.; Aster, Richard C.; Nyblade, Andrew; Wiens, Douglas A.; Anandakrishnan, Sridhar

    2014-09-01

    Seismic waves from distant, large earthquakes can almost instantaneously trigger shallow micro-earthquakes and deep tectonic tremor as they pass through Earth's crust. Such remotely triggered seismic activity mostly occurs in tectonically active regions. Triggered seismicity is generally considered to reflect shear failure on critically stressed fault planes and is thought to be driven by dynamic stress perturbations from both Love and Rayleigh types of surface seismic wave. Here we analyse seismic data from Antarctica in the six hours leading up to and following the 2010 Mw 8.8 Maule earthquake in Chile. We identify many high-frequency seismic signals during the passage of the Rayleigh waves generated by the Maule earthquake, and interpret them as small icequakes triggered by the Rayleigh waves. The source locations of these triggered icequakes are difficult to determine owing to sparse seismic network coverage, but the triggered events generate surface waves, so are probably formed by near-surface sources. Our observations are consistent with tensile fracturing of near-surface ice or other brittle fracture events caused by changes in volumetric strain as the high-amplitude Rayleigh waves passed through. We conclude that cryospheric systems can be sensitive to large distant earthquakes.

  10. Distribution and Characteristics of Repeating Earthquakes in Northern California

    NASA Astrophysics Data System (ADS)

    Waldhauser, F.; Schaff, D. P.; Zechar, J. D.; Shaw, B. E.

    2012-12-01

    Repeating earthquakes are playing an increasingly important role in the study of fault processes and behavior, and have the potential to improve hazard assessment, earthquake forecasting, and seismic monitoring capabilities. These events rupture the same fault patch repeatedly, generating virtually identical seismograms. In California, repeating earthquakes have been found predominantly along the creeping section of the central San Andreas Fault, where they are believed to represent failing asperities on an otherwise creeping fault. Here, we use the northern California double-difference catalog of 450,000 precisely located events (1984-2009) and an associated database of 2 billion waveform cross-correlation measurements to systematically search for repeating earthquakes across various tectonic regions. An initial search for pairs of earthquakes with high correlation coefficients and similar magnitudes resulted in 4,610 clusters including a total of over 26,000 earthquakes. A subsequent double-difference re-analysis of these clusters resulted in 1,879 sequences (8,640 events) in which a common rupture area can be resolved to a precision of a few tens of meters or less. These repeating earthquake sequences (RES) include between 3 and 24 events with magnitudes up to ML=4. We compute precise relative magnitudes between events in each sequence from differential amplitude measurements. Differences between these and standard coda-duration magnitudes have a standard deviation of 0.09. The RES occur throughout northern California, but RES with 10 or more events (6%) occur only along the central San Andreas and Calaveras faults. We are establishing baseline characteristics for each sequence, such as recurrence intervals and their coefficient of variation (CV), in order to compare them across tectonic regions. CVs for these clusters range from 0.002 to 2.6, indicating a range of behavior between periodic occurrence (CV~0), random occurrence, and temporal clustering. 
10% of the RES
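    The coefficient of variation used above to distinguish periodic, random, and clustered recurrence can be sketched directly from a sequence's event times. The event times below are toy values, not catalog data.

```python
import statistics

def recurrence_cv(event_times):
    """Coefficient of variation of recurrence intervals: CV ~ 0 for periodic
    sequences, ~1 for Poissonian (random) occurrence, > 1 for temporal
    clustering."""
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    return statistics.pstdev(intervals) / statistics.mean(intervals)

periodic = [0, 10, 20, 30, 40]     # identical intervals -> CV = 0
clustered = [0, 1, 2, 30, 31]      # bursts separated by a long gap -> CV > 1
print(recurrence_cv(periodic))   # 0.0
```

    Comparing such CVs across tectonic regions is exactly the baseline characterization the abstract describes.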

  11. Seismicity in the source areas of the 1896 and 1933 Sanriku earthquakes and implications for large near-trench earthquake faults

    NASA Astrophysics Data System (ADS)

    Obana, Koichiro; Nakamura, Yasuyuki; Fujie, Gou; Kodaira, Shuichi; Kaiho, Yuka; Yamamoto, Yojiro; Miura, Seiichi

    2018-03-01

    In the northern part of the Japan Trench, the 1933 Showa-Sanriku earthquake (Mw 8.4), an outer-trench, normal-faulting earthquake, occurred 37 yr after the 1896 Meiji-Sanriku tsunami earthquake (Mw 8.0), a shallow, near-trench, plate-interface rupture. Tsunamis generated by both earthquakes caused severe damage along the Sanriku coast. Precise locations of earthquakes in the source areas of the 1896 and 1933 earthquakes have not previously been obtained because they occurred at considerable distances from the coast in deep water beyond the maximum operational depth of conventional ocean bottom seismographs (OBSs). In 2015, we incorporated OBSs designed for operation in deep water (ultradeep OBSs) in an OBS array during two months of seismic observations in the source areas of the 1896 and 1933 Sanriku earthquakes to investigate the relationship of seismicity there to outer-rise normal-faulting earthquakes and near-trench tsunami earthquakes. Our analysis showed that seismicity during our observation period occurred along three roughly linear trench-parallel trends in the outer-trench region. Seismic activity along these trends likely corresponds to aftershocks of the 1933 Showa-Sanriku earthquake and the Mw 7.4 normal-faulting earthquake that occurred 40 min after the 2011 Tohoku-Oki earthquake. Furthermore, changes of the clarity of reflections from the oceanic Moho on seismic reflection profiles and low-velocity anomalies within the oceanic mantle were observed near the linear trends of the seismicity. The focal mechanisms we determined indicate that an extensional stress regime extends to about 40 km depth, below which the stress regime is compressional. These observations suggest that rupture during the 1933 Showa-Sanriku earthquake did not extend to the base of the oceanic lithosphere and that compound rupture of multiple or segmented faults is a more plausible explanation for that earthquake. The source area of the 1896 Meiji-Sanriku tsunami earthquake is

  12. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  13. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point-source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D; Larsen, 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed a simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 with 2,500 events. We also obtained source moment, corner frequency, and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small- to moderate-sized earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
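    The Brune source model fitted above has a simple omega-squared spectral shape: flat (proportional to seismic moment) below the corner frequency and falling off as f^-2 above it. A hedged sketch up to a constant scale factor; the moment and corner-frequency values are illustrative assumptions, not estimates from the study.

```python
def brune_spectrum(f, m0, fc):
    """Brune omega-squared source displacement spectrum, up to a constant
    scale factor: ~ m0 below the corner frequency fc, ~ m0*(fc/f)**2 above."""
    return m0 / (1.0 + (f / fc) ** 2)

m0, fc = 1e15, 2.0                      # hypothetical moment (N*m), corner (Hz)
plateau = brune_spectrum(0.1, m0, fc)   # low frequency: close to m0
rolloff = brune_spectrum(20.0, m0, fc)  # well above fc: roughly m0*(fc/f)**2
```

    Dividing a small event's observed spectrum by this model shape is the deconvolution step the abstract uses to turn aftershock recordings into empirical Green's functions.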

  14. Analogue modelling of the rupture process of vulnerable stalagmites in an earthquake simulator

    NASA Astrophysics Data System (ADS)

    Gribovszki, Katalin; Bokelmann, Götz; Kovács, Károly; Hegymegi, Erika; Esterhazy, Sofi; Mónus, Péter

    2017-04-01

    Earthquakes hit urban centers in Europe infrequently, but occasionally with disastrous effects. Obtaining an unbiased view of seismic hazard is therefore very important. In principle, the best way to test Probabilistic Seismic Hazard Assessments (PSHA) is to compare them with observations that are entirely independent of the procedure used to produce the PSHA models. Arguably, the most valuable information in this context is information on long-term hazard, namely maximum intensities (or magnitudes) occurring over time intervals that are at least as long as a seismic cycle. Long-term information can in principle be gained from intact and vulnerable stalagmites in natural caves. These formations have survived all earthquakes that have occurred over thousands of years, depending on the age of the stalagmite. Their "survival" requires that the horizontal ground acceleration has never exceeded a certain critical value within that time period. To determine this critical value of horizontal ground acceleration more precisely, we need to understand the failure process of these intact and vulnerable stalagmites. More detailed information on the rupture of vulnerable stalagmites is required, and we have to know how much it depends on the shape and substance of the investigated stalagmite. Predicting stalagmite failure limits using numerical modelling involves a number of approximations, e.g. in generating a manageable digital model. It therefore seemed reasonable to investigate the problem by analogue modelling as well. One advantage of analogue modelling is that nearly realistic conditions can be produced by simple and quick laboratory methods. The model sample bodies were made from different types of concrete and were cut out from real broken stalagmites originating from the investigated caves. These bodies were reduced in scale, with shapes similar to the original investigated stalagmites. During the measurements we could change both the shape and

  15. Source process and tectonic implication of the January 20, 2007 Odaesan earthquake, South Korea

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Ali K.; Kim, K. Y.; Fnais, M. S.; Al-Amri, A. M.

    2014-04-01

    The source process of the January 20, 2007, Mw 4.5 Odaesan earthquake in South Korea is investigated in the low- and high-frequency bands, using velocity and acceleration waveform data recorded by the Korea Meteorological Administration Seismographic Network at distances less than 70 km from the epicenter. Synthetic Green's functions are computed for the low-frequency band of 0.1-0.3 Hz using the wave-number integration technique and a one-dimensional velocity model beneath the epicentral area. An iterative grid search across strike, dip, rake, and focal depth was performed to find the best-fit double-couple mechanism. To resolve the nodal-plane ambiguity, the spatiotemporal slip distribution on the fault surface was recovered using a non-negative least-squares algorithm for each set of grid-searched parameters. A focal depth of 10 km was determined through the grid search over depths in the range of 6-14 km. The best-fit double-couple mechanism obtained from the finite-source model indicates a vertical strike-slip faulting mechanism. The NW-striking fault plane gives a comparatively smaller root-mean-square (RMS) error than its auxiliary plane. The slip pattern indicates a simple source process in the low-frequency band, in which the event effectively acts as a point source. Three empirical Green's functions are adopted to investigate the source process in the high-frequency band. A set of slip models was recovered on both nodal planes of the focal mechanism with rupture velocities in the range of 2.0-4.0 km/s. Although there is only a small difference between the RMS errors produced by the two orthogonal nodal planes, the SW-dipping plane gives a smaller RMS error than its auxiliary plane. 
The slip distribution in the high-frequency analysis is characterized by an oblique pattern recovered around the hypocenter, indicating a complex rupture scenario for such a moderate-sized earthquake, similar to those reported
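
The grid-search-plus-NNLS strategy described above can be sketched as follows. This is a schematic stand-in, not the authors' implementation: the Green's functions are replaced by a hypothetical random forward operator keyed to the trial parameters, and a simple projected-gradient non-negative least-squares solver is used in place of a library routine such as scipy.optimize.nnls.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def nnls(G, d, n_iter=500):
    # Projected-gradient non-negative least squares:
    # gradient step on ||Gx - d||^2, then clip negatives to zero.
    x = np.zeros(G.shape[1])
    step = 1.0 / np.linalg.norm(G.T @ G, 2)
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (G.T @ (G @ x - d)))
    return x

# Hypothetical forward operator: "Green's functions" for one candidate
# (strike, dip, rake, depth), mapping subfault slips to waveform samples.
def greens(strike, dip, rake, depth, n_data=200, n_sub=16):
    seed = hash((strike, dip, rake, depth)) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal((n_data, n_sub))

# Synthetic "observed" data from a known parameter set and slip model.
true_params = (320, 80, 10, 10)
true_slip = np.abs(rng.standard_normal(16))
d_obs = greens(*true_params) @ true_slip

best = None
for strike, dip, rake, depth in product((310, 320, 330), (70, 80, 90),
                                        (0, 10, 20), (6, 10, 14)):
    G = greens(strike, dip, rake, depth)
    slip = nnls(G, d_obs)                       # non-negative slip on subfaults
    rms = np.sqrt(np.mean((G @ slip - d_obs) ** 2))
    if best is None or rms < best[0]:
        best = (rms, (strike, dip, rake, depth))

print("best-fit (strike, dip, rake, depth):", best[1])
```

The candidate whose forward operator can reproduce the data with a non-negative slip model attains the lowest RMS, which is how the grid search simultaneously selects the mechanism and resolves slip.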

  16. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool for investigating the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper comprehension of the simulator's outcomes; in fact, within complex models, it may be difficult to understand which physical parameters are most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is fault-interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with almost periodic behavior that interact according to a Coulomb failure function model; (2) long-term behavior showing supercycles of seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
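
A toy version of such a simulator (quasi-periodic faults plus static Coulomb stress transfer) can be written in a few lines. All numbers here, including the loading rate, failure threshold, and coupling strengths, are arbitrary illustrative choices, not the paper's calibrated California fault data.

```python
import numpy as np

rng = np.random.default_rng(42)

n_faults = 10
loading_rate = 1.0                       # stress units per year
threshold = 100.0                        # failure stress
# Hypothetical Coulomb interaction matrix: stress transferred to fault j
# when fault i fails (positive = moved closer to failure).
coupling = 0.05 * threshold * rng.random((n_faults, n_faults))
np.fill_diagonal(coupling, 0.0)

def simulate(years, noise=0.0):
    """Each fault loads linearly, fails at `threshold` (optionally perturbed
    stochastically), drops its stress to zero, and transfers Coulomb stress
    to the others; a triggered fault fails on the next time step, producing
    short-term clustering. Returns a chronological list of (time, fault)."""
    stress = threshold * rng.random(n_faults)   # random initial state
    events = []
    for t in np.arange(0.0, years, 0.01):
        stress += loading_rate * 0.01
        failed = np.where(
            stress >= threshold * (1 + noise * rng.standard_normal(n_faults)))[0]
        for i in failed:
            events.append((round(t, 2), int(i)))
            stress += coupling[i]        # static Coulomb stress transfer
            stress[i] = 0.0              # full stress drop on the failed fault
    return events

events = simulate(500.0)
print(f"{len(events)} events in 500 yr; first few: {events[:3]}")
```

Raising `noise` above zero randomizes the recurrence of each fault, which is the knob the abstract describes as destroying supercycles and fault synchronization.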

  17. Study on Earthquake Emergency Evacuation Drill Trainer Development

    NASA Astrophysics Data System (ADS)

    ChangJiang, L.

    2016-12-01

    With the improvement of China's urbanization, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms, and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we developed simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and selecting people's location information based on the actual conditions of the buildings. Based on the simulation results, drills can then be conducted in the same building. RFID technology can be used here for drill data collection, reading personal information and sending it to the evacuation simulation software via WiFi. The simulation software then compares the simulated data with the information from the actual evacuation process, such as evacuation time, evacuation paths, and congestion nodes. Finally, it provides a comparative analysis report with an assessment result and an optimization proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process framework for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable, and scientific, increasing urban hazard-coping capacity.
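
The core of such a drill simulator, a static shortest-path field computed by breadth-first search plus agents stepping downhill with collision avoidance, can be sketched as below. The floor plan and movement rules are invented minimal examples, not the software described in the abstract.

```python
from collections import deque

# Minimal grid sketch: '#' wall, 'E' exit, '.' floor.
plan = ["#########",
        "#..#....#",
        "#..#.##.#",
        "#....#..E",
        "#########"]

def distance_field(plan):
    """BFS outward from every exit: each reachable floor cell gets its
    shortest step-count to an exit (the field agents descend)."""
    rows, cols = len(plan), len(plan[0])
    dist = {(r, c): None for r in range(rows) for c in range(cols)
            if plan[r][c] != '#'}
    q = deque((rc, 0) for rc in dist if plan[rc[0]][rc[1]] == 'E')
    for rc, d in q:
        dist[rc] = d
    while q:
        (r, c), d = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in dist and dist[(nr, nc)] is None:
                dist[(nr, nc)] = d + 1
                q.append(((nr, nc), d + 1))
    return dist

def evacuate(plan, agents):
    """Cellular-automaton step: each tick, every agent moves to its
    lowest-distance free neighbour; collisions are avoided by refusing
    already-claimed targets. Returns the tick count until all have exited."""
    dist = distance_field(plan)
    t = 0
    while agents:
        t += 1
        nxt = set()
        for r, c in sorted(agents, key=lambda rc: dist[rc]):
            moves = [(nr, nc)
                     for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                     if (nr, nc) in dist and dist[(nr, nc)] is not None
                     and dist[(nr, nc)] < dist[(r, c)] and (nr, nc) not in nxt]
            tgt = min(moves, key=lambda rc: dist[rc], default=(r, c))
            if plan[tgt[0]][tgt[1]] != 'E':    # reaching an exit leaves the grid
                nxt.add(tgt)
        agents = nxt
    return t

print("evacuation ticks:", evacuate(plan, {(1, 1), (2, 1), (3, 2)}))
```

Comparing this simulated tick count and the congestion cells against RFID-logged positions from a real drill is the kind of contrastive analysis the trainer automates.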

  18. Has El Salvador Fault Zone produced M ≥ 7.0 earthquakes? The 1719 El Salvador earthquake

    NASA Astrophysics Data System (ADS)

    Canora, C.; Martínez-Díaz, J.; Álvarez-Gómez, J.; Villamor, P.; Ínsua-Arévalo, J.; Alonso-Henar, J.; Capote, R.

    2013-05-01

    Historically, large earthquakes, Mw ≥ 7.0, in the El Salvador area have been attributed to activity in the Cocos-Caribbean subduction zone. This is correct for most of the earthquakes of magnitude greater than 6.5. However, recent paleoseismic evidence points to the existence of large earthquakes associated with rupture of the El Salvador Fault Zone, an E-W oriented strike-slip fault system that extends for 150 km through central El Salvador. To calibrate our results from paleoseismic studies, we have analyzed the historical seismicity of the area. In particular, we suggest that the 1719 earthquake can be associated with paleoseismic activity evidenced in the El Salvador Fault Zone. A reinterpreted isoseismal map for this event suggests that the reported damage could have been a consequence of rupture of the El Salvador Fault Zone, rather than of the subduction zone. The isoseismal pattern is similar to those of other upper-crustal earthquakes in comparable tectonovolcanic environments. We thus challenge the traditional assumption that only the subduction zone is capable of generating earthquakes of magnitude greater than 7.0 in this region. This result has broad implications for future risk management in the region. The potential occurrence of strong ground motion, significantly higher and closer to the Salvadorian population than assumed to date, must be considered in seismic hazard assessment studies in this area.

  19. Exploiting broadband seismograms and the mechanism of deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Jiao, Wenjie

    1997-09-01

    Modern broadband seismic instrumentation has provided enormous opportunities to retrieve information in almost any frequency band of seismic interest. In this thesis, we have investigated the long-period response of broadband seismometers and the problem of recovering actual ground motion. For the first time, we recovered the static offset for an earthquake from dynamic seismograms. The very long period near- and intermediate-field waves from the 1994 large Bolivian deep earthquake (depth = 630 km, Mw = 8.2) and the 1997 large Argentina deep earthquake (depth = 285 km, Mw = 7.1) are successfully recovered from portable broadband recordings by the BANJO and APVC networks. These waves provide another dynamic window into the seismic source process and may provide unique information to help constrain the source dynamics of deep earthquakes in the future. We have developed a new method to locate global explosion events based on broadband waveform stacking and simulated annealing. This method utilizes the information provided by the full broadband waveforms. Instead of "picking times", the character of the wavelet is used for locating events. The application of this methodology to a Lop Nor nuclear explosion is very successful, and suggests a procedure for automatic monitoring. We have discussed the problem of deep earthquakes from the viewpoints of rock mechanics and seismology. The rupture propagation of deep earthquakes requires a slip-weakening process unlike that for shallow events. However, this process is not necessarily the same as the process that triggers the rupture. Partial melting due to stress release is proposed to account for the slip-weakening process in deep earthquake rupture. The energy required for partial melting in this model is on the same order as the maximum energy required for the slip-weakening process in shallow earthquake rupture. However, the verification of this model requires experimental work on the thermodynamic
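
The waveform-stacking location idea can be illustrated on synthetic data: maximize the coherence of traces shifted by predicted travel times, searching with simulated annealing instead of picking arrival times. The station geometry, wave speed, envelope shapes, and annealing schedule below are all invented for the sketch, not taken from the thesis.

```python
import math, random

random.seed(1)
V = 5.0                                    # assumed wave speed, km/s
stations = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 120)]
true_src, t0 = (62.0, 35.0), 10.0

def tt(src, sta):
    return math.hypot(src[0] - sta[0], src[1] - sta[1]) / V

# Synthetic records: smooth envelopes (the "character of the wavelet"),
# not picked arrival times.
def record(sta):
    arr = t0 + tt(true_src, sta)
    return lambda t: math.exp(-((t - arr) ** 2) / 50.0)

traces = [record(s) for s in stations]

def stack_power(src, origin):
    """Shift each trace by the predicted travel time and stack:
    coherent alignment (the correct location) maximizes the sum."""
    return sum(tr(origin + tt(src, s)) for s, tr in zip(stations, traces))

# Simulated annealing over (x, y, origin time)
state = (50.0, 50.0, 8.0)
E, best, T = stack_power(state[:2], state[2]), None, 1.0
for step in range(20000):
    T = max(0.01, 0.999 * T)                         # geometric cooling
    cand = tuple(v + random.gauss(0, 2.0 * T + 0.1) for v in state)
    Ec = stack_power(cand[:2], cand[2])
    if Ec > E or random.random() < math.exp((Ec - E) / T):
        state, E = cand, Ec
    if best is None or E > best[1]:
        best = (state, E)

x, y, t = best[0]
print(f"located ({x:.1f}, {y:.1f}) km, t0 = {t:.1f} s")
```

The perfect stack here has power 5 (one per station); an annealed solution close to that value implies the trial location and origin time jointly align all envelopes.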

  20. Rupture processes of the 2013-2014 Minab earthquake sequence, Iran

    NASA Astrophysics Data System (ADS)

    Kintner, Jonas A.; Ammon, Charles J.; Cleveland, K. Michael; Herman, Matthew

    2018-06-01

    We constrain epicentroid locations, magnitudes and depths of moderate-magnitude earthquakes in the 2013-2014 Minab sequence using surface-wave cross-correlations, surface-wave spectra and teleseismic body-wave modelling. We estimate precise relative locations of 54 Mw ≥ 3.8 earthquakes using 48 409 teleseismic, intermediate-period Rayleigh and Love-wave cross-correlation measurements. To reduce significant regional biases in our relative locations, we shift the relative locations to align the Mw 6.2 main-shock centroid to a location derived from an independent InSAR fault model. Our relocations suggest that the events lie along a roughly east-west trend that is consistent with the faulting geometry in the GCMT catalogue. The results support previous studies that suggest the sequence consists of left-lateral strain release, but better defines the main-shock fault length and shows that most of the Mw ≥ 5.0 aftershocks occurred on one or two similarly oriented structures. We also show that aftershock activity migrated westwards along strike, away from the main shock, suggesting that Coulomb stress transfer played a role in the fault failure. We estimate the magnitudes of the relocated events using surface-wave cross-correlation amplitudes and find good agreement with the GCMT moment magnitudes for the larger events and underestimation of small-event size by catalogue MS. In addition to clarifying details of the Minab sequence, the results demonstrate that even in tectonically complex regions, relative relocation using teleseismic surface waves greatly improves the precision of relative earthquake epicentroid locations and can facilitate detailed tectonic analyses of remote earthquake sequences.
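
The basic measurement underlying such relative relocation is an inter-event delay time read from a waveform cross-correlation. Below is a minimal synthetic illustration; the group speed and wavelet are assumed, and real work inverts thousands of such station-pair measurements jointly.

```python
import numpy as np

# Two "events" recorded at one distant station: identical wavelets,
# time-shifted by (relative distance along the source-station azimuth) / c.
fs = 20.0                        # samples per second
c = 3.5                          # assumed Rayleigh-wave group speed, km/s
t = np.arange(0, 60, 1 / fs)

def wavelet(t_arr):
    return np.exp(-((t - t_arr) ** 2) / 4.0) * np.cos(2 * np.pi * 0.1 * (t - t_arr))

offset_km = 7.0                  # event B is 7 km farther from the station
a = wavelet(20.0)
b = wavelet(20.0 + offset_km / c)

# Cross-correlate and read the lag of the peak
lags = np.arange(-len(t) + 1, len(t)) / fs
cc = np.correlate(b, a, mode="full")
dt = lags[np.argmax(cc)]
print(f"measured delay {dt:.2f} s -> relative offset {dt * c:.1f} km")
```

Each delay constrains the projection of the inter-event vector onto one azimuth; combining many azimuths yields the relative epicentroids, and the correlation amplitudes carry the relative magnitudes.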

  1. Interactive Visualization to Advance Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn

    2008-04-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations.

  2. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  3. A seismoacoustic study of the 2011 January 3 Circleville earthquake

    NASA Astrophysics Data System (ADS)

    Arrowsmith, Stephen J.; Burlacu, Relu; Pankow, Kristine; Stump, Brian; Stead, Richard; Whitaker, Rod; Hayward, Chris

    2012-05-01

    We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated with the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings, we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3 Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly are not predicted at the remaining three arrays. Durations are typically predicted to within a factor of 2, and in some cases

  4. Source Rupture Process for the February 21, 2011, Mw6.1, New Zealand Earthquake and the Characteristics of Near-field Strong Ground Motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Shi, B.

    2011-12-01

    The Mw 6.1 New Zealand earthquake of February 21, 2011, occurred in the South Island, New Zealand, with its epicenter at longitude 172.70°E and latitude 43.58°S and a depth of 5 km. The earthquake occurred on a previously unknown blind fault involving oblique-thrust faulting, 9 km south of Christchurch, the third largest city of New Zealand, with a strike direction from east to west (United States Geological Survey, USGS, 2011). The earthquake killed at least 163 people and caused extensive structural damage in Christchurch. The peak ground acceleration (PGA) observed at station Heathcote Valley Primary School (HVSC), 1 km from the epicenter, reached almost 2.0 g. The ground-motion observations suggest that the buried earthquake source generated much higher near-fault ground motion. In this study, we have analyzed the earthquake source spectral parameters based on the strong-motion observations and estimated the near-fault ground motion based on Brune's circular fault model. The results indicate that the large ground motion may be caused by a higher dynamic stress drop, Δσd, or effective stress drop in Brune's terminology, in the major source rupture region. In addition, a dynamical composite source model (DCSM) has been developed to simulate the near-fault strong ground motion with associated fault rupture properties from the kinematic point of view. For comparison purposes, we also conducted broadband ground-motion predictions for station HVSC; the synthetic time histories produced for this station agree well with the observations in waveform, peak values, and frequency content, which clearly indicates that the higher dynamic stress drop during the fault rupture may play an important role in the anomalous ground-motion amplification. The preliminary simulation for station HVSC shows that the synthetic seismograms have a realistic appearance in waveform and
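
For reference, the quantities involved in a Brune-type analysis follow from two standard relations: the Hanks-Kanamori moment-magnitude relation and the Brune (1970) corner-frequency-to-radius relation. The corner frequency and shear-wave speed below are example inputs for illustration, not values estimated in the study.

```python
import math

def brune_params(mw, fc_hz, beta_km_s=3.5):
    """Source radius and stress drop from the Brune (1970) circular model.
    M0 from log10(M0) = 1.5*Mw + 9.1 (N*m); radius r = 2.34*beta/(2*pi*fc);
    stress drop = (7/16) * M0 / r**3."""
    m0 = 10 ** (1.5 * mw + 9.1)                # seismic moment, N*m
    beta = beta_km_s * 1e3                     # shear-wave speed, m/s
    r = 2.34 * beta / (2 * math.pi * fc_hz)    # source radius, m
    dsigma = 7.0 / 16.0 * m0 / r ** 3          # stress drop, Pa
    return m0, r, dsigma

m0, r, ds = brune_params(6.1, 0.35)            # fc = 0.35 Hz assumed
print(f"M0 = {m0:.2e} N*m, r = {r / 1e3:.1f} km, stress drop = {ds / 1e6:.1f} MPa")
```

Because stress drop scales with the cube of the corner frequency at fixed moment, a modest shift in fc implies a large change in inferred stress drop, which is why the high near-fault PGA can be read as evidence of an elevated dynamic stress drop.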

  5. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos, designed for school classrooms, promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  6. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation
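
The simulation approach in topic (i) can be caricatured as follows: draw Fourier phase differences conditional on amplitude, integrate them into phase angles, attach them to an amplitude spectrum, and inverse-transform. The spectrum shape and the conditional distribution below are crude placeholders, not the prediction formulas fitted to the California recordings.

```python
import numpy as np

rng = np.random.default_rng(7)

n, dt = 2048, 0.01                   # samples, sampling interval (s)
freqs = np.fft.rfftfreq(n, dt)

# Assumed smooth Fourier amplitude spectrum (placeholder shape, not a
# calibrated ground-motion model).
fc = 2.0
amp = freqs / (1.0 + (freqs / fc) ** 2)

# Phase differences drawn conditional on amplitude: larger-amplitude
# components get differences tightly clustered around a common mean,
# which concentrates their energy in time; low-amplitude components
# scatter more, which stretches the signal's duration.
mean_diff = -2 * np.pi * freqs[1] * (n * dt / 3)   # envelope peak near t/3
scale = 0.8 * (1 - amp / amp.max()) + 0.05
diffs = rng.normal(mean_diff, scale)

phases = np.concatenate(([0.0], np.cumsum(diffs[1:])))   # integrate differences
spectrum = amp * np.exp(1j * phases)
accel = np.fft.irfft(spectrum, n)                         # synthetic acceleration
print("peak |a| at t =", np.argmax(np.abs(accel)) * dt, "s")
```

The mean phase difference acts as a group delay (it sets when the envelope peaks), while the amplitude-dependent scatter controls duration, which is the intuition behind simulating phase differences conditional on Fourier amplitude.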

  7. An integrated observational site for monitoring pre-earthquake processes in Peloponnese, Greece. Preliminary results.

    NASA Astrophysics Data System (ADS)

    Tsinganos, Kanaris; Karastathis, Vassilios K.; Kafatos, Menas; Ouzounov, Dimitar; Tselentis, Gerassimos; Papadopoulos, Gerassimos A.; Voulgaris, Nikolaos; Eleftheriou, Georgios; Mouzakiotis, Evangellos; Liakopoulos, Spyridon; Aspiotis, Theodoros; Gika, Fevronia; E Psiloglou, Basil

    2017-04-01

    We present the first results of developing a new integrated observational site in Greece to study pre-earthquake processes in Peloponnese, led by the National Observatory of Athens. We have developed a prototype multiparameter network approach using an integrated system aimed at monitoring and thorough study of pre-earthquake processes in the high-seismicity area of the Western Hellenic Arc (SW Peloponnese, Greece). The initial prototype of the new observational system consists of: (1) continuous real-time monitoring of radon accumulation in the ground through a network of radon sensors consisting of three gamma radiation detectors [NaI(Tl) scintillators]; (2) a nine-station seismic array installed to detect and locate events of low magnitude (less than M 1.0) in the offshore area of the Hellenic arc; (3) real-time weather monitoring systems (air temperature, relative humidity, precipitation, pressure); and (4) satellite thermal radiation from the AVHRR/NOAA-18 polar-orbit sensor. The first few months of operation revealed a number of pre-seismic radon variation anomalies before several earthquakes (M > 3.6). The radon increases systematically before the larger events; for example, a pronounced radon anomaly preceded the September 28, M 5.0 event (36.73°N, 21.87°E), 18 km ESE of Methoni. The seismic array assists in the evaluation of current seismicity and may allow identification of foreshock activity. Thermal anomalies in satellite images are also examined as an additional tool for evaluation and verification of the radon increase. According to the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) concept, atmospheric thermal anomalies observed before large seismic events are associated with an increase of radon concentration in the ground. Details of the integration of ground and space observations, the overall performance of the observational sites, and future plans for advancing cooperation in observations will be discussed.

  8. 25 April 2015 Gorkha Earthquake in Nepal Himalaya (Part 2)

    NASA Astrophysics Data System (ADS)

    Rao, N. Purnachandra; Burgmann, Roland; Mugnier, Jean-Louis; Gahalaut, Vineet; Pandey, Anand

    2017-06-01

    The response from the geosciences community working on the Himalaya in general, and the 2015 Nepal earthquakes in particular, was overwhelming, and after a rigorous review process thirteen papers were selected and published in Part 1. A few more good papers remained, and these are being brought out as Part 2 of the special issue. In the opening article, Jean-Louis Mugnier and colleagues provide a structural geological perspective of the 25 April 2015 Gorkha earthquake and highlight the role of segmentation in generating the Himalayan mega-thrusts. They infer segmentation by stable barriers in the HT that define barrier-type earthquake families. In another interesting piece of work, Pandey and colleagues map the crustal structure across the earthquake volume using a receiver function approach and infer a 5-km-thick low-velocity layer that connects to the MHT ramp. They are also able to correlate the rupture termination with the highest point of coseismic uplift. The last paper, by Shen et al., highlights the usefulness of the InSAR technique in mapping the coseismic slip distribution of the 25 April 2015 Gorkha earthquake. They infer a low stress drop and corner frequency which, coupled with hybrid modeling, explain the low level of slip heterogeneity and the frequency content of ground motion. We compliment the Journal of Asian Earth Sciences for bringing out the two volumes and hope that these efforts have made a distinct impact on furthering our understanding of seismogenesis in the Himalaya using the very latest data sets.

  9. Atypical soil behavior during the 2011 Tohoku earthquake ( Mw = 9)

    NASA Astrophysics Data System (ADS)

    Pavlenko, Olga V.

    2016-07-01

    To understand the physical mechanisms generating abnormally high peak ground accelerations (PGA; >1 g) during the Tohoku earthquake, models of nonlinear soil behavior during strong motion were constructed for 27 KiK-net stations located in the near-fault zones to the south of FKSH17. The data-processing method used was developed by Pavlenko and Irikura (Pure Appl Geophys 160:2365-2379, 2003) and previously applied to studying soil behavior at vertical-array sites during the 1995 Kobe (Mw = 6.8) and 2000 Tottori (Mw = 6.7) earthquakes. During the Tohoku earthquake, we did not observe widespread nonlinearity of soft soils, i.e., the reduction of shear moduli in soil layers at the beginning of strong motion and their recovery toward its end, as is usually observed during strong earthquakes. Manifestations of soil nonlinearity and reduction of shear moduli during strong motion were observed at sites located close to the source, in coastal areas. At remote sites, where abnormally high PGAs were recorded, shear moduli in soil layers increased and reached their maxima at the moments of highest intensity of the strong motion, indicating soil hardening. Shear moduli then decreased as the intensity of the strong motion declined. At soft-soil sites, the reduction of shear moduli was accompanied by a step-like decrease of the predominant frequencies of motion. Evidently, the observed soil hardening at the moments of highest intensity of the strong motion contributed to the abnormally high PGAs recorded during the Tohoku earthquake.

  10. Tremor, remote triggering and earthquake cycle

    NASA Astrophysics Data System (ADS)

    Peng, Z.

    2012-12-01

    Deep tectonic tremor and episodic slow-slip events have been observed at major plate-boundary faults around the Pacific Rim. These events have much longer source durations than regular earthquakes and are generally located near or below the seismogenic zone where regular earthquakes occur. Tremor and slow-slip events appear to be extremely stress sensitive and can be instantaneously triggered by distant earthquakes and solid-earth tides. However, many important questions remain open. For example, it is still not clear what the necessary conditions for tremor generation are, or how remote triggering affects the cycles of large earthquakes. Here I report a global search for tremor triggered by recent large teleseismic earthquakes. We mainly focus on major subduction zones around the Pacific Rim, including the southwest and northeast Japan subduction zones, the Hikurangi subduction zone in New Zealand, the Cascadia subduction zone, and the major subduction zones in Central and South America. In addition, we examine major strike-slip faults around the Caribbean plate, the Queen Charlotte fault off the northern Pacific Northwest coast, and the San Andreas fault system in California. In each place, we first identify triggered tremor as a high-frequency, non-impulsive signal in phase with the large-amplitude teleseismic waves. We also calculate the dynamic stress and check the triggering relationship with the Love and Rayleigh waves. Finally, we calculate the triggering potential from the local fault orientation and surface-wave incidence angles. Our results suggest that tremor exists at many plate-boundary faults in different tectonic environments and can be triggered by dynamic stresses as low as a few kPa. In addition, we summarize recent observations of slow-slip events and earthquake swarms triggered by large distant earthquakes. Finally, we propose several mechanisms that could explain the apparent clustering of large earthquakes around the world.
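
The few-kPa triggering threshold quoted above comes from a standard plane-wave estimate: the dynamic stress carried by a passing surface wave is roughly the shear modulus times the peak particle velocity divided by the phase velocity. Below is a sketch with typical crustal values; the modulus and phase velocity are assumed, not site-specific.

```python
def dynamic_stress_kpa(pgv_cm_s, phase_velocity_km_s=3.5, shear_modulus_gpa=30.0):
    """First-order dynamic stress for a passing surface wave:
    sigma ~ G * v / c, i.e. plane-wave strain (v/c) times shear modulus G."""
    v = pgv_cm_s * 1e-2               # peak ground velocity, m/s
    c = phase_velocity_km_s * 1e3     # phase velocity, m/s
    g = shear_modulus_gpa * 1e9       # shear modulus, Pa
    return g * v / c / 1e3            # dynamic stress, kPa

# e.g. a 0.1 cm/s teleseismic surface wave carries a stress of order several kPa
print(f"{dynamic_stress_kpa(0.1):.1f} kPa")
```

Stresses of this size are orders of magnitude below earthquake stress drops (MPa), which is why the extreme stress sensitivity of tremor is notable.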

  11. Earthquake risk assessment of Alexandria, Egypt

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Gaber, Hanan; Ibrahim, Hamza

    2015-01-01

    Throughout historical and recent times, Alexandria has suffered great damage due to earthquakes from both near- and far-field sources; sometimes the sources of such damage are not well known. During the twentieth century, the city was shaken by several earthquakes generated by inland dislocations (e.g., 29 Apr. 1974, 12 Oct. 1992, and 28 Dec. 1999) and by the African continental margin (e.g., 12 Sept. 1955 and 28 May 1998). This study therefore estimates the earthquake ground shaking and the consequent impacts in Alexandria on the basis of two earthquake scenarios. The simulation results show that Alexandria is affected by both scenarios in broadly the same manner, although the number of casualties in the first scenario (inland dislocation) is twice that of the second (African continental margin). Under the first scenario, an estimated 2.27 % of Alexandria's total buildings (12.9 million, 2006 census) would be affected, with injuries to 0.19 % and deaths to 0.01 % of the total population (4.1 million, 2006 census). The earthquake risk profile reveals that three districts (Al-Montazah, Al-Amriya, and Shark) lie at high seismic risk, two districts (Gharb and Wasat) at moderate risk, and two districts (Al-Gomrok and Burg El-Arab) at low risk. Moreover, the building damage estimates show that Al-Montazah is the most vulnerable district, with 73 % of the expected damage concentrated there. The analysis shows that the Alexandria urban area faces high risk: informal areas and deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90 % of the estimated earthquake risk (building damage) is concentrated in the most densely populated districts (Al-Montazah, Al-Amriya, and Shark), and about 75 % of casualties are in the same districts.

  12. Spectral Estimation of Seismic Moment, Corner Frequency and Radiated Energy for Earthquakes in the Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Satriano, C.; Mejia Uquiche, A. R.; Saurel, J. M.

    2016-12-01

    The Lesser Antilles are situated at a convergent plate boundary where the North and South American plates subduct below the Caribbean Plate at a rate of about 2 cm/y. The subduction forms the volcanic arc of the Lesser Antilles and generates three types of seismicity: subduction earthquakes at the plate interface, intermediate-depth earthquakes within the subducting oceanic plates, and crustal earthquakes associated with the deformation of the Caribbean Plate. Although the seismicity rate is moderate, this zone has generated major earthquakes in the past, such as the subduction event of February 8, 1843, estimated at M 8.5 (Beauducel and Feuillet, 2012), the Mw 6.3 "Les Saintes" crustal earthquake of November 24, 2004 (Drouet et al., 2011), and the Mw 7.4 Martinique intermediate-depth earthquake of November 29, 2007 (Bouin et al., 2010). The seismic catalogue produced by the Volcanological and Seismological Observatories of Guadeloupe and Martinique comprises about 1000 events per year, most of moderate magnitude (M < 5.0). Observing and characterizing this background seismicity plays a fundamental role in understanding the processes of energy accumulation and release that prepare major earthquakes. For this reason, the catalogue needs to be complemented with information such as seismic moment, corner frequency, and radiated energy, which give access to important fault properties such as rupture size and the static and apparent stress drops. So far, this analysis has only been performed for the "Les Saintes" sequence (Drouet et al., 2011). Here we present a systematic study of the merged Lesser Antilles seismic catalogue (http://www.seismes-antilles.fr) between 2002 and 2013, using broadband data from the West Indies seismic network and recordings from the French Accelerometric Network.
The analysis is aimed at determining, from the inversion of S-wave displacement spectra, source parameters like seismic moment, corner frequency and radiated energy, as well as the inelastic
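
Inversions of S-wave displacement spectra of this kind conventionally assume an omega-square (Brune-type) source model, in which the seismic moment scales with the low-frequency plateau Ω0 and the corner frequency fc controls the high-frequency falloff. A simplified grid-search sketch under that assumption (the attenuation and site terms that a real inversion must include are omitted here):

```python
import numpy as np

def brune_spectrum(f, omega0, fc):
    """Omega-square source model: flat at low f, f^-2 falloff above fc."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_brune(freqs, spectrum):
    """Grid-search fit of (omega0, fc) to an S-wave displacement spectrum.

    Least-squares in the log domain; for each trial fc the optimal plateau
    omega0 has a closed form (log-average ratio of data to model shape).
    """
    best = None
    for fc in np.logspace(-1, 2, 200):          # trial corner frequencies, Hz
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        omega0 = np.exp(np.mean(np.log(spectrum) - np.log(shape)))
        misfit = np.sum((np.log(spectrum) - np.log(omega0 * shape)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, omega0, fc)
    return best[1], best[2]

# Synthetic check: recover the parameters of a noise-free spectrum.
f = np.logspace(-1, 1.5, 100)
omega0, fc = fit_brune(f, brune_spectrum(f, 2.0e-5, 3.0))
```

The seismic moment then follows from Ω0 through the usual far-field scaling (density, S-wave speed, hypocentral distance, and radiation-pattern terms).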

  13. Research on response spectrum of dam based on scenario earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoliang; Zhang, Yushan

    2017-10-01

    Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. First, the potential seismic source zone contributing most to the site hazard is identified from the results of probabilistic seismic hazard analysis (PSHA). Second, the magnitude and epicentral distance of the scenario earthquake are derived from the main faults and the historical earthquakes of that source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The scenario-earthquake response spectrum is lower than the probability-consistent response spectrum obtained by the PSHA method. The analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, combining the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is readily accepted in practice and provides a basis for the seismic design of hydraulic structures.
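
Once the scenario (M, R) pair is fixed, the response spectrum is built by querying a ground-motion model at each period. The sketch below uses a schematic one-term attenuation form with made-up coefficients as a stand-in for the far richer NGA relations, which add site, period, and mechanism terms:

```python
import math

def scenario_sa(magnitude, distance_km, c0=-1.0, c1=0.9, c2=1.3):
    """Median spectral acceleration (g) for a fixed scenario (M, R).

    Schematic form ln(SA) = c0 + c1*(M - 6) - c2*ln(R + 10); the
    coefficients are illustrative placeholders, not actual NGA values.
    In practice each spectral period has its own coefficient set.
    """
    return math.exp(c0 + c1 * (magnitude - 6.0) - c2 * math.log(distance_km + 10.0))

# A hypothetical scenario earthquake of M 6.5 at 15 km epicentral distance:
sa = scenario_sa(6.5, 15.0)
```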

  14. Investigating the rupture direction of induced earthquakes in the Central US using empirical Green's functions

    NASA Astrophysics Data System (ADS)

    Lui, S. K. Y.; Huang, Y.

    2017-12-01

    A clear understanding of the source physics of induced seismicity is key to effective seismic hazard mitigation. In particular, resolving rupture processes can shed light on the stress state prior to the main shock, as well as on the ground motion response. Recent numerical models suggest that, compared to their tectonic counterparts, induced earthquake ruptures are more prone to propagate unilaterally toward the injection well, where fluid pressure is high. However, this also depends on the location of the injection relative to the fault and has yet to be compared with field data. In this study, we utilize the rich pool of seismic data in the central US to constrain the rupture processes of major induced earthquakes. Implementing a forward-modeling method, we take smaller earthquake recordings as empirical Green's functions (eGfs) to model the beginning motions generated by large events and thereby resolve their rupture directions. One advantage of the empirical approach is that it bypasses the fundamental difficulty of resolving path and site effects. We select eGf events that are close to the target events in both space and time. For example, we use a Mw 3.6 aftershock approximately 3 km from the 2011 Mw 5.7 earthquake in Prague, OK as its eGf event. Preliminary results indicate a southwest rupture direction for the Prague main shock, possibly implying a higher fluid pressure concentration on the northeast end of the fault prior to the rupture. We will present further results on other Mw > 4.5 earthquakes in the states of Oklahoma and Kansas. With additional seismic stations installed in the past few years, events such as the 2014 Mw 4.9 Milan earthquake and the 2016 Mw 5.8 Pawnee earthquake are potential candidates with useful eGfs, as both have good data coverage and a substantial number of nearby aftershocks. We will discuss the implications of our findings for the causative relationships between injection operations and the induced rupture process.
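
The eGf idea can be sketched as a water-level spectral division: the small event is treated as the empirical path/site response, so dividing it out of the mainshock record leaves a relative source time function whose shape and azimuthal variation carry the directivity signal. A minimal sketch (alignment, windowing, and the azimuthal station coverage a real analysis needs are glossed over):

```python
import numpy as np

def egf_deconvolve(mainshock, egf, water_level=0.01):
    """Relative source time function by water-level spectral division.

    Treats the small-event record (eGf) as the empirical path response and
    divides it out of the mainshock spectrum; the water level stabilizes
    frequencies where the eGf spectrum is weak. Both records are assumed
    aligned and of equal length.
    """
    n = len(mainshock)
    M = np.fft.rfft(mainshock)
    G = np.fft.rfft(egf)
    power = np.abs(G) ** 2
    floor = water_level * power.max()
    return np.fft.irfft(M * np.conj(G) / np.maximum(power, floor), n)

# Sanity check: a mainshock that is a delayed, scaled copy of the eGf
# deconvolves to a spike at the delay with the scaling as its amplitude.
egf = np.zeros(128)
egf[10], egf[11] = 1.0, -0.5
rstf = egf_deconvolve(3.0 * np.roll(egf, 5), egf)
```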

  15. Adaptive vibration control of structures under earthquakes

    NASA Astrophysics Data System (ADS)

    Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung

    2017-04-01

    techniques, for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
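
Generalized predictive control, as described above, pairs a real-time parameter estimator with a receding-horizon controller. The identification half is typically a recursive least-squares update, sketched below in generic form (the regressor layout and forgetting factor are assumptions for illustration, not details from the paper):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with exponential forgetting.

    theta: (n,1) parameter estimate; P: (n,n) covariance; phi: length-n
    regressor of past inputs/outputs; y: new measurement; lam: forgetting
    factor that discounts old data so the model can track slow changes.
    """
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    error = y - (phi.T @ theta).item()            # one-step prediction error
    theta = theta + gain * error
    P = (P - gain @ phi.T @ P) / lam
    return theta, P

# Identify y = 2*u1 - 1*u2 online from noise-free data:
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, 2.0 * phi[0] - 1.0 * phi[1])
```

The controller side then uses the freshly updated parameters to predict the structural response over a short horizon and pick the control force minimizing a quadratic cost.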

  16. The Great Tohoku-Oki Earthquake and Tsunami of March 11, 2011 in Japan: A Critical Review and Evaluation of the Tsunami Source Mechanism

    NASA Astrophysics Data System (ADS)

    Pararas-Carayannis, George

    2014-12-01

    The great Tohoku-Oki earthquake of March 11, 2011 generated a very destructive and anomalously high tsunami. To understand its source mechanism, an examination was undertaken of the seismotectonics of the region and of the earthquake's focal mechanism, energy release, rupture patterns, and the spatial and temporal sequencing and clustering of major aftershocks. It was determined that the great tsunami resulted from a combination of crustal deformations of the ocean floor due to up-thrust tectonic motions, augmented by additional uplift due to the quake's slow and long rupturing process, as well as by large coseismic lateral movements which compressed and deformed the compacted sediments along the accretionary prism of the overriding plate. The deformation occurred randomly and non-uniformly along parallel normal faults and along en-echelon faults oblique to the earthquake's overall rupture direction, the latter failing in a sequential bookshelf manner with variable slip angles. As the 1992 Nicaragua and the 2004 Sumatra earthquakes demonstrated, such bookshelf failures of sedimentary layers can contribute to anomalously high tsunamis. As with the 1896 tsunami, additional ocean floor deformation and uplift of the sediments was responsible for the higher waves generated by the 2011 earthquake. The efficiency of tsunami generation was greater along the shallow eastern segment of the fault off the Miyagi Prefecture, where most of the earthquake's energy release and deformation occurred, while the segment off the Ibaraki Prefecture, where the rupture process was rapid, released less seismic energy, produced less compaction and deformation of sedimentary layers, and thus generated a tsunami of lesser offshore height. The greater tsunamigenic efficiency of the 2011 earthquake and the high degree of the tsunami's destructiveness along Honshu's coastlines resulted from vertical crustal displacements of more than 10 m due to up-thrust faulting and from lateral compression

  17. Stochastic dynamic modeling of regular and slow earthquakes

    NASA Astrophysics Data System (ADS)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered, not only to represent real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not represented in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as stochastic. The healing process of faults may likewise be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at the S-wave velocity is analogous to the kinetic theory of gases: thermal
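
The role of the Gaussian perturbations can be illustrated on a far simpler system than the authors' BIEM: a damped spring-slider whose friction term receives white-noise perturbations, integrated by Euler-Maruyama. This toy analogue (all parameter values are invented for illustration) shows how noise added to an otherwise deterministic governing equation alters the slip history:

```python
import math
import random

def simulate_slider(steps=20000, dt=1e-3, noise=0.0, seed=1):
    """Euler-Maruyama integration of a 1-D spring-slider analogue.

    dv/dt = (tau_load - k*x - f(v)) / m, where f(v) is a velocity-
    strengthening friction term plus Gaussian perturbations of amplitude
    `noise` (scaled by 1/sqrt(dt) so the noise is white in time). A toy
    analogue of the stochastic BIEM in the abstract, not the authors'
    formulation.
    """
    random.seed(seed)
    k, m, tau_load, a = 1.0, 1.0, 1.0, 0.5
    x, v = 0.0, 0.0
    xs = []
    for _ in range(steps):
        friction = a * v + noise * random.gauss(0.0, 1.0) / math.sqrt(dt)
        v += (tau_load - k * x - friction) / m * dt
        x += v * dt
        xs.append(x)
    return xs

# noise=0 relaxes smoothly to x = tau_load/k; noise>0 perturbs the path:
xs_det = simulate_slider(noise=0.0)
```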

  18. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  19. Are landslides in the New Madrid Seismic Zone the result of the 1811-1812 earthquake sequence or multiple prehistoric earthquakes?

    NASA Astrophysics Data System (ADS)

    Gold, Ryan; Williams, Robert; Jibson, Randall

    2014-05-01

    Previous research indicates that deep translational and rotational landslides along the bluffs east of the Mississippi River in western Tennessee were triggered by the M7-8 1811-1812 New Madrid earthquake sequence. Analysis of recently acquired airborne LiDAR data suggests the possibility of multiple generations of landslides, possibly triggered by older earthquake sequences of similar magnitude, which paleoliquefaction studies show occurred circa A.D. 1450 and A.D. 900. Using these LiDAR data, we have remapped recent landslides along two sections of the bluffs: a northern section near Reelfoot Lake and a southern section near Meeman-Shelby State Park (20 km north of Memphis, Tennessee). The bare-earth digital elevation models derived from these LiDAR data have a resolution of 0.5 m and reveal valuable details of topography despite the region's dense forest canopy. Our mapping confirms much of the previous landslide mapping, refutes a few previously mapped landslides, and reveals new, previously undetected landslides. Importantly, we observe that the landslide deposits in the Reelfoot region are characterized by rotated blocks with sharp uphill-facing scarps and steep headwall scarps, indicating relatively recent movement. In comparison, landslide deposits near Meeman-Shelby are muted in appearance, with headwall scarps and rotated blocks that are extensively dissected by gullies, indicating they might be an older generation of landslides. Because of these differences in morphology, we hypothesize that the landslides near Reelfoot Lake were triggered by the 1811-1812 earthquake sequence and that landslides near Meeman-Shelby resulted from shaking associated with earlier earthquake sequences. To test this hypothesis, we will evaluate differences in bluff height, local geology, vegetation, and proximity to known seismic sources.
Furthermore, planned fieldwork will help evaluate whether the observed landslide displacements occurred in single earthquakes or if they might

  20. Physics of Earthquake Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  1. Postseismic deformation following the 2015 Mw 7.8 Gorkha earthquake and the distribution of brittle and ductile crustal processes beneath Nepal

    NASA Astrophysics Data System (ADS)

    Moore, J. D. P.; Barbot, S.; Peng, D.; Yu, H.; Qiu, Q.; Wang, T.; Masuti, S.; Dauwels, J.; Lindsey, E. O.; Tang, C. H.; Feng, L.; Wei, S.; Hsu, Y. J.; Nanjundiah, P.; Lambert, V.; Antoine, S.

    2017-12-01

    Studies of geodetic data across the earthquake cycle indicate that a wide range of mechanisms contribute to cycles of stress buildup and relaxation. Both on-fault rate-and-state friction and off-fault rheologies can contribute to the observed deformation, in particular during the postseismic transient phase of the earthquake cycle. We present a novel approach to simulate on-fault and off-fault deformation simultaneously using analytical Green's functions for distributed deformation at depth [Barbot, Moore, and Lambert, 2017] and surface tractions, within an integral framework [Lambert and Barbot, 2016]. This allows us to jointly explore dynamic frictional properties on the fault and the plastic properties of the bulk rocks (including grain size and water distribution) in the lower crust at low computational cost, whilst taking into account contributions from topography and a surface approximation for gravity. These displacement and stress Green's functions can be used for both forward and inverse modelling of distributed shear, where the calculated strain rates can be converted to effective viscosities. Here, we draw insight from the postseismic geodetic observations following the 2015 Mw 7.8 Gorkha earthquake. We forward model afterslip using rate-and-state friction on the two-ramp décollement megathrust geometry presented by Hubbard et al. (2016), and viscoelastic relaxation using recent experimentally derived flow laws with transient rheology and the thermal structure from Cattin et al. (2001). Multivariate posterior probability density functions for the model parameters are generated by incorporating the forward model evaluation and comparison to geodetic observations into a Gaussian copula framework. In particular, we find that no afterslip occurred on the up-dip portion of the fault beneath Kathmandu. A combination of viscoelastic relaxation and down-dip afterslip is required to fit the data, taking into account the bi-directional coupling

  2. Implications of next generation attenuation ground motion prediction equations for site coefficients used in earthquake resistant design

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2014-01-01

    Proposals are developed to update Tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures, published as American Society of Civil Engineers Structural Engineering Institute standard 7-10 (ASCE/SEI 7-10). The updates are mean next generation attenuation (NGA) site coefficients inferred directly from the four NGA ground motion prediction equations used to derive the maximum considered earthquake response maps adopted in ASCE/SEI 7-10. The proposals include the recommendation to use straight-line interpolation to infer site coefficients at intermediate values of vS30 (average shear-wave velocity to 30-m depth). The NGA coefficients are shown to agree well with the adopted site coefficients at low levels of input motion (0.1 g) and with those observed from the Loma Prieta earthquake. For higher levels of input motion, the majority of the adopted values are within the 95% epistemic-uncertainty limits implied by the NGA estimates, the exceptions being the mid-period site coefficient, Fv, for site class D and the short-period coefficient, Fa, for site class C, both of which are slightly less than the corresponding 95% limit. The NGA database shows that the median vS30 value of 913 m/s for site class B is more typical than 760 m/s as a value to characterize firm to hard rock sites as the uniform ground condition for future maximum considered earthquake response ground motion estimates. Future updates of the NGA ground motion prediction equations can be incorporated easily into future adjustments of the adopted site coefficients using the procedures presented herein.
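
The recommended straight-line interpolation of site coefficients at intermediate vS30 values can be sketched directly. The table below uses hypothetical Fa values keyed to class-boundary velocities, not the actual ASCE/SEI 7-10 entries:

```python
def interpolate_site_coefficient(vs30, table):
    """Straight-line interpolation of a site coefficient at intermediate vs30.

    `table` maps tabulated vs30 values (m/s) to the coefficient (Fa or Fv);
    values below/above the tabulated range are clamped to the end members.
    """
    points = sorted(table.items())
    if vs30 <= points[0][0]:
        return points[0][1]
    if vs30 >= points[-1][0]:
        return points[-1][1]
    for (v0, f0), (v1, f1) in zip(points, points[1:]):
        if v0 <= vs30 <= v1:
            return f0 + (f1 - f0) * (vs30 - v0) / (v1 - v0)

# Hypothetical short-period coefficients keyed by class-boundary vs30 (m/s):
fa_table = {180: 1.6, 360: 1.2, 760: 1.0}
fa = interpolate_site_coefficient(560, fa_table)
```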

  3. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. In the past two decades especially, huge earthquakes have hit many countries. Effective earthquake forecasting (of time, location, and magnitude) has therefore become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based and non-catalog-based approaches. Thanks to the rapid development of statistical seismology over the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory, and in most cases the precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the events themselves (self-exciting) and by all external factors (mutually exciting) in the past. In essence, the conditional intensity is a time-varying Poisson rate λ(t) composed of the background rate, a self-exciting term (information from past seismic events), and an external excitation term (information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are developing a new earthquake forecast model that combines the two approaches.
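
The combined conditional intensity described above can be written as a background rate plus self-exciting and mutually exciting sums. A minimal sketch with exponential kernels (the kernel shapes and all parameter values here are illustrative; Ogata-Utsu-style models estimate them by maximum likelihood):

```python
import math

def hawkes_intensity(t, event_times, external_times, mu=0.1,
                     alpha=0.5, beta=1.0, gamma=0.3, delta=0.5):
    """Conditional intensity of a self- and mutually exciting point process.

    lambda(t) = mu
              + sum over past events ti of     alpha * exp(-beta  * (t - ti))
              + sum over past observations tj of gamma * exp(-delta * (t - tj))
    """
    lam = mu
    lam += sum(alpha * math.exp(-beta * (t - ti)) for ti in event_times if ti < t)
    lam += sum(gamma * math.exp(-delta * (t - tj)) for tj in external_times if tj < t)
    return lam

# Intensity shortly after two past events and one non-seismic observation:
lam = hawkes_intensity(5.0, [1.0, 4.0], [4.5])
```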

  4. Earthquake processes in the Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, region 1954-1959

    NASA Astrophysics Data System (ADS)

    Doser, Diane I.

    1986-11-01

    The 1954 Rainbow Mountain-Fairview Peak-Dixie Valley, Nevada, sequence produced the most extensive pattern of surface faults in the intermountain region in historic time. Five earthquakes of M>6.0 occurred during the first 6 months of the sequence, including the December 16, 1954, Fairview Peak (M = 7.1) and Dixie Valley (M = 6.8) earthquakes. Three 5.5≤M≤6.5 earthquakes occurred in the region in 1959, but none exhibited surface faulting. The results of the modeling suggest that the M>6.5 earthquakes of this sequence are complex events best fit by multiple source-time functions. Although the observed surface displacements for the July and August 1954 events showed only dip-slip motion, the fault plane solutions and waveform modeling suggest the earthquakes had significant components of right-lateral strike-slip motion (rakes of -135° to -145°). All of the earthquakes occurred along high-angle faults with dips of 40° to 70°. Seismic moments for individual subevents of the sequence range from 8.0 × 10^17 to 2.5 × 10^19 N m. Stress drops for the subevents, including the Fairview Peak subevents, were between 0.7 and 6.0 MPa.
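
Stress drops like those quoted are conventionally obtained from the seismic moment and a source-dimension estimate via the circular-crack relation Δσ = (7/16) M0 / r³ (Eshelby). A sketch using the smallest subevent moment from the abstract and an assumed 5 km radius (the radius is illustrative, not a value from the study):

```python
def circular_crack_stress_drop(moment_nm, radius_m):
    """Static stress drop (Pa) of a circular crack:
    delta_sigma = (7/16) * M0 / r^3 (Eshelby, 1957)."""
    return 7.0 * moment_nm / (16.0 * radius_m ** 3)

# M0 = 8.0e17 N m with an assumed 5 km source radius gives 2.8 MPa,
# inside the 0.7-6.0 MPa range quoted for the subevents:
drop_mpa = circular_crack_stress_drop(8.0e17, 5000.0) / 1e6
```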

  5. A new source process for evolving repetitious earthquakes at Ngauruhoe volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Jolly, A. D.; Neuberg, J.; Jousset, P.; Sherburn, S.

    2012-02-01

    Since early 2005, Ngauruhoe volcano has produced repeating low-frequency earthquakes with evolving waveforms and spectral features which become progressively enriched in higher frequency energy during the period 2005 to 2009, with the trend reversing after that time. The earthquakes also show a seasonal cycle since January 2006, with peak numbers of events occurring in the spring and summer period and lower numbers of events at other times. We explain these patterns by the excitation of a shallow two-phase water/gas or water/steam cavity having temporal variations in volume fraction of bubbles. Such variations in two-phase systems are known to produce a large range of acoustic velocities (2-300 m/s) and corresponding changes in impedance contrast. We suggest that an increasing bubble volume fraction is caused by progressive heating of melt water in the resonant cavity system which, in turn, promotes the scattering excitation of higher frequencies, explaining both spectral shift and seasonal dependence. We have conducted a constrained waveform inversion and grid search for moment, position and source geometry for the onset of two example earthquakes occurring 17 and 19 January 2008, a time when events showed a frequency enrichment episode occurring over a period of a few days. The inversion and associated error analysis, in conjunction with an earthquake phase analysis show that the two earthquakes represent an excitation of a single source position and geometry. The observed spectral changes from a stationary earthquake source and geometry suggest that an evolution in both near source resonance and scattering is occurring over periods from days to months.
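
The wide range of acoustic velocities quoted for two-phase systems follows from Wood's mixture relation, in which even a small gas volume fraction collapses the sound speed. A sketch with rough water/steam properties at low pressure (the moduli and densities are illustrative values, not from the study):

```python
import math

def wood_sound_speed(gas_fraction, rho_l=1000.0, k_l=2.2e9,
                     rho_g=1.0, k_g=1.4e5):
    """Sound speed (m/s) in a bubbly liquid from Wood's mixture relation.

    Effective density and compressibility are volume-fraction averages of
    the liquid and gas phases; c = sqrt(K_eff / rho_eff). Defaults roughly
    approximate water and steam near atmospheric pressure.
    """
    rho = gas_fraction * rho_g + (1.0 - gas_fraction) * rho_l
    compressibility = gas_fraction / k_g + (1.0 - gas_fraction) / k_l
    return math.sqrt(1.0 / (compressibility * rho))

# Pure water is ~1480 m/s, but a 50% bubble fraction drops the cavity's
# sound speed into the tens of m/s, within the 2-300 m/s range cited:
c_mix = wood_sound_speed(0.5)
```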

  6. Unusually large earthquakes inferred from tsunami deposits along the Kuril trench

    USGS Publications Warehouse

    Nanayama, F.; Satake, K.; Furukawa, R.; Shimokawa, K.; Atwater, B.F.; Shigeno, K.; Yamaki, S.

    2003-01-01

    The Pacific plate converges with northeastern Eurasia at a rate of 8-9 m per century along the Kamchatka, Kuril and Japan trenches. Along the southern Kuril trench, which faces the Japanese island of Hokkaido, this fast subduction has recurrently generated earthquakes with magnitudes of up to ~8 over the past two centuries. These historical events, on rupture segments 100-200 km long, have been considered characteristic of Hokkaido's plate-boundary earthquakes. But here we use deposits of prehistoric tsunamis to infer the infrequent occurrence of larger earthquakes generated from longer ruptures. Many of these tsunami deposits form sheets of sand that extend kilometres inland from the deposits of historical tsunamis. Stratigraphic series of extensive sand sheets, intercalated with dated volcanic-ash layers, show that such unusually large tsunamis occurred about every 500 years on average over the past 2,000-7,000 years, most recently ~350 years ago. Numerical simulations of these tsunamis are best explained by earthquakes that individually rupture multiple segments along the southern Kuril trench. We infer that such multi-segment earthquakes persistently recur among a larger number of single-segment events.

  7. A new 1649-1884 catalog of destructive earthquakes near Tokyo and implications for the long-term seismic process

    USGS Publications Warehouse

    Grunewald, E.D.; Stein, R.S.

    2006-01-01

    In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimates of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1, with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
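
The time-averaged probability quoted at the end follows from Poisson occurrence at the catalog-derived rate, with the truncated Gutenberg-Richter relation supplying the rate at any magnitude. A sketch (the a-value is an illustrative placeholder; b = 0.96 and the maximum magnitude 8.4 follow the abstract):

```python
import math

def poisson_probability(annual_rate, window_years):
    """Probability of at least one event in a window, Poisson occurrence."""
    return 1.0 - math.exp(-annual_rate * window_years)

def gr_annual_rate(m, a=4.0, b=0.96, m_max=8.4):
    """Cumulative annual rate of events >= m from a truncated
    Gutenberg-Richter relation: 10^(a - b*m) - 10^(a - b*m_max)."""
    if m >= m_max:
        return 0.0
    return 10 ** (a - b * m) - 10 ** (a - b * m_max)

# An annual rate of about 0.003 for M >= 7.9 gives a 30-year probability
# of roughly 8.6%, consistent with the 7-11% quoted:
p30 = poisson_probability(0.003, 30.0)
```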

  8. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases, so a thorough study to compare and test these methods is needed. We have written a GUI program that puts the three methods together so that the same data can conveniently be processed with each method and the results compared. We then use all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw 7.9 and 2011 Tohoku-Oki Mw 9.0 earthquakes, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy, and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, and calculation window length. In general, back projection makes it possible to obtain a good result in a very short time using fewer than 20 high-quality traces with a proper station distribution, but the swimming artifact can be significant; some measures, for instance combining global seismic data, can help mitigate it. MUSIC back projection needs relatively more data to obtain a better and more stable result, and hence much more time, since its runtime grows markedly faster than that of back projection as the station number increases. Compressive sensing deals more effectively with multiple sources in the same time window but takes the longest time, because it repeatedly solves matrix inversion problems. The resolution of all the methods is complicated and depends on many factors.
An important one is the grid size, which in turn influences
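The time-domain beamforming that these comparisons take as their baseline can be sketched in a few lines: for each candidate grid point, shift every station trace back by its predicted travel time and measure the stacked power. The sketch below uses a hypothetical 2-D station geometry and a uniform velocity instead of a 1-D Earth travel-time model; `beam_power` and all parameter values are illustrative, not from the abstract.

```python
import numpy as np

def beam_power(traces, station_xy, grid_xy, v, dt):
    """Stack traces along the predicted moveout for each candidate grid point."""
    npts = traces.shape[1]
    power = np.zeros(len(grid_xy))
    for k, g in enumerate(grid_xy):
        stack = np.zeros(npts)
        for trace, s in zip(traces, station_xy):
            delay = np.linalg.norm(g - s) / v   # predicted travel time (s)
            shift = int(round(delay / dt))      # ... in samples
            if shift < npts:
                stack[:npts - shift] += trace[shift:]  # align, then stack
        power[k] = np.sum(stack ** 2)           # coherent energy at this point
    return power

# Synthetic test: one impulsive source recorded at 8 stations (km, km/s).
rng = np.random.default_rng(0)
v, dt, npts = 5.0, 0.05, 800
stations = rng.uniform(-100.0, 100.0, (8, 2))
source = np.array([20.0, -10.0])
t = np.arange(npts) * dt
traces = np.array([
    np.exp(-(((t - np.linalg.norm(source - s) / v - 2.0) / 0.2) ** 2))
    for s in stations
])
grid = np.array([[x, y] for x in range(-40, 41, 10) for y in range(-40, 41, 10)])
best = grid[np.argmax(beam_power(traces, stations, grid, v, dt))]
# best recovers the true source location on this coarse grid
```

Because the stack is coherent only at the true source, beam power peaks there; the swimming artifact discussed above arises when this alignment degrades with frequency and station geometry.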

  9. Nucleation speed limit on remote fluid-induced earthquakes.

    PubMed

    Parsons, Tom; Malagnini, Luca; Akinci, Aybige

    2017-08-01

    Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth's crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes.

  10. Nucleation speed limit on remote fluid induced earthquakes

    USGS Publications Warehouse

    Parsons, Thomas E.; Akinci, Aybige; Malagnini, Luca

    2017-01-01

    Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes.

  11. MyShake - A smartphone app to detect earthquake

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

    We designed an Android app that harnesses the accelerometers in personal smartphones to record earthquake-shaking data for research, hazard information, and warnings. The app can distinguish earthquake shaking from daily human activities based on the different patterns of motion behind them. It can also be triggered by a traditional earthquake early warning (EEW) system to record for a certain amount of time to collect earthquake data. When the app is triggered by earthquake-like movements, it sends the trigger information, containing the time and location of the trigger, back to our server; at the same time it stores the waveform data locally on the phone and uploads it to our server later. Trigger information from multiple phones is processed in real time on the server to find coherent signals that confirm earthquakes. The app therefore provides the basis for a smartphone seismic network that can detect earthquakes and even provide warnings. A planned public roll-out of MyShake could collect millions of seismic recordings of large earthquakes in many regions around the world.

  12. Multiple seismogenic processes for high-frequency earthquakes at Katmai National Park, Alaska: Evidence from stress tensor inversions of fault-plane solutions

    USGS Publications Warehouse

    Moran, S.C.

    2003-01-01

    The volcanological significance of seismicity within Katmai National Park has been debated since the first seismograph was installed in 1963, in part because Katmai seismicity consists almost entirely of high-frequency earthquakes that can be caused by a wide range of processes. I investigate this issue by determining 140 well-constrained first-motion fault-plane solutions for shallow (depth < 9 km) earthquakes occurring between 1995 and 2001 and inverting these solutions for the stress tensor in different regions within the park. Earthquakes removed by several kilometers from the volcanic axis occur in a stress field characterized by horizontally oriented σ1 and σ3 axes, with σ1 rotated slightly (12°) relative to the NUVEL-1A subduction vector, indicating that these earthquakes are occurring in response to regional tectonic forces. On the other hand, stress tensors for earthquake clusters beneath several Katmai volcanoes have vertically oriented σ1 axes, indicating that these events are occurring in response to local, not regional, processes. At Martin-Mageik, vertically oriented σ1 is most consistent with failure under edifice loading conditions in conjunction with localized pore-pressure increases associated with hydrothermal circulation cells. At Trident-Novarupta, it is consistent with a number of possible models, including occurrence along fractures formed during the 1912 eruption that now serve as horizontal conduits for fluids and/or volatiles migrating from nearby degassing and cooling magma bodies. At Mount Katmai, it is most consistent with continued seismicity along ring-fracture systems created in the 1912 eruption, perhaps enhanced by circulating hydrothermal fluids and/or seepage from the caldera-filling lake.

  13. Fragmentation of wall rock garnets during deep crustal earthquakes

    PubMed Central

    Austrheim, Håkon; Dunkel, Kristina G.; Plümper, Oliver; Ildefonse, Benoit; Liu, Yang; Jamtveit, Bjørn

    2017-01-01

    Fractures and faults riddle the Earth’s crust on all scales, and the deformation associated with them is presumed to have had significant effects on its petrological and structural evolution. However, despite the abundance of directly observable earthquake activity, unequivocal evidence for seismic slip rates along ancient faults is rare and usually related to frictional melting and the formation of pseudotachylites. We report novel microstructures from garnet crystals in the immediate vicinity of seismic slip planes that transected lower crustal granulites during intermediate-depth earthquakes in the Bergen Arcs area, western Norway, some 420 million years ago. Seismic loading caused massive dislocation formations and fragmentation of wall rock garnets. Microfracturing and the injection of sulfide melts occurred during an early stage of loading. Subsequent dilation caused pervasive transport of fluids into the garnets along a network of microfractures, dislocations, and subgrain and grain boundaries, leading to the growth of abundant mineral inclusions inside the fragmented garnets. Recrystallization by grain boundary migration closed most of the pores and fractures generated by the seismic event. This wall rock alteration represents the initial stages of an earthquake-triggered metamorphic transformation process that ultimately led to reworking of the lower crust on a regional scale. PMID:28261660

  14. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source-imaging techniques based on conventional low-frequency finite-fault inversions. Seismic array processing is an alternative source-imaging technique that employs the higher-frequency content of earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection has provided key observations of previous large earthquakes, standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between seismic observations and earthquake simulations. MUSIC is a high-resolution method that takes advantage of higher-order signal statistics. The method has not yet been widely used in seismology because of the nonstationary and incoherent nature of seismic signals. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow recent large earthquakes to be imaged in finer detail, giving rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors that relate to material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The

  15. Analysis of the tsunami generated by the MW 7.8 1906 San Francisco earthquake

    USGS Publications Warehouse

    Geist, E.L.; Zoback, M.L.

    1999-01-01

    We examine possible sources of a small tsunami produced by the 1906 San Francisco earthquake, recorded at a single tide gauge station situated at the opening to San Francisco Bay. Coseismic vertical displacement fields were calculated using elastic dislocation theory for geodetically constrained horizontal slip along a variety of offshore fault geometries. Propagation of the ensuing tsunami was calculated using a shallow-water hydrodynamic model that takes into account the effects of bottom friction. The observed amplitude and negative pulse of the first arrival are shown to be inconsistent with small vertical displacements (~4-6 cm) arising from pure horizontal slip along a continuous right bend in the San Andreas fault offshore. The primary source region of the tsunami was most likely a recently recognized 3 km right step in the San Andreas fault that is also the probable epicentral region for the 1906 earthquake. Tsunami models that include the 3 km right step with pure horizontal slip match the arrival time of the tsunami, but underestimate the amplitude of the negative first-arrival pulse. Both the amplitude and time of the first arrival are adequately matched by using a rupture geometry similar to that defined for the 1995 MW (moment magnitude) 6.9 Kobe earthquake: i.e., fault segments dipping toward each other within the stepover region (83° dip, intersecting at 10 km depth) and a small component of slip in the dip direction (rake = -172°). Analysis of the tsunami provides confirming evidence that the 1906 San Francisco earthquake initiated at a right step in a right-lateral fault and propagated bilaterally, suggesting a rupture initiation mechanism similar to that for the 1995 Kobe earthquake.

  16. Impact of earthquakes and their secondary environmental effects on public health

    NASA Astrophysics Data System (ADS)

    Mavroulis, Spyridon; Mavrouli, Maria; Lekkas, Efthymios; Tsakris, Athanassios

    2017-04-01

    Earthquakes are among the most impressive geological processes, with destructive effects on humans, nature and infrastructure. Secondary earthquake environmental effects (EEE) are induced by the ground shaking and are classified into ground cracks, slope movements, dust clouds, liquefaction, hydrological anomalies, tsunamis, tree shaking and jumping stones. Infectious diseases (ID) emerging during the post-earthquake period are considered secondary earthquake effects on public health. This study involved an extensive and systematic literature review of 121 research publications related to the public health impact of 28 earthquakes from 1980 to 2015 with moment magnitude (Mw) from 6.1 to 9.2 and their secondary EEE, including landslides, liquefaction and tsunamis, generated in various tectonic environments (extensional, transform, compressional) around the world (21 events in Asia, 5 in America and one each in Oceania and Europe). The inclusion criteria were the literature type, comprising journal articles and official reports; the natural disaster type, including earthquakes and their secondary EEE (landslides, liquefaction, tsunamis); the population type, including humans; and the outcome measures, characterized by disease incidence increase. The potential post-earthquake ID are classified into 14 groups including respiratory (detected after 15 of 28 earthquakes, 53.57%), water-borne (15, 53.57%), skin (8, 28.57%), vector-borne (8, 28.57%), wound-related (6, 21.43%), blood-borne (4, 14.29%), pulmonary (4, 14.29%), fecal-oral (3, 10.71%), food-borne (3, 10.71%), fungal (3, 10.71%), parasitic (3, 10.71%), eye (1, 3.57%), mite-borne (1, 3.57%) and soil-borne (1, 3.57%) infections. Based on age and gender data available for 15 earthquakes, the most vulnerable population groups are males, young children (age ≤ 10 years) and older adults (age ≥ 65 years). Cholera, pneumonia and tetanus are the deadliest post-earthquake ID. The risk factors leading not only to disease

  17. Earthquakes and strain in subhorizontal slabs

    NASA Astrophysics Data System (ADS)

    Brudzinski, Michael R.; Chen, Wang-Ping

    2005-08-01

    Using an extensive database of fault plane solutions and precise locations of hypocenters, we show that the classic patterns of downdip extension (DDE) or downdip compression (DDC) in subduction zones deteriorate when the dip of the slab is less than about 20°. This result is depth-independent, demonstrated by both intermediate-focus (depths from 70 to 300 km) and deep-focus (depths greater than 300 km) earthquakes. The absence of pattern in seismic strain in subhorizontal slabs also occurs locally over scales of about 10 km, as evident from a detailed analysis of a large (Mw 7.1) earthquake sequence beneath Fiji. Following the paradigm that a uniform strain of DDE/DDC results from sinking of the cold, dense slab as it encounters resistance from the highly viscous mantle at depth, breakdown of DDE/DDC in subhorizontal slabs reflects waning negative buoyancy ("slab pull") in the downdip direction. Our results place a constraint on the magnitude of slab pull that is required to dominate over localized sources of stress and to align seismic strain release in dipping slabs. Under the condition of a vanishing slab pull, eliminating the only obvious source of regional stress, the abundance of earthquakes in subhorizontal slabs indicates that a locally variable source of stress is both necessary and sufficient to sustain the accumulation of elastic strain required to generate intermediate- and deep-focus seismicity. Evidence is growing that the process of seismogenesis under high pressures, including localized sources of stress, is tied to the presence of petrologic anomalies.

  18. Real-time earthquake data feasible

    NASA Astrophysics Data System (ADS)

    Bush, Susan

    Scientists agree that early warning devices and monitoring of both Hurricane Hugo and the Mt. Pinatubo volcanic eruption saved thousands of lives. What would it take to develop this sort of early warning and monitoring system for earthquake activity?Not all that much, claims a panel assigned to study the feasibility, costs, and technology needed to establish a real-time earthquake monitoring (RTEM) system. The panel, drafted by the National Academy of Science's Committee on Seismology, has presented its findings in Real-Time Earthquake Monitoring. The recently released report states that “present technology is entirely capable of recording and processing data so as to provide real-time information, enabling people to mitigate somewhat the earthquake disaster.” RTEM systems would consist of two parts—an early warning system that would give a few seconds warning before severe shaking, and immediate postquake information within minutes of the quake that would give actual measurements of the magnitude. At this time, however, this type of warning system has not been addressed at the national level for the United States and is not included in the National Earthquake Hazard Reduction Program, according to the report.

  19. Earthquake Analysis (EA) Software for The Earthquake Observatories

    NASA Astrophysics Data System (ADS)

    Yanik, K.; Tezel, T.

    2009-04-01

    There are many software packages that can be used to observe seismic signals and locate earthquakes, but some of them are commercial and come with technical support. For this reason, many seismological observatories have developed, and use, their own seismological software packages suited to their networks. In this study, we introduce our software, which can read seismic signals, process them, and locate earthquakes. This software is used by the General Directorate of Disaster Affairs Earthquake Research Department Seismology Division (hereafter ERD) and will be improved according to new requirements. The ERD network consists of 87 seismic stations: 63 equipped with 24-bit digital Guralp CMG-3T seismometers, 16 with analogue short-period S-13 Geometrics seismometers, and 8 with 24-bit digital short-period S-13j-DR-24 Geometrics seismometers. Data are transmitted by satellite from the broadband stations, whereas leased lines are used for the short-period stations. The daily data archive capacity is 4 GB. In large networks, it is very important to observe seismic signals and locate earthquakes as soon as possible; this is achievable if the software is developed with the network's properties in mind. When we started to develop software for a large network such as ours, we identified several requirements: all known seismic data formats should be readable without any conversion step; only selected stations should be observable, directly on the map; seismic files should be added with an import command; P- and S-phase readings should be linked to location solutions; and data should be stored in a database that users enter with a user name and password. In this way we can prevent data disorder and repeated phase readings. Storing data in a database has many advantages: easy access to the data from anywhere over Ethernet, publication of bulletins and catalogues on the website, and easy sending of short messages (SMS) and e

  20. Nature of Pre-Earthquake Phenomena and their Effects on Living Organisms

    PubMed Central

    Freund, Friedemann; Stolc, Viktor

    2013-01-01

    Simple Summary Earthquakes are invariably preceded by a period when stresses increase deep in the Earth. Animals appear to be able to sense impending seismic events. During build-up of stress, electronic charge carriers are activated deep below, called positive holes. Positive holes have unusual properties: they can travel fast and far into and through the surrounding rocks. As they flow, they generate ultralow frequency electromagnetic waves. When they arrive at the Earth surface, they can ionize the air. When they flow into water, they oxidize it to hydrogen peroxides. All these physical and chemical processes can have noticeable effects on animals. Abstract Earthquakes occur when tectonic stresses build up deep in the Earth before catastrophic rupture. During the build-up of stress, processes that occur in the crustal rocks lead to the activation of highly mobile electronic charge carriers. These charge carriers are able to flow out of the stressed rock volume into surrounding rocks. Such outflow constitutes an electric current, which generates electromagnetic (EM) signals. If the outflow occurs in bursts, it will lead to short EM pulses. If the outflow is continuous, the currents may fluctuate, generating EM emissions over a wide frequency range. Only ultralow and extremely low frequency (ULF/ELF) waves travel through rock and can reach the Earth surface. The outflowing charge carriers are (i) positively charged and (ii) highly oxidizing. When they arrive at the Earth surface from below, they build up microscopic electric fields, strong enough to field-ionize air molecules. As a result, the air above the epicentral region of an impending major earthquake often becomes laden with positive airborne ions. Medical research has long shown that positive airborne ions cause changes in stress hormone levels in animals and humans. In addition to the ULF/ELF emissions, positive airborne ions can cause unusual reactions among animals. When the charge carriers flow into

  1. Comparative study of two tsunamigenic earthquakes in the Solomon Islands: 2015 Mw 7.0 normal-fault and 2013 Santa Cruz Mw 8.0 megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, Mohammad; Harada, Tomoya; Satake, Kenji; Ishibe, Takeo; Gusman, Aditya Riadi

    2016-05-01

    The July 2015 Mw 7.0 Solomon Islands tsunamigenic earthquake occurred ~40 km north of the February 2013 Mw 8.0 Santa Cruz earthquake. The proximity of the two epicenters provided a unique opportunity for a comparative study of their source mechanisms and tsunami generation. The 2013 earthquake was an interplate event with a thrust focal mechanism at a depth of 30 km, while the 2015 event was a normal-fault earthquake occurring at a shallow depth of 10 km in the overriding Pacific Plate. A combined use of tsunami and teleseismic data from the 2015 event revealed a north-dipping fault plane and a rupture velocity of 3.6 km/s. Stress transfer analysis revealed that the 2015 earthquake occurred in a region of increased Coulomb stress following the 2013 earthquake. Spectral deconvolution, using the 2015 tsunami as an empirical Green's function, indicated source periods of 10 and 22 min for the 2013 Santa Cruz tsunami.

  2. Variations in rupture process with recurrence interval in a repeated small earthquake

    USGS Publications Warehouse

    Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris

    1994-01-01

    In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.
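The healing inference above rests on the laboratory observation that static friction grows roughly with the logarithm of stationary-contact time. A minimal sketch of that Dieterich-type aging relation (the function name and the values of `mu0`, `b`, and `t_c` are illustrative, not from this paper):

```python
import numpy as np

def static_friction(hold_time, mu0=0.6, b=0.01, t_c=1.0):
    """Static friction coefficient after a stationary hold of hold_time seconds.

    Log-time healing: mu = mu0 + b * ln(1 + t / t_c), with illustrative
    laboratory-scale parameters.
    """
    return mu0 + b * np.log(1.0 + hold_time / t_c)

# Recurrence intervals from roughly a day to a few years, in seconds.
recurrence = np.array([1e5, 1e6, 1e7, 1e8])
mu = static_friction(recurrence)
# Longer recurrence -> higher static friction, qualitatively consistent
# with the larger seismic moments observed for the longest intervals.
```

Each decade of hold time adds the same increment b·ln(10) to friction, which is why the observed moment trend with recurrence interval is weak but systematic.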

  3. Seismic Moment, Seismic Energy, and Source Duration of Slow Earthquakes: Application of Brownian slow earthquake model to three major subduction zones

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Maury, Julie

    2018-04-01

    Tectonic tremors, low-frequency earthquakes, very low-frequency earthquakes, and slow slip events are all regarded as components of broadband slow earthquakes, which can be modeled as a stochastic process using Brownian motion. Here we show that the Brownian slow earthquake model provides theoretical relationships among the seismic moment, seismic energy, and source duration of slow earthquakes, and that this model explains various estimates of these quantities in three major subduction zones: Japan, Cascadia, and Mexico. While the estimates for these three regions are similar at seismological frequencies, the seismic moment rates are significantly different in geodetic observations. This difference is ascribed to the difference in the characteristic times of the Brownian slow earthquake model, which is controlled by the width of the source area. We also show that the model can include non-Gaussian fluctuations, which better explain recent findings of a near-constant source duration for low-frequency earthquake families.
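For context, the moment-duration contrast that such models must reproduce is usually summarized by the following scaling laws (a background result from the slow-earthquake literature, e.g. Ide et al., 2007; exponents are approximate, not a claim of this abstract):

```latex
M_0 \propto T \quad \text{(slow earthquakes)},
\qquad
M_0 \propto T^{3} \quad \text{(regular earthquakes)}
```

The near-linear moment-duration trend for slow events is what a diffusive (Brownian) source process naturally produces.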

  4. Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake

    NASA Astrophysics Data System (ADS)

    Shebalin, P.; Baranov, S.

    2017-10-01

    We study the aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare the results with those obtained for the other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and corresponding periods of data completeness and consistency. To avoid overestimating the expected rates of large aftershocks, we presume a break in slope of the magnitude-frequency relation in the aftershock sequences and compare two models, with and without the break in slope. Comparing these estimates to the actual numbers of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that the long-delayed aftershocks may reflect larger-scale processes, including the interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the EAST algorithm, which studies the statistics of early aftershocks, to the case of secondary aftershocks within the aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect long-delayed aftershocks in advance, in both the time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.

  5. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
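The counting procedure described above can be sketched concretely: tally small earthquakes between successive large ones in the reference region, then rank the count of the current open cycle against that historical distribution. The sketch below uses a synthetic Gutenberg-Richter catalog; the function name `nowcast_score` and the magnitude thresholds are illustrative, not from the abstract.

```python
import numpy as np

def nowcast_score(magnitudes, m_small=3.3, m_large=6.0):
    """Earthquake potential score in [0, 1]: the fraction of historical
    large-event cycles that contained fewer small events than the cycle
    currently open (i.e., since the last large earthquake)."""
    counts, current = [], 0
    for m in magnitudes:            # catalog assumed in time order
        if m >= m_large:
            counts.append(current)  # a large event closes a cycle
            current = 0
        elif m >= m_small:
            current += 1            # count small events in the open cycle
    counts = np.array(counts)
    return float(np.mean(counts < current))  # empirical CDF at current count

# Synthetic Gutenberg-Richter catalog (b = 1), magnitudes 3.3 and above.
rng = np.random.default_rng(1)
mags = 3.3 + rng.exponential(1.0 / np.log(10.0), size=20000)
score = nowcast_score(mags)
```

A score near 1 means the region has accumulated more small-event "natural time" than most historical cycles, i.e., it is late in the cycle of large recurring earthquakes.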

  6. Catalog of earthquakes along the San Andreas fault system in Central California, July-September 1972

    USGS Publications Warehouse

    Wesson, R.L.; Meagher, K.L.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period July - September, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972b, c, d). Catalogs for the first and second quarters of 1972 have been prepared by Wesson and others (1972a, b). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1254 earthquakes in Central California. Arrival times at 129 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 104 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it by describing the seismicity of a portion of central California in much greater detail.

  7. Long-Period Ground Motion due to Near-Shear Earthquake Ruptures

    NASA Astrophysics Data System (ADS)

    Koketsu, K.; Yokota, Y.; Hikima, K.

    2010-12-01

Long-period ground motion has become an increasingly important consideration because of the recent rapid increase in the number of large-scale structures, such as high-rise buildings and large oil storage tanks. Large subduction-zone earthquakes and moderate to large crustal earthquakes can generate far-source long-period ground motions in distant sedimentary basins with the help of path effects. Near-fault long-period ground motions are generated, for the most part, by the source effects of forward rupture directivity (Koketsu and Miyake, 2008). This rupture directivity effect is greatest in the direction of fault rupture when the rupture velocity is nearly equal to the shear-wave velocity around the source fault (Dunham and Archuleta, 2005). Such near-shear rupture was found to have occurred during the 2008 Mw 7.9 Wenchuan earthquake at the eastern edge of the Tibetan plateau (Koketsu et al., 2010). The variance of waveform residuals in a joint inversion of teleseismic and strong motion data was smallest when we adopted a rupture velocity of 2.8 km/s, which is close to the shear-wave velocity of 2.6 km/s around the hypocenter. We also found near-shear rupture during the 2010 Mw 6.9 Yushu earthquake (Yokota et al., 2010). The optimum rupture velocity for an inversion of teleseismic data is 3.5 km/s, which is almost equal to the shear-wave velocity around the hypocenter. Since supershear rupture was also found during the 2001 Mw 7.8 Central Kunlun earthquake (Bouchon and Vallee, 2003), such fast earthquake rupture may be a characteristic of the eastern Tibetan plateau. The severe damage in Yingxiu and Beichuan from the 2008 Wenchuan earthquake, and the heavier-than-expected damage in the county seat of Yushu from the moderate Yushu earthquake, can be attributed to the maximum rupture directivity effect in the rupture direction due to near-shear earthquake ruptures.

  8. Gravitational body forces focus North American intraplate earthquakes

    USGS Publications Warehouse

    Levandowski, William Brower; Zellman, Mark; Briggs, Richard

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard.

  9. Gravitational body forces focus North American intraplate earthquakes

    PubMed Central

    Levandowski, Will; Zellman, Mark; Briggs, Rich

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard. PMID:28211459

  10. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
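The time-dependent probabilities that OEF disseminates are typically computed from empirical clustering models. As a rough sketch of the idea, the snippet below evaluates the probability of at least one aftershock above a threshold magnitude in a time window, using a Reasenberg-Jones style rate; all parameter values are generic illustrations, not values from this paper:

```python
import numpy as np

# Reasenberg-Jones style aftershock rate (events/day at or above M_min):
#   rate(t) = 10**(a + b*(M_main - M_min)) / (t + c)**p
# The constants below are illustrative placeholders, not values from the paper.
a, b, p, c = -1.67, 0.91, 1.08, 0.05
M_main, M_min = 7.0, 5.0

def prob_in_window(t0, t1, n=10_000):
    """P(at least one M >= M_min event in [t0, t1] days) = 1 - exp(-N),
    where N is the expected number of events (midpoint-rule integral)."""
    dt = (t1 - t0) / n
    t = t0 + dt * (np.arange(n) + 0.5)
    rate = 10.0 ** (a + b * (M_main - M_min)) / (t + c) ** p
    return 1.0 - np.exp(-np.sum(rate) * dt)

# Probabilities rise and fall with nearby activity: high just after a
# mainshock, decaying toward the long-term background weeks later.
p_day1 = prob_in_window(0.0, 1.0)
p_month = prob_in_window(30.0, 31.0)
print(round(p_day1, 3), round(p_month, 3))
```

The point of the sketch is only the shape of the calculation: short-term rupture probabilities decay with time since the triggering event, which is why OEF forecasts must be updated continually.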

  11. The earthquake disaster risk characteristic and the problem in the earthquake emergency rescue of mountainous southwestern Sichuan

    NASA Astrophysics Data System (ADS)

    Yuan, S.; Xin, C.; Ying, Z.

    2016-12-01

In recent years, earthquake disasters have occurred frequently in mainland China, and the secondary disasters they trigger are especially serious in mountainous regions. Terrain and geological conditions greatly increase the difficulty of earthquake emergency rescue work and place heavy demands on rescue forces. Earthquake emergency rescue in mountainous regions has nevertheless received little study, and it remains poorly understood whether existing equipment can meet the actual needs of local rescue operations. This paper discusses and addresses these problems. Through field research in the mountainous prefectures of Ganzi and Liangshan in Sichuan, we investigated the earthquake emergency response process and the deployment of rescue forces after an earthquake, and we collected and collated baseline data on local rescue forces. Expert consultation and statistical analysis of these data revealed two main problems. The first concerns the local rescue forces themselves: they are poorly equipped and lack knowledge of medical aid and of identifying building structures, and no sound financial investment mechanism has been established to support them or to update and maintain rescue equipment. The second arises during the rescue operation itself: in the complicated geologic structure of mountainous regions, traffic and communications may be interrupted by landslides and debris flows after an earthquake, outside rescue forces may not arrive in time, rescue equipment must be transported by hand, and incomplete disaster information leads to unreasonable deployment of the local rescue forces. We therefore argue that local governments should analyze the characteristics of earthquake disasters in mountainous regions and study how to improve their earthquake emergency rescue capability, for example by strengthening and regulating the rescue force structure, enhancing the skills and knowledge of rescuers, and training rescue workers

  12. Lisbon 1755, a multiple-rupture earthquake

    NASA Astrophysics Data System (ADS)

    Fonseca, J. F. B. D.

    2017-12-01

    Lisbon earthquake. This may reflect the very long period of surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.

  13. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  14. Two critical tests for the Critical Point earthquake

    NASA Astrophysics Data System (ADS)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle’s end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the Critical Point model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Lett., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
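The power-law time-to-failure scaling described above is commonly written as eps(t) = A + B*(t_c - t)**n for the cumulative Benioff strain eps, with critical time t_c. A minimal numerical sketch (entirely synthetic values, chosen for illustration) shows how the critical exponent can be recovered from such a curve:

```python
import numpy as np

# Time-to-failure model for cumulative Benioff strain:
#   eps(t) = A + B * (t_c - t)**n, with B < 0 so release accelerates as t -> t_c.
# All numbers here are synthetic illustrations, not fits to a real catalog.
A, B, t_c, n = 100.0, -30.0, 10.0, 0.3

t = np.linspace(0.0, 9.9, 200)            # observation window ending near t_c
eps = A + B * (t_c - t) ** n

# Recover the exponent by log-log regression of (A - eps) on (t_c - t),
# assuming A and t_c are known (in practice both must be searched over).
slope, intercept = np.polyfit(np.log(t_c - t), np.log(A - eps), 1)
print(round(slope, 3))                    # recovers the assumed n = 0.3
```

With real seismicity the fit is far less clean, and the trade-off between n and t_c is a well-known source of uncertainty in this approach.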

  15. Larger earthquakes recur more periodically: New insights in the megathrust earthquake cycle from lacustrine turbidite records in south-central Chile

    NASA Astrophysics Data System (ADS)

    Moernaut, J.; Van Daele, M.; Fontijn, K.; Heirman, K.; Kempf, P.; Pino, M.; Valdebenito, G.; Urrutia, R.; Strasser, M.; De Batist, M.

    2018-01-01

Historical and paleoseismic records in south-central Chile indicate that giant earthquakes on the subduction megathrust - such as in AD1960 (Mw 9.5) - reoccur on average every ∼300 yr. Based on geodetic calculations of the interseismic moment accumulation since AD1960, it was postulated that the area already has the potential for a Mw 8 earthquake. However, to estimate the probability of such a great earthquake to take place in the short term, one needs to frame this hypothesis within the long-term recurrence pattern of megathrust earthquakes in south-central Chile. Here we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yr. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥VII1/2 vs. ≥VI1/2) for the generation of turbidites at the coring sites. The strongest earthquakes (≥VII1/2) have longer recurrence intervals (292 ± 93 yr) than earthquakes with intensity of ≥VI1/2 (139 ± 69 yr). Moreover, distribution fitting and the coefficient of variation (CoV) of inter-event times indicate that the stronger earthquakes recur in a more periodic way (CoV: 0.32 vs. 0.5). Regional correlation of our multi-threshold shaking records with coastal paleoseismic data of complementary nature (tsunami, coseismic subsidence) suggests that the intensity ≥VII1/2 events repeatedly ruptured the same part of the megathrust over a distance of at least ∼300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking - identified by geodetic studies and large slip in AD 1960 - acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. Our paleo-records indicate that Poissonian recurrence models are inadequate to describe large megathrust earthquake recurrence in south-central Chile. Moreover, they show an enhanced
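The coefficient of variation (CoV) used here is simply the standard deviation of the inter-event times divided by their mean: CoV well below 1 indicates quasi-periodic recurrence, CoV near 1 is consistent with a Poisson process, and CoV above 1 indicates clustering. A small sketch with made-up inter-event times (loosely echoing the reported means, not the actual turbidite data):

```python
import numpy as np

def cov(intervals):
    """Coefficient of variation of inter-event times.
    CoV << 1: quasi-periodic; CoV ~ 1: Poissonian; CoV > 1: clustered."""
    x = np.asarray(intervals, dtype=float)
    return x.std(ddof=1) / x.mean()

# Hypothetical inter-event times in years; illustrative only.
strong = [290, 350, 220, 310, 260]   # stronger-shaking style events
weaker = [60, 250, 40, 180, 120]     # weaker-threshold style events

print(round(cov(strong), 2))         # smaller CoV: more periodic recurrence
print(round(cov(weaker), 2))
```

A lower CoV for the strongest events is exactly what motivates replacing a Poissonian (memoryless) recurrence model with a quasi-periodic one in hazard calculations.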

  16. Aftereffects of Subduction-Zone Earthquakes: Potential Tsunami Hazards along the Japan Sea Coast.

    PubMed

    Minoura, Koji; Sugawara, Daisuke; Yamanoi, Tohru; Yamada, Tsutomu

    2015-10-01

The 2011 Tohoku-Oki Earthquake is a typical subduction-zone earthquake and is the fourth-largest earthquake since the beginning of instrumental observation of earthquakes in the 19th century. In fact, the 2011 Tohoku-Oki Earthquake displaced the northeast Japan island arc horizontally and vertically. The displacement largely changed the tectonic situation of the arc from compressive to tensile. The 9th century in Japan was a period of natural hazards caused by frequent large-scale earthquakes. The aseismic tsunamis that inflicted damage on the Japan Sea coast in the 11th century were related to the occurrence of massive earthquakes that represented the final stage of a period of high seismic activity. Anti-compressive tectonics triggered by the subduction-zone earthquakes induced gravitational instability, which resulted in the generation of tsunamis caused by slope failure at the arc-back-arc boundary. The crustal displacement after the 2011 earthquake implies an increased risk of unexpected local tsunami flooding in the Japan Sea coastal areas.

  17. The great Lisbon earthquake and tsunami of 1755: lessons from the recent Sumatra earthquakes and possible link to Plato's Atlantis

    NASA Astrophysics Data System (ADS)

    Gutscher, M.-A.

    2006-05-01

Great earthquakes and tsunami can have a tremendous societal impact. The Lisbon earthquake and tsunami of 1755 caused tens of thousands of deaths in Portugal, Spain and NW Morocco. Felt as far as Hamburg and the Azores islands, its magnitude is estimated to be 8.5–9. However, because of the complex tectonics in Southern Iberia, the fault that produced the earthquake has not yet been clearly identified. Recently acquired data from the Gulf of Cadiz area (tomography, seismic profiles, high-resolution bathymetry, sampled active mud volcanoes) provide strong evidence for an active east dipping subduction zone beneath Gibraltar. Eleven out of 12 of the strongest earthquakes (M>8.5) of the past 100 years occurred along subduction zone megathrusts (including the December 2004 and March 2005 Sumatra earthquakes). Thus, it appears likely that the 1755 earthquake and tsunami were generated in a similar fashion, along the shallow east-dipping subduction fault plane. This implies that the Cadiz subduction zone is locked (like the Cascadia and Nankai/Japan subduction zones), with great earthquakes occurring over long return periods. Indeed, the regional paleoseismic record (contained in deep-water turbidites and shallow lagoon deposits) suggests great earthquakes off South West Iberia every 1500–2000 years. Tsunami deposits indicate an earlier great earthquake struck SW Iberia around 200 BC, as noted by Roman records from Cadiz. A written record of even older events may also exist. According to Plato's dialogues The Critias and The Timaeus, Atlantis was destroyed by ‘strong earthquakes and floods … in a single day and night’ at a date given as 11,600 BP. A 1 m thick turbidite deposit, containing coarse grained sediments from underwater avalanches, has been dated at 12,000 BP and may correspond to the destructive earthquake and tsunami described by Plato. The effects on a paleo-island (Spartel) in the straits of Gibraltar would have been devastating, if inhabited, and may

  18. Automatic Earthquake Detection by Active Learning

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
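The uncertainty-sampling flavor of active learning described here can be sketched in a few lines. The toy example below uses synthetic two-class "features" and a scikit-learn logistic regression; nothing in it comes from the authors' actual pipeline. At each step the model queries the pool sample it is least certain about and adds that label to the training set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "waveform features": noise is common, earthquakes are rare,
# mimicking the class-imbalance problem described above.
X = np.vstack([rng.normal(0.0, 1.0, size=(980, 4)),    # class 0: noise
               rng.normal(2.0, 1.0, size=(20, 4))])    # class 1: earthquakes
y = np.array([0] * 980 + [1] * 20)

# Tiny labeled seed set (with one known earthquake); the rest is the pool.
labeled = list(range(0, 1000, 100)) + [990]
pool = [i for i in range(1000) if i not in labeled]

clf = LogisticRegression()
for _ in range(20):                       # 20 simulated expert queries
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    # Uncertainty sampling: query the pool sample closest to p = 0.5.
    q = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(q)                     # the "expert" supplies label y[q]
    pool.remove(q)

print(round(clf.score(X, y), 3))          # detector trained from ~30 labels
```

The design choice mirrors the human-in-the-loop idea in the abstract: instead of labeling thousands of waveforms up front, the analyst only labels the handful of samples the current detector finds most ambiguous.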

  19. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. Science Fair Projects A GPS instrument measures slow movements of the ground. Become an Earthquake Scientist Cool Earthquake Facts Today in Earthquake History A scientist stands in ...

  20. Children's emotional experience two years after an earthquake: An exploration of knowledge of earthquakes and associated emotions

    PubMed Central

    Burro, Roberto; Hall, Rob

    2017-01-01

A major earthquake has a potentially highly traumatic impact on children’s psychological functioning. However, while many studies on children describe negative consequences in terms of mental health and psychiatric disorders, little is known regarding how the developmental processes of emotions can be affected following exposure to disasters. Objectives We explored whether and how the exposure to a natural disaster such as the 2012 Emilia Romagna earthquake affected the development of children’s emotional competence in terms of understanding, regulating, and expressing emotions, after two years, when compared with a control group not exposed to the earthquake. We also examined the role of class level and gender. Method The sample included two groups of children (n = 127) attending primary school: The experimental group (n = 65) experienced the 2012 Emilia Romagna earthquake, while the control group (n = 62) did not. The data collection took place two years after the earthquake, when the children were seven or ten years old. Beyond assessing the children’s understanding of emotions and regulating abilities with standardized instruments, we employed semi-structured interviews to explore their knowledge of earthquakes and associated emotions, and a structured task on the intensity of some target emotions. Results We applied Generalized Linear Mixed Models. Exposure to the earthquake did not influence the understanding and regulation of emotions. The understanding of emotions varied according to class level and gender. Knowledge of earthquakes, emotional language, and emotions associated with earthquakes were, respectively, more complex, frequent, and intense for children who had experienced the earthquake, and at increasing ages. Conclusions Our data extend the generalizability of theoretical models on children’s psychological functioning following disasters, such as the dose-response model and the organizational-developmental model for child resilience, and

  1. Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms

    NASA Astrophysics Data System (ADS)

    Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng

    2013-04-01

Inverting seismic waveforms for the finite fault source parameters is important for studying the physics of earthquake rupture processes. It is also important for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using the accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point source parameters for the mainshock and aftershocks were first obtained by complete waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) of near-field stations using a projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with the theoretical patterns, which are functions of the following parameters: focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally we used part of the catalogs to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. The preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward SSW on a sub-vertical fault. The procedure developed from this study can be applied to strong motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
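The azimuthal dependence of apparent duration that this method exploits follows the classic unilateral-directivity relation tau(theta) = (L / v_r) * (1 - (v_r / c) * cos(theta)), where theta is the angle between the rupture direction and the outgoing ray. A sketch with illustrative parameter values (not values from this study):

```python
import numpy as np

# Apparent source duration for a unilateral rupture (classic directivity):
#   tau(theta) = (L / v_r) * (1 - (v_r / c) * cos(theta))
# Parameter values below are illustrative, not taken from the study.
L, v_r, c = 6.0, 2.8, 3.3      # fault length (km), rupture / shear speed (km/s)

theta = np.radians([0.0, 45.0, 90.0, 135.0, 180.0])
tau = (L / v_r) * (1.0 - (v_r / c) * np.cos(theta))

# Durations are shortest toward the rupture direction and longest away
# from it; fitting this azimuthal pattern identifies the ruptured plane.
for deg, t in zip([0, 45, 90, 135, 180], tau):
    print(f"{deg:3d} deg: {t:.2f} s")
```

Because only the true fault plane (and not the auxiliary plane of the focal mechanism) produces a duration pattern consistent with this relation, comparing observed and predicted patterns resolves the fault-plane ambiguity.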

  2. Catalog of earthquakes along the San Andreas fault system in Central California: January-March, 1972

    USGS Publications Warehouse

    Wesson, R.L.; Bennett, R.E.; Meagher, K.L.

    1973-01-01

Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period January - March, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b,c,d). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1,718 earthquakes in Central California. Of particular interest is a sequence of earthquakes in the Bear Valley area which contained single shocks with local magnitudes of 5.0 and 4.6. Earthquakes from this sequence make up roughly 66% of the total and are currently the subject of an interpretative study. Arrival times at 118 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 94 are telemetered stations operated by NCER. Readings from the remaining 24 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. 
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the

  3. Catalog of earthquakes along the San Andreas fault system in Central California, April-June 1972

    USGS Publications Warehouse

    Wesson, R.L.; Bennett, R.E.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period April - June, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b, c, d). A catalog for the first quarter of 1972 has been prepared by Wesson and others (1972). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 910 earthquakes in Central California. A substantial portion of the earthquakes reported in this catalog represents a continuation of the sequence of earthquakes in the Bear Valley area which began in February, 1972 (Wesson and others, 1972). Arrival times at 126 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 101 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. 
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement

  4. Earthquake rupture process recreated from a natural fault surface

    USGS Publications Warehouse

    Parsons, Thomas E.; Minasian, Diane L.

    2015-01-01

    What exactly happens on the rupture surface as an earthquake nucleates, spreads, and stops? We cannot observe this directly, and models depend on assumptions about physical conditions and geometry at depth. We thus measure a natural fault surface and use its 3D coordinates to construct a replica at 0.1 m resolution to obviate geometry uncertainty. We can recreate stick-slip behavior on the resulting finite element model that depends solely on observed fault geometry. We clamp the fault together and apply steady state tectonic stress until seismic slip initiates and terminates. Our recreated M~1 earthquake initiates at contact points where there are steep surface gradients because infinitesimal lateral displacements reduce clamping stress most efficiently there. Unclamping enables accelerating slip to spread across the surface, but the fault soon jams up because its uneven, anisotropic shape begins to juxtapose new high-relief sticking points. These contacts would ultimately need to be sheared off or strongly deformed before another similar earthquake could occur. Our model shows that an important role is played by fault-wall geometry, though we do not include effects of varying fluid pressure or exotic rheologies on the fault surfaces. We extrapolate our results to large fault systems using observed self-similarity properties, and suggest that larger ruptures might begin and end in a similar way, though the scale of geometrical variation in fault shape that can arrest a rupture necessarily scales with magnitude. In other words, fault segmentation may be a magnitude dependent phenomenon and could vary with each subsequent rupture.

  5. Do I Really Sound Like That? Communicating Earthquake Science Following Significant Earthquakes at the NEIC

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Earle, P. S.; Benz, H.; Wald, D. J.; Yeck, W. L.

    2017-12-01

The U.S. Geological Survey's National Earthquake Information Center (NEIC) responds to about 160 magnitude 6.0 and larger earthquakes every year and is regularly inundated with information requests following earthquakes that cause significant impact. These requests often start within minutes after the shaking occurs and come from a wide user base including the general public, media, emergency managers, and government officials. Over the past several years, the NEIC's earthquake response has evolved its communications strategy to meet the changing needs of users and the evolving media landscape. The NEIC produces a cascade of products starting with basic hypocentral parameters and culminating with estimates of fatalities and economic loss. We speed the delivery of content by prepositioning and automatically generating products such as aftershock plots, regional tectonic summaries, maps of historical seismicity, and event summary posters. Our goal is to have information immediately available so we can quickly address the response needs of a particular event or sequence. This information is distributed to hundreds of thousands of users through social media, email alerts, programmatic data feeds, and webpages. Many of our products are included in event summary posters that can be downloaded and printed for local display. After significant earthquakes, keeping up with direct inquiries and interview requests from TV, radio, and print reporters is always challenging. The NEIC works with the USGS Office of Communications and the USGS Science Information Services to organize and respond to these requests. Written executive summary reports are produced and distributed to USGS personnel and collaborators throughout the country. These reports are updated during the response to keep our message consistent and information up to date. This presentation will focus on communications during NEIC's rapid earthquake response but will also touch on the broader USGS traditional and

  6. Tsunami hazards to U.S. coasts from giant earthquakes in Alaska

    USGS Publications Warehouse

    Ryan, Holly F.; von Huene, Roland E.; Scholl, Dave; Kirby, Stephen

    2012-01-01

    In the aftermath of Japan's devastating 11 March 2011 Mw 9.0 Tohoku earthquake and tsunami, scientists are considering whether and how a similar tsunami could be generated along the Alaskan-Aleutian subduction zone (AASZ). A tsunami triggered by an earthquake along the AASZ would cross the Pacific Ocean and cause extensive damage along highly populated U.S. coasts, with ports being particularly vulnerable. For example, a tsunami in 1946 generated by a Mw 8.6 earthquake near Unimak Pass, Alaska (Figure 1a), caused significant damage along the U.S. West Coast, took 150 lives in Hawaii, and inundated shorelines of South Pacific islands and Antarctica [Fryer et al., 2004; Lopez and Okal, 2006]. The 1946 tsunami occurred before modern broadband seismometers were in place, and the mechanisms that created it remain poorly understood.

  7. Electromagnetic earthquake triggering phenomena: State-of-the-art research and future developments

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Vladimir; Novikov, Victor

    2014-05-01

    Unique pulsed power systems based on solid-propellant magnetohydrodynamic (MHD) generators, developed in Russia in the 1970s with an output of 10-500 MW and an operation duration of 10-15 s, were applied to active electromagnetic monitoring of the Earth's crust (exploration of its deep structure, electrical prospecting for oil and gas, and geophysical studies for earthquake prediction) owing to their high specific power, portability, and capability of operating under harsh climatic conditions. The most interesting and promising results were obtained during geophysical experiments at test sites in the Pamir and Northern Tien Shan mountains, where, after injection of 1.5-2.5 kA of electric current into the Earth's crust through a 4-km-long emitting dipole, variations in regional seismicity were observed (an increase in the number of weak earthquakes within a week). Laboratory experiments performed by teams at the Institute of Physics of the Earth, the Joint Institute for High Temperatures, and the Research Station of the Russian Academy of Sciences, observing the acoustic emission of stressed rock samples treated with electric pulses, demonstrated a similar pattern: a burst of acoustic emission (crack formation) after application of a current pulse to the sample. Based on the field and laboratory studies, it was proposed that a new kind of earthquake triggering, the electromagnetic initiation of weak seismic events, had been observed, which might be used for safe, man-made electromagnetic release of accumulated tectonic stress and, consequently, for earthquake hazard mitigation. To verify this hypothesis, additional field experiments were carried out at the Bishkek geodynamic proving ground with the pulsed ERGU-600 facility, which provides a 600 A electric current in the emitting dipole. 
An analysis of spatio-temporal redistribution of weak regional seismicity after ERGU-600 pulses, as well as a response

  8. Driving Processes of Earthquake Swarms: Evidence from High Resolution Seismicity

    NASA Astrophysics Data System (ADS)

    Ellsworth, W. L.; Shelly, D. R.; Hill, D. P.; Hardebeck, J.; Hsieh, P. A.

    2017-12-01

    Earthquake swarms are transient increases in seismicity deviating from a typical mainshock-aftershock pattern. Swarms are most prevalent in volcanic and hydrothermal areas, yet also occur in other environments, such as extensional fault stepovers. Swarms provide a valuable opportunity to investigate source zone physics, including the causes of their swarm-like behavior. To gain insight into this behavior, we have used waveform-based methods to greatly enhance standard seismic catalogs. Depending on the application, we detect and precisely relocate 2-10x as many events as included in the initial catalog. Recently, we have added characterization of focal mechanisms (applied to a 2014 swarm in Long Valley Caldera, California), addressing a common shortcoming in microseismicity analyses (Shelly et al., JGR, 2016). In analysis of multiple swarms (both within and outside volcanic areas), several features stand out, including: (1) dramatic expansion of the active source region with time, (2) tendency for events to occur on the immediate fringe of prior activity, (3) overall upward migration, and (4) complex faulting structure. Some swarms also show an apparent mismatch between seismicity orientations (as defined by patterns in hypocentral locations) and slip orientations (as inferred from focal mechanisms). These features are largely distinct from those observed in mainshock-aftershock sequences. In combination, these swarm behaviors point to an important role for fluid pressure diffusion. Swarms may in fact be generated by a cascade of fluid pressure diffusion and stress transfer: in cases where faults are critically stressed, an increase in fluid pressure will trigger faulting. Faulting will in turn dramatically increase permeability in the faulted area, allowing rapid equilibration of fluid pressure to the fringe of the rupture zone. This process may perpetuate until fluid pressure perturbations drop and/or stresses become further from failure, such that any

  9. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods, it has been shown that significant earthquakes (<6.5R) can be predicted with a very good accuracy window (±1 day). In this contribution we present a modification that improves on the FDL method, the MFDL method. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a triggering planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Node. To test the improvement, we used all earthquakes of magnitude 8R and above recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) and for <6.5R. The successes are counted for each of the 86 earthquake seeds, and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. 
We observe no improvement only when a planetary trigger coincided with
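
    The abstract does not give the exact construction of the FDL date series. As a rough illustration only, assuming the Fibonacci and Lucas numbers are taken as day offsets from the seed date (the "Dual" sequence is not specified in the abstract and is omitted), candidate dates and the ±1-day hit test could be sketched as:

```python
from datetime import date, timedelta

def fibonacci(n):
    """First n Fibonacci numbers, starting 1, 2, 3, 5, 8, ..."""
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def lucas(n):
    """First n Lucas numbers, starting 1, 3, 4, 7, 11, ..."""
    seq = [1, 3]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def fdl_dates(seed, n=15):
    """Candidate future dates: seed date plus each number taken as a day offset
    (an assumed construction, for illustration only)."""
    offsets = sorted(set(fibonacci(n) + lucas(n)))
    return [seed + timedelta(days=k) for k in offsets]

def hit(candidates, event, window=1):
    """True if an event date falls within +-window days of any candidate date."""
    return any(abs((event - c).days) <= window for c in candidates)
```

    For a seed of 2010-01-01, the candidates include 2010-01-14 (offset 13, a Fibonacci number), so an event on 2010-01-15 counts as a hit under the ±1-day window while one on 2010-01-16 does not.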

  10. Kinematic Source Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake, a MW6.9 thrust earthquake in northeast Japan, using Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2008-12-01

    The 2008 Iwate-Miyagi Nairiku earthquake (MJMA7.2) of June 14, 2008, was a thrust-type inland crustal earthquake that occurred in northeastern Honshu, Japan. To examine the strong motion generation process of this event, the source rupture process is estimated by kinematic waveform inversion of strong motion data. Strong motion data from the K-NET and KiK-net stations and Aratozawa Dam are used. These stations are located 3-94 km from the epicenter. The original acceleration time histories are integrated to velocity and band-pass filtered between 0.05 and 1 Hz. To obtain a detailed source rupture process, an appropriate velocity structure model should be used for the Green's functions. We estimated a one-dimensional velocity structure model for each strong motion station by waveform modeling of aftershock records. The elastic wave velocity, density, and Q-values for four sedimentary layers are assumed following previous studies. The thickness of each sedimentary layer depends on the station and is estimated to fit the observed aftershock waveforms by optimization using a genetic algorithm. A uniform layered structure model is assumed for the crust and upper mantle below the seismic bedrock. We succeeded in obtaining a reasonable velocity structure model for each station that gives a good fit to the main S-wave part of the aftershock observations. The source rupture process of the mainshock is estimated by linear kinematic waveform inversion using multiple time windows (Hartzell and Heaton, 1983). A fault plane model is assumed following the moment tensor solution by F-net, NIED. The strike and dip angles are 209° and 51°, respectively. The rupture starting point is fixed at the hypocenter located by the JMA. The obtained source model shows a large slip area in the shallow portion of the fault plane, approximately 6 km southwest of the hypocenter. The rupture of the asperity finishes within about 9 s. This large slip area corresponds to the area with surface

  11. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road. 

  12. [Consumer's psychological processes of hoarding and avoidant purchasing after the Tohoku earthquake].

    PubMed

    Ohtomo, Shoji; Hirose, Yukio

    2014-02-01

    This study examined the psychological processes of consumers who engaged in hoarding and avoidant purchasing behaviors after the Tohoku earthquake, within a dual-process model. The model hypothesized that both intentional motivation based on reflective decisions and reactive motivation based on non-reflective decisions predicted the behaviors. This study assumed that attitude, subjective norm, and descriptive norm in relation to hoarding and avoidant purchasing were determinants of these motivations. Residents in the Tokyo metropolitan area (n = 667) completed longitudinal internet surveys three times (April, June, and November 2011). The results indicated that intentional and reactive motivation determined avoidant purchasing behaviors in June; only intentional motivation determined the behaviors in November. Attitude was a main determinant of the motivations at each time point. Moreover, previous behaviors predicted future behaviors. In conclusion, purchasing behaviors were intentional rather than reactive. Furthermore, attitude and previous behaviors were important determinants in the dual-process model. Attitude and behaviors formed in April continued to strengthen subsequent purchasing decisions.

  13. Disaster waste characteristics and radiation distribution as a result of the Great East Japan Earthquake.

    PubMed

    Shibata, Tomoyuki; Solo-Gabriele, Helena; Hata, Toshimitsu

    2012-04-03

    The compounded impacts of the catastrophes that resulted from the Great East Japan Earthquake have emphasized the need to develop strategies for responding to multiple types and sources of contamination. In Japan, earthquake- and tsunami-generated waste was found to have elevated levels of metals/metalloids (e.g., mercury, arsenic, and lead), with separation and sorting more difficult for tsunami-generated waste than for earthquake-generated waste. Radiation contamination superimposed on these disaster wastes has made ultimate disposal particularly difficult to manage, resulting in delays in waste management. Work is needed to develop policies a priori for handling wastes from combined catastrophes such as those recently observed in Japan.

  14. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

    Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle boundary using reflected precursors. Results are however often deteriorated by sparse station coverage and by interference of faint back-scattered phases with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We
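
    The slope-extracting summation described above can be illustrated with a minimal delay-and-sum sketch on synthetic data (a linear array with assumed geometry and a Gaussian pulse, not the USArray processing itself): shift each trace by a trial moveout, stack, and keep the slowness that maximizes the stacked energy.

```python
import math

def gauss_pulse(t, t0, width=0.2):
    return math.exp(-((t - t0) / width) ** 2)

def make_traces(xs, slowness, t0=2.0, dt=0.01, nt=500):
    """Synthetic plane-wave arrivals: the station at offset x (km) records
    the pulse at time t0 + slowness * x."""
    return [[gauss_pulse(i * dt, t0 + slowness * x) for i in range(nt)] for x in xs]

def beam_power(traces, xs, p, dt):
    """Delay-and-sum: undo the trial moveout p*x, stack, return stacked energy."""
    nt = len(traces[0])
    power = 0.0
    for i in range(nt):
        s = 0.0
        for trace, x in zip(traces, xs):
            j = i + int(round(p * x / dt))  # sample index shifted by the trial moveout
            if 0 <= j < nt:
                s += trace[j]
        power += s * s
    return power

def best_slowness(traces, xs, dt, p_grid):
    """Grid search: the slowness whose beam has maximum stacked energy."""
    return max(p_grid, key=lambda p: beam_power(traces, xs, p, dt))
```

    When the trial slowness matches the true one, all pulses align and the stack energy peaks; the grid search therefore recovers the local slope of the emerging wave front.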

  15. Source rupture process of the 2016 Kaikoura, New Zealand earthquake estimated from the kinematic waveform inversion of strong-motion data

    NASA Astrophysics Data System (ADS)

    Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo

    2018-03-01

    On 13 November 2016, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand, near Kaikoura. The earthquake caused severe damage and had great impacts on the local environment and society. Considering the tectonic environment and mapped active faults, field investigation and geodetic evidence reveal that at least 12 fault sections ruptured during the earthquake, and the focal mechanism is among the most complicated of any historical earthquake. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using a kinematic waveform inversion method with the multisegment fault model, based on strong-motion data from 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiated in the epicentral area near the Humps fault and then propagated northeastward along several faults, up to the offshore Needles fault. The Mw 7.8 event was a mixture of right-lateral strike-slip and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offset distribution of the ruptured region from the slip of the upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.

  16. Rheological behavior of the crust and mantle in subduction zones in the time-scale range from earthquake (minute) to mln years inferred from thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    A key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic," mineral-physics-based, non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon, with time-scales spanning from geological to earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process, from rupture (about a minute) to geological time (millions of years), with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermo-mechanical model of a subduction zone at the geological time-scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We then introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous earthquake sequence. To follow the deformation process in detail through the entire seismic cycle, and through multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 s during an earthquake to between minutes and 5 years during the postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge as early as hours to a day after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake over the day-to-4-year time range.
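
    The adaptive time-stepping idea, choosing the step so that roughly a fixed amount of slip accrues per step and clamping it to the bounds quoted in the abstract (40 s coseismic, up to 5 yr interseismic), can be sketched as follows; the slip-per-step target is an assumed illustrative value:

```python
YEAR = 365.25 * 24 * 3600.0  # seconds per year

def adaptive_dt(v_max, target_slip=0.04, dt_min=40.0, dt_max=5.0 * YEAR):
    """Pick the time step so that no more than ~target_slip (m) accumulates
    per step at the current maximum slip velocity v_max (m/s), clamped to
    [dt_min, dt_max]. Thresholds are illustrative, not from the paper."""
    dt = target_slip / max(v_max, 1e-30)  # guard against v_max == 0
    return min(max(dt, dt_min), dt_max)
```

    With these numbers, coseismic slip at 1 m/s yields the 40 s floor, plate-rate creep at 1e-9 m/s yields a step of about a year, and slower interseismic velocities hit the 5-year cap.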

  17. Describing earthquakes potential through mountain building processes: an example within Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhang, Huai; Shi, Yaolin; Mary, Baptiste; Wang, Liangshu

    2016-04-01

    How to reconcile earthquake activity, for instance the distribution of large-to-great event rupture areas and the partitioning of seismic and aseismic slip on the subduction interface, with the geological mountain-building period is a critical question in seismotectonics. In this paper, we address this issue for a typical and special continental collisional mountain wedge in the Himalayas, across the area of the 2015 Mw 7.8 Nepal earthquake. Based on Critical Coulomb Wedge (CCW) theory, we show possible predictions of large-to-great earthquake rupture locations by retrieving refined evolutionary sequences, with a clear boundary between the coulomb wedge and the creeping path inferred from the interseismic deformation pattern along the megathrust, the Main Himalaya Thrust (MHT). Owing to the well-known thrusting architecture, with constraints on the distribution of the main exhumation zone and of the key evolutionary nodes, reasonable and refined (500-yr interval) thrusting sequences are retrieved by applying sequential limit analysis (SLA). We also use an illustration method, the 'G' gram, to localize the relative position of each fault within the tectonic wedge. Our model results show that at the early stage, during the initial wedge accumulation period, no large earthquakes occur because of the small size of the mountain wedge. In the following stage, the wedge grows outward with occasional out-of-sequence thrusting, and four thrusting clusters (thrusting 'families') are identified on the basis of their spatio-temporal distribution in the mountain wedge. Thrust family 4, located in the hinterland of the mountain wedge, absorbed the least of the total convergence, with no large earthquakes occurring at this stage, contributing to the emplacement of the Greater Himalayan Complex. The slip absorbed by the remaining three thrust families results in large-to-great earthquakes rupturing the Sub-Himalaya, the Lesser Himalaya, and the front of the Higher Himalaya. The

  18. Nucleation speed limit on remote fluid-induced earthquakes

    PubMed Central

    Parsons, Tom; Malagnini, Luca; Akinci, Aybige

    2017-01-01

    Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes. PMID:28845448
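
    The proposed scaling, a delay set by pore-pressure diffusion across a magnitude-dependent nucleation zone, can be illustrated with a back-of-envelope sketch. The circular-crack stress drop (3 MPa) and hydraulic diffusivity (1 m²/s) below are assumed values for illustration, not figures from the paper:

```python
import math

def seismic_moment(mw):
    """Seismic moment (N*m) from moment magnitude."""
    return 10 ** (1.5 * mw + 9.1)

def rupture_radius(mw, stress_drop=3e6):
    """Circular-crack radius (m) from M0 = (16/7) * stress_drop * r**3."""
    return (7.0 * seismic_moment(mw) / (16.0 * stress_drop)) ** (1.0 / 3.0)

def diffusion_delay(mw, diffusivity=1.0):
    """Time (s) for pore pressure to diffuse across the nucleation zone,
    t ~ r**2 / (4 * D), with D in m**2/s (assumed value)."""
    r = rupture_radius(mw)
    return r ** 2 / (4.0 * diffusivity)
```

    With these assumed constants the delay grows from roughly a couple of hours for an M 3 nucleation zone to several days for M 5, qualitatively matching the magnitude-proportional, hour-to-day delays described above.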

  19. Transient triggering of near and distant earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Blanpied, M.L.; Beeler, N.M.

    1997-01-01

    We demonstrate qualitatively that frictional instability theory provides a context for understanding how earthquakes may be triggered by transient loads associated with seismic waves from near and distant earthquakes. We assume that earthquake triggering is a stick-slip process and test two hypotheses about the effect of transients on the timing of instabilities using a simple spring-slider model and a rate- and state-dependent friction constitutive law. A critical triggering threshold is implicit in such a model formulation. Our first hypothesis is that transient loads lead to clock advances; i.e., transients hasten the time of earthquakes that would have happened eventually due to constant background loading alone. Modeling results demonstrate that transient loads do lead to clock advances and that the triggered instabilities may occur after the transient has ceased (i.e., triggering may be delayed). These simple "clock-advance" models predict complex relationships between the triggering delay, the clock advance, and the transient characteristics. The triggering delay and the degree of clock advance both depend nonlinearly on when in the earthquake cycle the transient load is applied. This implies that the stress required to bring about failure does not depend linearly on loading time, even when the fault is loaded at a constant rate. The timing of instability also depends nonlinearly on the transient loading rate, faster rates more rapidly hastening instability. This implies that higher-frequency and/or longer-duration seismic waves should increase the amount of clock advance. These modeling results and simple calculations suggest that near (tens of kilometers) small/moderate earthquakes and remote (thousands of kilometers) earthquakes with magnitudes 2 to 3 units larger may be equally effective at triggering seismicity. Our second hypothesis is that some triggered seismicity represents earthquakes that would not have happened without the transient load (i
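
    The spring-slider experiment can be sketched numerically. The following is a minimal quasi-static sketch (aging-law state evolution, forward Euler integration, illustrative parameter values not taken from the paper) showing that a transient stress pulse with zero net static stress change still advances the time of instability:

```python
import math

def time_to_failure(pulse_amp=0.0, pulse_on=5.0, pulse_off=6.0):
    """1-D spring-slider with rate- and state-dependent (aging-law) friction.
    A transient stress pulse, added at pulse_on and removed at pulse_off,
    mimics a passing seismic wave. All parameters are illustrative."""
    sigma = 1.0e8            # normal stress, Pa
    a, b = 0.010, 0.015      # friction parameters (velocity weakening, b > a)
    dc = 1.0e-5              # critical slip distance, m
    v0, mu0 = 1.0e-6, 0.6    # reference velocity and friction coefficient
    vl = 1.0e-9              # background loading velocity, m/s
    k = 0.1 * sigma * (b - a) / dc   # spring stiffness well below critical
    theta = 1.0e4            # state variable, s
    v_init = 1.0e-7          # start already sliding above the load rate
    tau = sigma * (mu0 + a * math.log(v_init / v0) + b * math.log(v0 * theta / dc))
    t = 0.0
    for _ in range(2_000_000):
        # invert the friction law for the current slip velocity
        v = v0 * math.exp((tau / sigma - mu0 - b * math.log(v0 * theta / dc)) / a)
        if v > 1e-2:         # slip rate reaches seismic speeds: instability
            return t
        dt = min(0.01 * theta, 0.01 * dc / v)   # adaptive step, ~1% changes
        theta += (1.0 - v * theta / dc) * dt    # aging-law state evolution
        tau += k * (vl - v) * dt                # elastic loading/unloading
        t_new = t + dt
        if t < pulse_on <= t_new:
            tau += pulse_amp                    # wave arrives: stress step up
        if t < pulse_off <= t_new:
            tau -= pulse_amp                    # wave passes: stress back down
        t = t_new
    return math.inf
```

    Comparing time_to_failure(0.0) with time_to_failure(3e6) (a 3 MPa transient between 5 and 6 s) shows the pulsed fault failing earlier, a clock advance, even though failure occurs well after the transient has ceased, i.e. triggering is delayed.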

  20. Geodetic Imaging of the Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Tong, Xiaopeng

    In this dissertation I used Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS) to recover crustal deformation caused by earthquake cycle processes. The studied areas span three different types of tectonic boundaries: a continental thrust earthquake (M7.9 Wenchuan, China) at the eastern margin of the Tibet plateau, a mega-thrust earthquake (M8.8 Maule, Chile) at the Chile subduction zone, and the interseismic deformation of the San Andreas Fault System (SAFS). A new L-band radar onboard a Japanese satellite ALOS allows us to image high-resolution surface deformation in vegetated areas, which is not possible with older C-band radar systems. In particular, both the Wenchuan and Maule InSAR analyses involved L-band ScanSAR interferometry which had not been attempted before. I integrated a large InSAR dataset with dense GPS networks over the entire SAFS. The integration approach features combining the long-wavelength deformation from GPS with the short-wavelength deformation from InSAR through a physical model. The recovered fine-scale surface deformation leads us to better understand the underlying earthquake cycle processes. The geodetic slip inversion reveals that the fault slip of the Wenchuan earthquake is maximum near the surface and decreases with depth. The coseismic slip model of the Maule earthquake constrains the down-dip extent of the fault slip to be at 45 km depth, similar to the Moho depth. I inverted for the slip rate on 51 major faults of the SAFS using Green's functions for a 3-dimensional earthquake cycle model that includes kinematically prescribed slip events for the past earthquakes since the year 1000. A 60 km thick plate model with effective viscosity of 10^19 Pa·s is preferred based on the geodetic and geological observations. The slip rates recovered from the plate models are compared to the half-space model. The InSAR observation reveals that the creeping section of the SAFS is partially locked. This high

  1. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    level of shaking intensity with empirical models of fatality losses calibrated on past earthquakes in each country. Non-seismic detections and macroseismic questionnaires collected online are combined to identify as many of the felt earthquakes as possible, regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the US Geological Survey, where the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. All together, we estimate that the number of detected felt earthquakes is around 1,000 per year, compared with the 35,000 earthquakes reported annually by the EMSC! Felt events are already the subject of the web page "Latest significant earthquakes" on the EMSC website (http://www.emsc-csem.org/Earthquake/significant_earthquakes.php) and of a dedicated Twitter service, @LastQuake. We will present the identification process for the earthquakes that matter, the smartphone application itself (to be released in May), and its future evolutions.
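
    The flashsourcing idea, flagging a felt event when website traffic suddenly exceeds its recent baseline, can be illustrated with a toy detector; the window length and threshold below are hypothetical, chosen only to demonstrate the logic:

```python
import statistics

def detect_surges(hits_per_minute, window=30, threshold=5.0):
    """Flag minutes whose hit count exceeds the trailing-window mean by more
    than `threshold` standard deviations (illustrative flashsourcing logic)."""
    surges = []
    for i in range(window, len(hits_per_minute)):
        baseline = hits_per_minute[i - window:i]
        mu = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # guard against zero variance
        if hits_per_minute[i] > mu + threshold * sd:
            surges.append(i)
    return surges
```

    For steady traffic of about 100 hits per minute, a spike to 900 hits in one minute stands far above the baseline and is flagged, while normal fluctuations are not.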

  2. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing, and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship of the magnitude and trigger time of an induced earthquake to the well-to-fault distance. The relationship between magnitude and distance is important for determining the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix the injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insight into the effect of distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.

  3. An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.

    2017-12-01

    The Central Weather Bureau of Taiwan (CWB) operates the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). Since 2016 the system has been used to report EEW messages to the general public through text messages to mobile phones and through television programs. For inland earthquakes the system provides accurate and fast warnings: the average epicenter error is about 5 km and the processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the manually determined epicenter. The processing time is defined as the time difference between the earthquake origin time and the time the system issued the warning. The CWB seismic network consists of about 200 seismic stations. In some areas of Taiwan the spacing between seismic stations is about 10 km, which means that when an earthquake occurs the P wave can reach 6 stations, the minimum number required by the EEW system, within about 20 km. If the data-transmission latency is about 1 s, the P-wave velocity is about 6 km/s, and a 3-s time window is used to estimate the earthquake magnitude, the processing time should be around 8 s. In practice, however, the average processing time is larger than this figure: because outlier P-wave onset picks can occur at the beginning of an earthquake, the Geiger's method used in the EEW system for earthquake location is not stable, and it usually takes additional time to wait for enough good picks. In this study we used a grid-search method to improve the earthquake location estimates. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by simulating historical earthquakes that occurred in Taiwan. The results show that the processing time can be reduced and the location accuracy is acceptable for EEW purposes.
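
    The back-of-envelope timing argument in the abstract can be written out explicitly, using the figures quoted there (a ~20 km radius containing the 6 required stations, 6 km/s P velocity, 1 s telemetry latency, and a 3 s magnitude window):

```python
def eew_processing_time(trigger_radius_km=20.0, vp_km_s=6.0,
                        latency_s=1.0, window_s=3.0):
    """Idealized minimum EEW processing time: P-wave travel time out to the
    radius containing the required stations, plus data-transmission latency,
    plus the magnitude-estimation time window (figures from the abstract)."""
    return trigger_radius_km / vp_km_s + latency_s + window_s
```

    This gives roughly 7.3 s, i.e. "around 8 seconds"; the gap between this idealized figure and the observed ~15 s average is the extra wait for stable P-wave picks discussed above.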

  4. The isolated 678-km deep 30 May 2015 MW 7.9 Ogasawara (Bonin) Islands earthquake

    NASA Astrophysics Data System (ADS)

    Ye, L.; Lay, T.; Zhan, Z.; Kanamori, H.; Hao, J.

    2015-12-01

Deep-focus earthquakes, located 300 to 700 km below the Earth's surface within sinking slabs of relatively cold oceanic lithosphere, are mysterious phenomena. Seismic radiation from deep events is essentially indistinguishable from that for shallow stick-slip frictional-sliding earthquakes, but the confining pressure and temperature are so high for deep-focus events that a distinct process is likely needed to account for their abrupt energy release. The largest recorded deep-focus earthquake (MW 7.9) in the Izu-Bonin slab struck on 30 May 2015 beneath the Ogasawara (Bonin) Islands, isolated from prior seismicity by over 100 km in depth, and followed by only 2 small aftershocks. Globally, this is the deepest (678 km) major (MW > 7) earthquake in the seismological record. Seismicity indicates along-strike contortion of the Izu-Bonin slab, with horizontal flattening near a depth of 550 km in the Izu region and progressive steepening to near-vertical toward the south above the location of the 2015 event. Analyses of a large global data set of P, SH and pP seismic phases using short-period back-projection, subevent directivity, and broadband finite-fault inversion indicate that the mainshock ruptured a shallowly-dipping fault plane with patchy slip that spread over a distance of ~40 km with variable expansion rate (~5 km/s down-dip initially, ~3 km/s up-dip later). During the 17 s rupture duration the radiated energy was ~3.3 × 10^16 J and the stress drop was ~38 MPa. The radiation efficiency is moderate (0.34), intermediate to that of the 1994 Bolivia and 2013 Sea of Okhotsk MW 8.3 earthquakes, indicating a continuum of processes. The isolated occurrence of the event suggests that localized stress concentration associated with the pronounced deformation of the Izu-Bonin slab likely played a role in generating this major earthquake.
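The quoted radiation efficiency can be checked from the other numbers in the abstract under one common definition, η = 2μE_R/(ΔσM0). The rigidity value below (~1.7 × 10^11 Pa, appropriate near 670 km depth) is an assumption of this sketch, not a value stated in the abstract; the moment conversion is the standard Hanks-Kanamori relation.

```python
# Consistency sketch for the reported radiation efficiency (0.34).
# mu is an ASSUMED lower-mantle rigidity; E_R and stress drop are from the abstract.
Mw = 7.9
E_R = 3.3e16          # radiated energy, J
stress_drop = 38e6    # Pa
mu = 1.7e11           # Pa, assumed rigidity near 670 km depth

M0 = 10 ** (1.5 * Mw + 9.1)                 # seismic moment, N*m (Hanks-Kanamori)
eta_R = 2 * mu * E_R / (stress_drop * M0)   # radiation efficiency

print(f"M0 = {M0:.2e} N*m, radiation efficiency = {eta_R:.2f}")
```

With this assumed rigidity the sketch gives η ≈ 0.33, close to the reported 0.34; the exact value depends on the rigidity adopted at the source depth.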

  5. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8, and six earthquakes larger than Mw 8.5, since 2004 has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843 Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that failure to incorporate the best available catalogs of historical earthquakes will likely lead to significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  6. Insight into the rupture process of a rare tsunami earthquake from near-field high-rate GPS

    NASA Astrophysics Data System (ADS)

    Macpherson, K. A.; Hill, E. M.; Elosegui, P.; Banerjee, P.; Sieh, K. E.

    2011-12-01

    We investigated the rupture duration and velocity of the October 25, 2010 Mentawai earthquake by examining high-rate GPS displacement data. This Mw=7.8 earthquake appears to have ruptured either an up-dip part of the Sumatran megathrust or a fore-arc splay fault, and produced tsunami run-ups on nearby islands that were out of proportion with its magnitude. It has been described as a so-called "slow tsunami earthquake", characterised by a dearth of high-frequency signal and long rupture duration in low-strength, near-surface media. The event was recorded by the Sumatran GPS Array (SuGAr), a network of high-rate (1 sec) GPS sensors located on the nearby islands of the Sumatran fore-arc. For this study, the 1 sec time series from 8 SuGAr stations were selected for analysis due to their proximity to the source and high-quality recordings of both static displacements and dynamic waveforms induced by surface waves. The stations are located at epicentral distances of between 50 and 210 km, providing a unique opportunity to observe the dynamic source processes of a tsunami earthquake from near-source, high-rate GPS. We estimated the rupture duration and velocity by simulating the rupture using the spectral finite-element method SPECFEM and comparing the synthetic time series to the observed surface waves. A slip model from a previous study, derived from the inversion of GPS static offsets and tsunami data, and the CRUST2.0 3D velocity model were used as inputs for the simulations. Rupture duration and velocity were varied for a suite of simulations in order to determine the parameters that produce the best-fitting waveforms.
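Ranking the suite of simulations by waveform fit can be sketched generically; this is a minimal, hypothetical misfit example (the traces below are synthetic placeholders, whereas the study compares SPECFEM synthetics against 1-Hz SuGAr displacement records):

```python
import numpy as np

# Generic least-squares waveform misfit for ranking candidate rupture
# parameters against an observed trace. Traces here are synthetic stand-ins.
def waveform_misfit(observed, synthetic):
    """L2 misfit normalized by the energy of the observed trace."""
    observed = np.asarray(observed, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    return float(np.sum((observed - synthetic) ** 2) / np.sum(observed ** 2))

t = np.linspace(0, 60, 601)                          # 60 s at 0.1 s sampling
obs = np.sin(2 * np.pi * t / 20) * np.exp(-t / 30)   # stand-in "observed" trace

# Candidate synthetics differing in a single "duration" parameter (s)
candidates = {dur: np.sin(2 * np.pi * t / dur) * np.exp(-t / 30)
              for dur in (15, 20, 25)}
best = min(candidates, key=lambda d: waveform_misfit(obs, candidates[d]))
print(f"best-fitting duration parameter: {best} s")
```

In the actual study the candidate parameters are rupture duration and rupture velocity, and the misfit is evaluated per station across the eight SuGAr time series.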

  7. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

We study the distributions of earthquake numbers in two global earthquake catalogues: the Global Centroid-Moment Tensor catalogue and the Preliminary Determinations of Epicenters. The properties of these distributions are needed in particular to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the negative-binomial distribution (NBD) has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of the parameters of both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of the catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest the Poisson distribution can be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study the higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds, and increase even more strongly for small temporal subdivisions of the catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
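The moment comparison the abstract describes rests on closed-form expressions for both models. A small sketch with illustrative parameters (the mean of 10 events per bin and the NBD dispersion are not values fitted to the catalogues):

```python
import math

# Theoretical skewness and excess kurtosis of the Poisson and
# negative-binomial (NBD) earthquake-number models at the same mean.
lam = 10.0                        # common mean number of events per time bin
poisson_skew = 1.0 / math.sqrt(lam)
poisson_exkurt = 1.0 / lam

r, p = 5.0, 1.0 / 3.0             # NBD parameters; mean = r*(1-p)/p = 10
nbd_mean = r * (1 - p) / p
nbd_skew = (2 - p) / math.sqrt(r * (1 - p))
nbd_exkurt = 6.0 / r + p * p / (r * (1 - p))

# For the same mean, the overdispersed NBD is markedly more skewed and
# heavier-tailed, which is the signature seen in the catalogue moments.
print(f"Poisson: skew={poisson_skew:.3f}, excess kurtosis={poisson_exkurt:.3f}")
print(f"NBD    : skew={nbd_skew:.3f}, excess kurtosis={nbd_exkurt:.3f}")
```

As the mean rate grows the Poisson moments shrink toward the Gaussian limit (skewness 1/√λ → 0), while clustering keeps the observed moments closer to the NBD values.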

  8. Possible cause for an improbable earthquake: The 1997 MW 4.9 southern Alabama earthquake and hydrocarbon recovery

    USGS Publications Warehouse

    Gomberg, J.; Wolf, L.

    1999-01-01

    Circumstantial and physical evidence indicates that the 1997 MW 4.9 earthquake in southern Alabama may have been related to hydrocarbon recovery. Epicenters of this earthquake and its aftershocks were located within a few kilometers of active oil and gas extraction wells and two pressurized injection wells. Main shock and aftershock focal depths (2-6 km) are within a few kilometers of the injection and withdrawal depths. Strain accumulation at geologic rates sufficient to cause rupture at these shallow focal depths is not likely. A paucity of prior seismicity is difficult to reconcile with the occurrence of an earthquake of MW 4.9 and a magnitude-frequency relationship usually assumed for natural earthquakes. The normal-fault main-shock mechanism is consistent with reactivation of preexisting faults in the regional tectonic stress field. If the earthquake were purely tectonic, however, the question arises as to why it occurred on only the small fraction of a large, regional fault system coinciding with active hydrocarbon recovery. No obvious temporal correlation is apparent between the earthquakes and recovery activities. Although thus far little can be said quantitatively about the physical processes that may have caused the 1997 sequence, a plausible explanation involves the poroelastic response of the crust to extraction of hydrocarbons.

  9. Earthquakes and faults in the San Francisco Bay area (1970-2003)

    USGS Publications Warehouse

    Sleeter, Benjamin M.; Calzia, James P.; Walter, Stephen R.; Wong, Florence L.; Saucedo, George J.

    2004-01-01

The map depicts both active and inactive faults and earthquakes of magnitude 1.5 to 7.0 in the greater San Francisco Bay area. Twenty-two earthquakes of magnitude 5.0 and greater are indicated on the map and listed chronologically in an accompanying table. The data are compiled from records from 1970 to 2003. The bathymetry was generated from a digital version of NOAA maps and hydrographic data for San Francisco Bay. Elevation data are from the USGS National Elevation Dataset. The Landsat satellite image is from seven Landsat 7 Enhanced Thematic Mapper Plus scenes. Fault data are reproduced with permission from the California Geological Survey. The earthquake data are from the Northern California Earthquake Catalog.

  10. Inter-Disciplinary Validation of Pre Earthquake Signals. Case Study for Major Earthquakes in Asia (2004-2010) and for 2011 Tohoku Earthquake

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Hattori, K.; Liu, J.-Y.; Yang. T. Y.; Parrot, M.; Kafatos, M.; Taylor, P.

    2012-01-01

We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters that we found to be associated with earthquake processes: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We carried out two independent, continuous hind-cast investigations in Taiwan and Japan for a total of 102 earthquakes (M>6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma, and energetic electron measurements from DEMETER; (2) emitted outgoing longwave radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS Total Electron Content (TEC) measurements collected from space- and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the March 11, 2011 Tohoku earthquake, our analysis again shows the same relationship between several independent observations characterizing lithosphere/atmosphere coupling. On March 7th we found a rapid increase of emitted infrared radiation in the satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning from this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

  11. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  12. Nucleation process and dynamic inversion of the Mw 6.9 Valparaíso 2017 earthquake in Central Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Aden-Antoniow, F.; Baez, J. C., Sr.; Otarola, C., Sr.; Potin, B.; DelCampo, F., Sr.; Poli, P.; Flores, C.; Satriano, C.; Felipe, L., Sr.; Madariaga, R. I.

    2017-12-01

The Valparaíso 2017 sequence occurred on the megathrust of Central Chile, an active zone where the last mega-earthquake occurred in 1730. Intense seismicity occurred two days before the Mw 6.9 main shock, and a slow trenchward movement observed at coastal GPS stations accompanied the foreshock seismicity. Following the Mw 6.9 earthquake, the seismicity migrated 30 km to the southeast. The sequence was well recorded by multi-parametric stations composed of GPS, broadband, and strong-motion instruments. We built a seismic catalogue with 2329 events associated with the Valparaíso sequence, with a magnitude of completeness of Ml 2.8. We located all the seismicity using a new 3D velocity model obtained for the Valparaíso zone, computed moment tensors for events with magnitude larger than Ml 3.5, and searched for repeating earthquakes. The main shock is studied by performing a dynamic inversion using the strong-motion records and an elliptical-patch approach to characterize the rupture process. During the two-day nucleation stage, we observe a compact zone of repeating events; in the meantime, a westward movement was recorded at the coastal GPS stations. The aseismic moment estimated from GPS is larger than the cumulative moment of the foreshocks, suggesting the presence of a slow slip event that potentially triggered the Mw 6.9 mainshock. The Mw 6.9 earthquake is associated with the rupture of an elliptical asperity with semi-axes of 10 km and 5 km, a sub-shear rupture velocity, a stress drop of 11.71 MPa, a yield stress of 17.21 MPa, a slip-weakening distance of 0.65 m, and a kappa value of 1.70. The sequence occurred close to, and shares some characteristics with, the 1985 Valparaíso Mw 8.0 earthquake. The rupture of this asperity could further stress the highly locked Central Chile zone, where a megathrust earthquake like that of 1730 is expected.

  13. Possible worst-case tsunami scenarios around the Marmara Sea from combined earthquake and landslide sources

    NASA Astrophysics Data System (ADS)

    Latcharote, Panon; Suppasri, Anawat; Imamura, Fumihiko; Aytore, Betul; Yalciner, Ahmet Cevdet

    2016-12-01

This study evaluates tsunami hazards in the Marmara Sea from possible worst-case tsunami scenarios involving submarine earthquakes and landslides. In terms of fault-generated tsunamis, seismic ruptures can propagate along the North Anatolian Fault (NAF), which has produced historical tsunamis in the Marmara Sea. Past studies, which considered fault-generated tsunamis and landslide-generated tsunamis individually, indicate that future ruptures are expected to generate tsunamis and that submarine landslides could be triggered by seismic motion. Extending those studies, numerical modeling is applied here to tsunami generation and propagation from combined earthquake and landslide sources. Tsunami hazards are evaluated for both individual and combined cases of submarine earthquakes and landslides through numerical tsunami simulations, using a 90-m grid of bathymetry and topography data for the entire Marmara Sea region and validated against historical observations from the 1509 and 1894 earthquakes. This study implements the TUNAMI model with a two-layer formulation, and the numerical results show that the maximum tsunami height along the Istanbul shores could reach 4.0 m for a full submarine rupture of the NAF, with a fault slip of 5.0 m in the eastern and western basins of the Marmara Sea. For landslide-generated tsunamis with small, medium, and large initial landslide volumes (0.15, 0.6, and 1.5 km3, respectively), the maximum tsunami height along the Istanbul shores could reach 3.5, 6.0, and 8.0 m, respectively. Tsunamis from submarine landslides could thus be significantly higher than those from earthquakes, depending on the landslide volume. Combining earthquake and landslide sources produces significantly higher tsunami amplitudes only for small landslide volumes, because amplification occurs when the two sources generate waves on the same amplitude scale (3.0-4.0 m). Waveforms from all the coasts around the Marmara Sea

  14. Fault rupture process and strong ground motion simulation of the 2014/04/01 Northern Chile (Pisagua) earthquake (Mw8.2)

    NASA Astrophysics Data System (ADS)

    Pulido Hernandez, N. E.; Suzuki, W.; Aoi, S.

    2014-12-01

A megathrust earthquake (Mw 8.2) occurred in Northern Chile on April 1, 2014, at 23:46 (UTC), in a region that had not experienced a major earthquake since the great 1877 (~M8.6) event. This area had already been identified as a mature seismic gap with strong interseismic coupling inferred from geodetic measurements (Chlieh et al., JGR, 2011; Metois et al., GJI, 2013). We used 48 components of strong-motion records from the IPOC network in Northern Chile to investigate the source process of the M8.2 Pisagua earthquake. Acceleration waveforms were integrated to obtain velocities and filtered between 0.02 and 0.125 Hz. We assumed a single fault-plane segment with an area of 180 km by 135 km, a strike of 357° and a dip of 18° (GCMT). We set the rupture starting point at the USGS hypocenter (19.610°S, 70.769°W, depth 25 km) and employed a multi-time-window linear waveform inversion method (Hartzell and Heaton, BSSA, 1983) to derive the rupture process of the Pisagua earthquake. Our results show a slip model characterized by one large slip area (asperity) located 50 km south of the epicenter, a peak slip of 10 m, and a total seismic moment of 2.36 × 10^21 N·m (Mw 8.2). The fault rupture propagated slowly to the south in front of the main asperity for the initial 25 seconds and then broke the asperity, producing a stage of strong acceleration. The average rupture velocity on the fault plane was 2.9 km/s. Our calculations show an average stress drop of 4.5 MPa for the entire fault rupture area and 12 MPa for the asperity area. We simulated the near-source strong ground motion records in a broad frequency band (0.1-20 Hz) to investigate a possible multi-frequency fault rupture process like those observed in recent megathrust earthquakes such as the 2011 Tohoku-oki (M9.0) event. Acknowledgments: Strong-motion data were kindly provided by the University of Chile and the IPOC (Integrated Plate boundary Observatory Chile).
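The quoted magnitude follows from the inverted moment via the standard Hanks-Kanamori relation, Mw = (log10 M0 − 9.1)/1.5; a one-line consistency check:

```python
import math

# Consistency check of the moment magnitude quoted in the abstract,
# using the standard Hanks-Kanamori relation and the inverted moment.
M0 = 2.36e21                        # N*m, total seismic moment from the inversion
Mw = (math.log10(M0) - 9.1) / 1.5

print(f"Mw = {Mw:.1f}")             # consistent with the reported Mw 8.2
```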

  15. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
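One simple instance of such a metric, sketched here with synthetic placeholder fields rather than the paper's actual benchmark output, is the RMS difference of two codes' rupture-time fields sampled on a shared on-fault grid:

```python
import numpy as np

# Minimal example of a quantitative code-comparison metric: the RMS
# difference of rupture-time fields from two codes on the same grid.
# The fields below are synthetic placeholders, not benchmark output.
def rms_difference(field_a, field_b):
    a, b = np.asarray(field_a, dtype=float), np.asarray(field_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

x = np.linspace(0, 30e3, 31)        # along-strike positions, m
code_a = x / 3000.0                 # rupture times, s (3 km/s rupture front)
code_b = x / 3000.0 + 0.05          # second code, 50 ms systematic lag

print(f"RMS rupture-time difference: {rms_difference(code_a, code_b):.3f} s")
```

A full comparison would apply metrics of this kind to slip, slip rate, and stress time series as well, aggregated across benchmark stations.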

  16. 2011 Tohoku Earthquake and Japan's Nuclear Disaster - Implications for Indian Ocean Rim countries

    NASA Astrophysics Data System (ADS)

    Chadha, R. K.

    2011-12-01

The nuclear disaster in Japan after the M9.0 Tohoku earthquake on March 11, 2011 has elicited a global response to re-examine the safety of nuclear power plants from all angles, including natural hazards such as earthquakes and tsunamis. Several countries have undertaken safety audits of their nuclear programs in view of the experience in Japan. Tectonically speaking, countries located close to subduction zones, or in the direct line of impact of subduction zones, are the most vulnerable to earthquake or tsunami hazards, as these regions are the locale of great tsunamigenic earthquakes. The Japan disaster has also drawn attention to the possibility of great impacts on critical coastal structures from other ocean processes caused by ocean-atmosphere interactions, and from global warming and future sea-level rise. This is particularly true for island countries. The 2011 Tohoku earthquake will be remembered more for its nuclear tragedy and tsunami than for the earthquake itself. The disaster was a direct impact of a tsunami generated by the earthquake 130 km off the coast of Sendai in the Honshu region of Japan. The earthquake occurred at a depth of about 25 km below the ocean floor on a thrust fault, causing a displacement of more than 20 meters. In a few places, water is reported to have inundated areas up to 8-10 km inland. The height of the tsunami varied between 3 and 10 meters along the coast. Generally, earthquake damage to buildings and other structures occurs due to strong shaking, which is expressed in the form of ground acceleration 'g'. Although Peak Ground Accelerations (PGA) consistently exceeded 2g at several places from Sendai southwards, the structures at the Fukushima Daiichi Nuclear Power Plant did not collapse due to the earthquake. 
In the Indian Ocean Rim countries, India, Pakistan, and South Africa are the three countries where nuclear power plants are operational, few of them

  17. Evolution of Mass Movements near Epicentre of Wenchuan Earthquake, the First Eight Years

    PubMed Central

    Zhang, Shuai; Zhang, Limin; Lacasse, Suzanne; Nadim, Farrokh

    2016-01-01

It is increasingly clear that landslides represent a major cause of economic losses and deaths in earthquakes in mountainous regions. In the case of the Wenchuan earthquake, post-seismic cascading landslides continue to represent a major problem eight years on. Failure to anticipate the impact of cascading landslides could lead to unexpected losses of human lives and property. Previous studies tended to focus on separate landslide processes, with little attention paid to quantifying the long-term evolution of multiple processes or the evolution of mass movements. The very active mass movements near the epicentre of the Wenchuan earthquake provided us with a unique opportunity to understand the complex processes of evolving cascading landslides after a strong earthquake. This study budgets the mass movements on the hillslopes and in the channels in the first eight years since the Wenchuan earthquake and verifies mass conservation. A system illustrating the evolution and interactions of mass movements after a strong earthquake is proposed. PMID:27824077

  18. Living with earthquakes - development and usage of earthquake-resistant construction methods in European and Asian Antiquity

    NASA Astrophysics Data System (ADS)

    Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes

    2010-05-01

    outermost layer was treated this way, the core of the shrines was made of simple rectangular blocks. The system resisted both in-plane and out-of-plane shaking quite well, as proven by survival of many shrines for more than a millennium, and by fracturing of blocks instead of displacement during the 2006 Yogyakarta earthquake. Systematic use or disuse of known earthquake-resistant techniques in any one society depends on the perception of earthquake risk and on available financial resources. Earthquake-resistant construction practice is significantly more expensive than regular construction. Perception is influenced mostly by short individual and longer social memory. If earthquake recurrence time is longer than the preservation of social memory, if damaging quakes fade into the past, societies commit the same construction mistakes again and again. Length of the memory is possibly about a generation's lifetime. Events occurring less frequently than 25-30 years can be readily forgotten, and the risk of recurrence considered as negligible, not worth the costs of safe construction practices. (Example of recurring flash floods in Hungary.) Frequent earthquakes maintain safe construction practices, like the Java masonry technique throughout at least two centuries, and like the Fachwerk tradition on Modern Aegean Samos throughout 500 years of political and technological development. (OTKA K67583)

  19. An Educator's Resource Guide to Earthquakes and Seismology

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Lahr, J. C.; Butler, R.

    2007-12-01

When a major seismic event occurs, millions of people around the world want to understand what happened. This presents a challenge to many classroom science teachers not well versed in Earth science. In response to this challenge, teachers may try surfing the Internet to ferret out the basics. Following popular links can be time consuming and frustrating, so that the best use is not made of this "teachable moment." For isolated rural teachers with limited Internet access, surfing for information may not be a viable option. A partnership between EarthScope/USArray, the High Lava Plains Project (Carnegie Institution/Arizona State University), Portland State University, and isolated K-12 schools in rural SE Oregon generated requests for a basic "Teachers Guide to Earthquakes." To bridge the inequalities in information access and varied science backgrounds, EarthScope/USArray sponsored the development of a CD that would be a noncommercial repository of Earth and earthquake-related science resources. A subsequent partnership between the University of Portland, IRIS, the USGS, and Portland-area school teachers defined the needs and provided the focus to organize sample video lectures, PowerPoint presentations, new Earth-process animations, and activities on such a large range of topics that soon the capacity of a DVD was required. Information was culled from oft-referenced sources, always seeking clear descriptions of processes, basic classroom-tested instructional activities, and effective Web sites. Our format uses a master interactive PDF "book" that covers the basics, from the interior of the Earth and plate tectonics to seismic waves, with links to reference folders containing activities, new animations, and video demos. This work-in-progress DVD was initially aimed at the middle school Earth-science curriculum, but has application throughout K-16. Strong support has come from university professors wanting an organized collection of seismology resources. 
The DVD shows how

  20. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with the tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation, and possibly earthquakes, at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on the fault plane during the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (that is, the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This dependence is reasonable, considering the well-known relationship between stress and the b-value, and suggests that the probability of a tiny rock failure expanding into a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
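The link between b-value and the fraction of large events follows directly from the Gutenberg-Richter relation, P(M ≥ m | M ≥ m0) = 10^(−b(m − m0)); a small sketch with illustrative magnitudes (the specific b-values below are not from the study):

```python
# How a lower Gutenberg-Richter b-value raises the fraction of large events.
# Magnitudes and b-values are illustrative, not fitted values from the study.
def fraction_at_least(m, m0, b):
    """Fraction of events with magnitude >= m among those >= m0."""
    return 10 ** (-b * (m - m0))

m0, m_large = 5.0, 7.0
for b in (1.0, 0.9, 0.8):          # decreasing b, as under rising tidal stress
    frac = fraction_at_least(m_large, m0, b)
    print(f"b={b:.1f}: fraction of M>={m_large} events = {frac:.4f}")
```

Even a modest drop in b (1.0 to 0.8) here raises the fraction of M ≥ 7 events by a factor of about 2.5, which is the sense in which high tidal stress makes large earthquakes more probable.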

  1. Earthquakes and Volcanic Processes at San Miguel Volcano, El Salvador, Determined from a Small, Temporary Seismic Network

    NASA Astrophysics Data System (ADS)

    Hernandez, S.; Schiek, C. G.; Zeiler, C. P.; Velasco, A. A.; Hurtado, J. M.

    2008-12-01

The San Miguel volcano lies within the Central American volcanic chain in eastern El Salvador. The volcano has experienced at least 29 eruptions with a Volcanic Explosivity Index (VEI) of 2. Since 1970, however, eruptions have decreased in intensity to an average of VEI 1, with the most recent eruption occurring in 2002. Eruptions at San Miguel volcano consist mostly of central-vent and phreatic eruptions. A critical challenge related to the explosive nature of this volcano is to understand the relationships between precursory surface deformation, earthquake activity, and volcanic activity. In this project, we seek to determine subsurface structures within and near the volcano, relate the local deformation to these structures, and better understand the hazard that the volcano presents in the region. To accomplish these goals, we deployed a six-station broadband seismic network around San Miguel volcano in collaboration with researchers from the Servicio Nacional de Estudios Territoriales (SNET). This network operated continuously from 23 March 2007 to 15 January 2008 and had a high data recovery rate. The data were processed to determine earthquake locations, magnitudes, and, for some of the larger events, focal mechanisms. We obtained high-precision locations using a double-difference approach and identified at least 25 events near the volcano. Ongoing analysis will seek to identify earthquake types (e.g., long-period, tectonic, and hybrid events) that occurred in the vicinity of San Miguel volcano. These results will be combined with radar interferometric measurements of surface deformation in order to determine the relationship between surface and subsurface processes at the volcano.

  2. Earthquake Advisory Services: A prototype development project

    NASA Astrophysics Data System (ADS)

    Lagorio, H. J.; Levin, H.

    1980-10-01

    Development of the prototype Earthquake Advisory Service (EAS) is reported. The EAS is designed to provide direct technical assistance and written materials to advise people who wish to make informed decisions about earthquake hazard reduction in their residences. It is intended also to be adapted to local conditions by community-based agencies. The EAS prototype involved the testing of early assumptions about program implementation, establishment of a systematic methodology review process, and a review of published information pertinent to the project. Operational procedures of the program and the process leading to implementation guidelines are described.

  3. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, locating an earthquake and estimating its magnitude as quickly as possible. Later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that concerned governmental organizations could decide whether to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of the occurrence of a large earthquake and the major seismic intensities it caused, in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: - an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; - data telemetry networks via landlines and partly via a satellite communication link; - real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and - dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  4. Dynamic 3D simulations of earthquakes on en echelon faults

    USGS Publications Warehouse

    Harris, R.A.; Day, S.M.

    1999-01-01

    One of the mysteries of earthquake mechanics is why earthquakes stop. This process determines the difference between small and devastating ruptures. One possibility is that fault geometry controls earthquake size. We test this hypothesis using a numerical algorithm that simulates spontaneous rupture propagation in a three-dimensional medium and apply our knowledge to two California fault zones. We find that the size difference between the 1934 and 1966 Parkfield, California, earthquakes may be the product of a stepover at the southern end of the 1934 earthquake and show how the 1992 Landers, California, earthquake followed physically reasonable expectations when it jumped across en echelon faults to become a large event. If there are no linking structures, such as transfer faults, then strike-slip earthquakes are unlikely to propagate through stepovers >5 km wide. Copyright 1999 by the American Geophysical Union.

  5. Geochemical variation of groundwater in the Abruzzi region: earthquakes related signals?

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Chiodini, G.; Caliro, S.; Frondini, F.; Avino, R.; Minopoli, C.; Morgantini, N.

    2009-12-01

    The presence of a deep, inorganic source of CO2 has recently been recognized in Italy on the basis of the deeply derived carbon dissolved in groundwater. In particular, the regional map of CO2 Earth degassing shows that two large degassing structures affect the Tyrrhenian side of the Italian peninsula. The northern degassing structure (TRDS, Tuscan Roman degassing structure) includes Tuscany, Latium and part of Umbria (~30000 km2) and releases > 6.1 Mt/y of deeply derived CO2. The southern degassing structure (CDS, Campanian degassing structure) affects the Campania region (~10000 km2) and releases > 3.1 Mt/y of deeply derived CO2. The total CO2 released by TRDS and CDS (> 9.2 Mt/y) is globally significant, being ~10% of the estimated present-day total CO2 discharge from the subaerial volcanoes of the Earth. Comparison between the map of CO2 Earth degassing and the locations of Italian earthquakes highlights that the anomalous CO2 flux disappears abruptly in the Apennines along a narrow band where most of the seismicity is concentrated. A previous conceptual model proposed that in this area, at the eastern borders of the TRDS and CDS plumes, CO2 from the mantle wedge intrudes the crust and accumulates in structural traps, generating over-pressurized reservoirs. These over-pressurized CO2 levels can play a major role in triggering Apennine earthquakes by reducing fault strength and potentially controlling the nucleation, arrest, and recurrence of both micro and major (M>5) earthquakes. The 2009 Abruzzo earthquakes, like previous seismic crises in the Northern Apennines, occurred at the border of the TRDS, suggesting that deeply derived fluids may also have played a role in generating these earthquakes. In order to investigate this process, detailed hydro-geochemical campaigns started immediately after the main shock of 6 April 2009.
The surveys include the main springs of the area which were previously studied in

  6. Dual Megathrust Slip Behaviors of the 2014 Iquique Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Meng, L.; Huang, H.; Burgmann, R.; Ampuero, J. P.; Strader, A. E.

    2014-12-01

    The transition between seismic rupture and aseismic creep is of central interest for better understanding the mechanics of subduction processes. An M 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of northern Chile. This event was preceded by a 2-week-long foreshock sequence including an M 6.7 earthquake. Repeating earthquakes found among the foreshocks migrated toward the mainshock area, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations in the recurrence time of repeating earthquakes highlight the diverse seismic and aseismic slip behaviors of different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while other repeaters occurred both before and after the mainshock in the area complementary to the mainshock rupture. The spatial and temporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip in leading up to the mainshock and aftershock activity. Various finite fault models indicate that the coseismic slip generally occurred down-dip of the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show slow initiation with low-amplitude moment rate at low frequency (< 0.1 Hz), while back-projection shows a steady initiation at high frequency (> 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the high-frequency rupture remains within an area of low gravity anomaly, suggesting possible upper-crustal structures that promote high-frequency generation. Back-projection also shows an episode of reverse rupture propagation, which suggests a delayed failure of asperities in
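    Repeating earthquakes like those described here are typically identified by waveform similarity at a common station. A minimal sketch with a synthetic waveform pair; the 0.9 similarity threshold and the waveforms are illustrative assumptions, not the study's criteria:

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Peak normalized cross-correlation between two waveforms; repeater
    candidates are commonly declared above a high similarity threshold."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max()

t = np.linspace(0, 1, 200)
w1 = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)   # template: decaying wavelet
# Same source signature, small time shift, plus a little noise
w2 = np.roll(w1, 5) + 0.05 * np.random.default_rng(0).standard_normal(200)
print(max_norm_xcorr(w1, w2) > 0.9)  # highly similar: likely a "repeater"
```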

  7. Frequency-Dependent Tidal Triggering of Low Frequency Earthquakes Near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Xue, L.; Burgmann, R.; Shelly, D. R.

    2017-12-01

    The effect of small periodic stress perturbations on earthquake generation is not clear; however, the rate of low-frequency earthquakes (LFEs) near Parkfield, California has been found to be strongly correlated with solid Earth tides. Laboratory experiments and theoretical analyses show that the period of the imposed forcing and the source properties affect the sensitivity to triggering and the phase relation between the peak seismicity rate and the periodic stress, but frequency-dependent triggering has not been quantitatively explored in the field. Tidal forcing acts over a wide range of frequencies, so the sensitivity of LFEs to tidal triggering provides a good probe into the physical mechanisms affecting earthquake generation. In this study, we consider the tidal triggering of LFEs near Parkfield, California since 2001. We find that the LFE rate is correlated with tidal shear stress, normal stress rate and shear stress rate. The occurrence of LFEs can also be independently modulated by groups of tidal constituents at semi-diurnal, diurnal and fortnightly frequencies. The strength of the response of LFEs to the different tidal constituents varies between LFE families. Each LFE family has an optimal triggering frequency, which does not appear to be depth dependent or systematically related to other known properties. This suggests that the period of the applied forcing plays an important role in the triggering process, and that the interaction of the loading history with source region properties, such as friction, effective normal stress and pore fluid pressure, produces the observed frequency-dependent tidal triggering of LFEs.
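    Tidal modulation of event rates like this is commonly quantified with the Schuster test, which asks whether event phases relative to a tidal constituent are uniformly distributed. A minimal sketch on synthetic phases; the sample sizes and phase concentration are illustrative assumptions:

```python
import math, random

def schuster_p(phases):
    """Schuster test p-value: probability that the observed phase clustering
    of N events would arise by chance from uniformly distributed phases."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases)
    s = sum(math.sin(p) for p in phases)
    return math.exp(-(c * c + s * s) / n)

random.seed(1)
# Events with no tidal sensitivity vs. events clustered near tidal stress peaks
uniform_phases = [random.uniform(0, 2 * math.pi) for _ in range(500)]
clustered_phases = [random.gauss(0, 0.8) % (2 * math.pi) for _ in range(500)]
# A very small p-value indicates significant tidal modulation
print(schuster_p(uniform_phases), schuster_p(clustered_phases))
```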

  8. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the previous one and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. The first is to assess whether the clusters result purely by chance from a time-independent process that has no "memory," so that a future earthquake is equally likely immediately after the previous one or much later, and earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, under which the inter-event times have a uniform distribution when the memorylessness property holds. The second is a time-variable model in which the probability of an event increases with time until an earthquake happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. The probability of another earthquake is thus path dependent, i.e. it depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different.
The model parameters control the average time between events and the variation of the actual times around this average, so
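    The inverse-CDF transformation described above can be sketched directly: mapping inter-event times through the fitted exponential CDF makes a memoryless (Poisson) catalog uniform on [0, 1], which a Kolmogorov-Smirnov distance can check. The synthetic recurrence numbers below are illustrative, not the Parkfield or Pallet Creek data:

```python
import math, random

def uniform_transform(intervals):
    """Map inter-event times through the fitted exponential CDF; under a
    memoryless (Poisson) process the result is uniform on [0, 1]."""
    mean = sum(intervals) / len(intervals)
    return [1.0 - math.exp(-t / mean) for t in intervals]

def ks_uniform(u):
    """Kolmogorov-Smirnov distance of a sample from the uniform distribution."""
    u = sorted(u)
    n = len(u)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(u))

random.seed(2)
poisson_like = [random.expovariate(1 / 140.0) for _ in range(400)]   # memoryless
quasiperiodic = [140.0 + random.gauss(0, 15) for _ in range(400)]    # clock-like
# Small distance: consistent with memorylessness; large distance: rejected
print(ks_uniform(uniform_transform(poisson_like)),
      ks_uniform(uniform_transform(quasiperiodic)))
```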

  9. Global Review of Induced and Triggered Earthquakes

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Wilson, M.; Gluyas, J.; Julian, B. R.; Davies, R. J.

    2016-12-01

    Natural processes associated with very small incremental stress changes can modulate the spatial and temporal occurrence of earthquakes. These processes include tectonic stress changes, the migration of fluids in the crust, Earth tides, surface ice and snow loading, heavy rain, atmospheric pressure, sediment unloading and groundwater loss. It is thus unsurprising that large anthropogenic projects which may induce stress changes of a similar size also modulate seismicity. As human development accelerates and industrial projects become larger in scale and more numerous, the number of such cases is increasing. That mining and water-reservoir impoundment can induce earthquakes has been accepted for several decades. Now, concern is growing about earthquakes induced by activities such as hydraulic fracturing for shale-gas extraction and waste-water disposal via injection into boreholes. As hydrocarbon reservoirs enter their tertiary phases of production, seismicity may also increase there. The full extent of human activities thought to induce earthquakes is, however, much wider than generally appreciated. We have assembled as near complete a catalog as possible of cases of earthquakes postulated to have been induced by human activity. Our database contains a total of 705 cases and is probably the largest compilation made to date. We include all cases where reasonable arguments have been made for anthropogenic induction, even where these have been challenged in later publications. Our database presents the results of our search but leaves judgment about the merits of individual cases to the user. We divide anthropogenic earthquake-induction processes into: a) Surface operations, b) Extraction of mass from the subsurface, c) Introduction of mass into the subsurface, and d) Explosions. Each of these categories is divided into sub-categories. 
In some cases, categorization of a particular case is tentative because more than one anthropogenic activity may have preceded or been

  10. Real-time and rapid GNSS solutions from the M8.2 September 2017 Tehuantepec Earthquake and implications for Earthquake and Tsunami Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Hodgkinson, K. M.; Mattioli, G. S.

    2017-12-01

    In support of hazard research and Earthquake Early Warning (EEW) systems, UNAVCO operates approximately 800 RT-GNSS stations throughout western North America and Alaska (EarthScope Plate Boundary Observatory), Mexico (TLALOCNet), and the pan-Caribbean region (COCONet). Our system produces and distributes raw data (BINEX and RTCM3) and real-time Precise Point Positions via the Trimble PIVOT Platform (RTX). The M8.2 earthquake of 2017-09-08, located 98 km SSW of Tres Picos, Mexico, is the first great earthquake to occur within the UNAVCO RT-GNSS footprint, which allows for a rigorous analysis of our dynamic and static processing methods. The need for rapid geodetic solutions ranges from seconds (EEW systems) to several minutes (tsunami warning and NEIC moment tensor and finite fault models). Here we compare and quantify the performance of processing strategies for producing static offsets, moment tensors and geodetically determined finite fault models using data recorded during this event. We also compare the geodetic solutions with the USGS NEIC seismically derived moment tensors and finite fault models, including displacement waveforms generated from these models. We define kinematic post-processed solutions from GIPSY-OASISII (v6.4) with final orbits and clocks as a "best" case reference to evaluate the performance of our different processing strategies. We find that static displacements of a few centimeters or less are difficult to resolve in the real-time GNSS position estimates. The standard daily 24-hour solutions provide the highest-quality data set for determining coseismic offsets, but these solutions are delayed by at least 48 hours after the event.
Dynamic displacements, estimated in real-time, however, show reasonable agreement with final, post-processed position estimates, and while individual position estimates have large errors, the real-time solutions offer an excellent operational option for EEW systems, including the use of estimated peak-ground displacements or
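    The difficulty of resolving few-centimeter static offsets in noisy position time series can be sketched as a difference of pre- and post-event means. The noise level, epoch counts, and true offset below are assumed for illustration, not values from the study:

```python
import math, random

def static_offset(pre, post):
    """Coseismic offset as the difference of post- and pre-event position
    means, with a simple standard error (uncorrelated-noise assumption)."""
    mp = sum(pre) / len(pre)
    ma = sum(post) / len(post)
    var = lambda x, m: sum((v - m) ** 2 for v in x) / (len(x) - 1)
    se = math.sqrt(var(pre, mp) / len(pre) + var(post, ma) / len(post))
    return ma - mp, se

random.seed(3)
noise = 2.0                                             # cm, assumed scatter of
pre = [random.gauss(0.0, noise) for _ in range(300)]    # real-time positions
post = [random.gauss(1.0, noise) for _ in range(300)]   # 1 cm true offset
off, se = static_offset(pre, post)
print(round(off, 2), round(se, 2))  # a 1 cm offset barely exceeds its error
```

    With only a few epochs, as in a true real-time setting, the standard error grows well beyond a centimeter, which is why daily solutions resolve small offsets far better.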

  11. Long Period Earthquakes Beneath California's Young and Restless Volcanoes

    NASA Astrophysics Data System (ADS)

    Pitt, A. M.; Dawson, P. B.; Shelly, D. R.; Hill, D. P.; Mangan, M.

    2013-12-01

    The newly established USGS California Volcano Observatory has the broad responsibility of monitoring and assessing hazards at California's potentially threatening volcanoes, most notably Mount Shasta, Medicine Lake, Clear Lake Volcanic Field, and Lassen Volcanic Center in northern California; and Long Valley Caldera, Mammoth Mountain, and Mono-Inyo Craters in east-central California. Volcanic eruptions occur in California about as frequently as the largest San Andreas Fault Zone earthquakes: more than ten eruptions have occurred in the last 1,000 years, most recently at Lassen Peak (1666 C.E. and 1914-1917 C.E.) and Mono-Inyo Craters (c. 1700 C.E.). The Long Valley region (Long Valley caldera and Mammoth Mountain) underwent several episodes of heightened unrest over the last three decades, including intense swarms of volcano-tectonic (VT) earthquakes, rapid caldera uplift, and hazardous CO2 emissions. Both Medicine Lake and Lassen are subsiding at appreciable rates, and along with Clear Lake, Long Valley Caldera, and Mammoth Mountain, sporadically experience long period (LP) earthquakes related to the migration of magmatic or hydrothermal fluids. Worldwide, the last two decades have shown the importance of tracking LP earthquakes beneath young volcanic systems, as they often provide an indication of impending unrest or eruption. Herein we document the occurrence of LP earthquakes at several of California's young volcanoes, updating a previous study (Pitt et al., 2002, SRL). All events were detected and located using data from stations within the Northern California Seismic Network (NCSN). Event detection was spatially and temporally uneven across the NCSN in the 1980s and 1990s, but additional stations, adoption of the Earthworm processing system, and heightened vigilance by seismologists have improved the catalog over the last decade.
LP earthquakes are now relatively well-recorded under Lassen (~150 events since 2000), Clear Lake (~60 events), Mammoth Mountain

  12. The Alaska earthquake, March 27, 1964: lessons and conclusions

    USGS Publications Warehouse

    Eckel, Edwin B.

    1970-01-01

    One of the greatest earthquakes of all time struck south-central Alaska on March 27, 1964. Strong motion lasted longer than for most recorded earthquakes, and more land surface was dislocated, vertically and horizontally, than by any known previous temblor. Never before were so many effects on earth processes and on the works of man available for study by scientists and engineers over so great an area. The seismic vibrations, which directly or indirectly caused most of the damage, were but surface manifestations of a great geologic event: the dislocation of a huge segment of the crust along a deeply buried fault whose nature and even exact location are still subjects for speculation. Not only was the land surface tilted by the great tectonic event beneath it, with resultant seismic sea waves that traversed the entire Pacific, but an enormous mass of land and sea floor moved several tens of feet horizontally toward the Gulf of Alaska. Downslope mass movements of rock, earth, and snow were initiated. Subaqueous slides along lake shores and seacoasts, near-horizontal movements of mobilized soil (“landspreading”), and giant translatory slides in sensitive clay did the most damage and provided the most new knowledge as to the origin, mechanics, and possible means of control or avoidance of such movements. The slopes of most of the deltas that slid in 1964, and that produced destructive local waves, are still as steep or steeper than they were before the earthquake and hence would be unstable or metastable in the event of another great earthquake. Rockslide avalanches provided new evidence that such masses may travel on cushions of compressed air, but a widely held theory that glaciers surge after an earthquake has not been substantiated. Innumerable ground fissures, many of them marked by copious emissions of water, caused much damage in towns and along transportation routes. Vibration also consolidated loose granular materials. In some coastal areas, local

  13. Do weak global stresses synchronize earthquakes?

    NASA Astrophysics Data System (ADS)

    Bendick, R.; Bilham, R.

    2017-08-01

    Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage that the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
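    The integrate-and-fire analogy can be made concrete with the classic pulse-coupled oscillator model (Mirollo and Strogatz, 1990), in which even weak coupling pulls firing times together. All parameter values and the simple Euler time stepping below are illustrative assumptions:

```python
# Two identical leaky integrate-and-fire oscillators: each state charges
# toward a threshold of 1, and a firing oscillator kicks the other's state
# up by eps. Despite starting out of phase, weak pulse coupling draws their
# firing times together (synchronization).
S, gamma, eps, dt = 2.0, 1.0, 0.1, 1e-4
x = [0.0, 0.3]                     # deliberately out-of-phase initial states
fires = [[], []]                   # firing times of each oscillator
t = 0.0
while len(fires[0]) < 40 or len(fires[1]) < 40:
    for i in range(2):
        x[i] += (S - gamma * x[i]) * dt        # leaky charging toward threshold
    for i in range(2):
        if x[i] >= 1.0:
            x[i] = 0.0                          # reset after firing
            x[1 - i] = min(1.0, x[1 - i] + eps) # excitatory pulse to the other
            fires[i].append(t)
    t += dt
first_gap = abs(fires[0][0] - fires[1][0])
last_gap = abs(fires[0][-1] - fires[1][-1])
print(first_gap > last_gap)   # the gap between firing times shrinks
```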

  14. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median
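    In the spirit of the simple elastic models such a tool renders, the classic 2-D screw-dislocation solution for an infinite strike-slip fault slipping below a locking depth D gives surface displacement u(x) = (s/π)·arctan(x/D), which can then be wrapped into interferometric fringes. The slip, depth, and radar wavelength here are illustrative assumptions, and treating the displacement as entirely along the radar line of sight is a deliberate simplification:

```python
import math

slip = 0.5            # m of slip (assumed)
D = 5000.0            # m locking depth (assumed)
wavelength = 0.056    # m, C-band radar wavelength

def displacement(x):
    """Fault-parallel surface displacement (m) at distance x (m) from the fault."""
    return (slip / math.pi) * math.atan(x / D)

def wrapped_phase(x):
    """Interferometric phase in [0, 2*pi); one fringe per wavelength/2 of
    ground motion (line-of-sight simplification noted above)."""
    return (displacement(x) % (wavelength / 2)) * (4 * math.pi / wavelength)

# Far-field displacement approaches +/- slip/2 on either side of the fault
print(round(displacement(1e9), 3), round(displacement(-1e9), 3))  # → 0.25 -0.25
```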

  15. Dynamic modeling of stress evolution and crustal deformation associated with the seismogenic process of the 2008 Mw7.9 Wenchuan, China earthquake

    NASA Astrophysics Data System (ADS)

    Tao, W.; Wan, Y.; Wang, K.; Zeng, Y.; Shen, Z.

    2009-12-01

    We model stress evolution and crustal deformation associated with the seismogenic process of the 2008 Mw7.9 Wenchuan, China earthquake. This earthquake ruptured a section of the Longmen Shan fault, a listric fault separating the eastern Tibetan plateau on the northwest from the Sichuan basin on the southeast, with a predominantly thrust component on the southwest section of the fault. Different driving mechanisms have been proposed for the fault system: either channel flow in the lower crust, or lateral push from the eastern Tibetan plateau on the entire crust. A 2-D finite element model is devised to simulate the tectonic process and test the validity of each model. A layered viscoelastic medium is prescribed, constrained by seismological and other geophysical results and characterized by a weak lower crust on the Tibetan plateau side and a strong lower crust beneath the Sichuan basin. The interseismic, coseismic, and postseismic deformation processes are modeled under the constraints of GPS-observed deformation fields during these time periods. Our preliminary result shows that elastic strain energy accumulates mainly around the lower part of the locked section of the seismogenic fault during the interseismic period, implying a larger stress drop at the lower part than at the upper part of the locked section, assuming total release of the accumulated elastic stress during an earthquake. The coseismic stress change is largest in the near field in the hanging wall, offering an explanation for the extensive aftershock activity that occurred in the region after the Wenchuan mainshock. A more complete picture of stress evolution and of the interaction between the upper and lower crust during an earthquake cycle will be presented at the meeting.
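    The postseismic behavior of a weak viscoelastic lower crust in such models is governed by the Maxwell relaxation time τ = η/μ. The viscosity and shear modulus below are generic illustrative values, not those constrained by the study:

```python
import math

eta = 1e19        # Pa*s, assumed lower-crust viscosity
mu = 3e10         # Pa, assumed shear modulus
tau = eta / mu    # Maxwell relaxation time, seconds
years = tau / (365.25 * 24 * 3600)

def stress(t_years, sigma0=1e6):
    """Deviatoric stress decay after a step load: sigma(t) = sigma0*exp(-t/tau)."""
    return sigma0 * math.exp(-t_years / years)

print(round(years, 1))  # ~10.6 years for these illustrative values
```

    A relaxation time of roughly a decade is why postseismic GPS observations over years can discriminate between lower-crustal flow and whole-crust loading models.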

  16. How citizen seismology is transforming rapid public earthquake information: the example of LastQuake smartphone application and Twitter QuakeBot

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Etivant, C.; Roussel, F.; Mazet-Roux, G.; Steed, R.

    2014-12-01

    Smartphone applications have swiftly become one of the most popular tools for rapid delivery of earthquake information to the public. Wherever they are, users can be automatically informed when an earthquake has struck simply by setting a magnitude threshold and an area of interest. No need to browse the internet: the information reaches you automatically and instantaneously! One question remains: are the provided earthquake notifications always relevant for the public? A while after damaging earthquakes, many eyewitnesses delete the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and the majority of the notifications relate to unfelt earthquakes, only increasing anxiety among the population at each new update. Felt and damaging earthquakes are the ones of societal importance, even when of small magnitude. The LastQuake app and Twitter feed (QuakeBot) focus on these earthquakes that matter to the public by collating different information threads covering tsunamigenic, damaging and felt earthquakes. Non-seismic detections and macroseismic questionnaires collected online are combined to identify felt earthquakes regardless of their magnitude. Non-seismic detections include Twitter earthquake detections, developed by the USGS, in which the number of tweets containing the keyword "earthquake" is monitored in real time, and flashsourcing, developed by the EMSC, which detects traffic surges on its rapid earthquake information website caused by the natural convergence of eyewitnesses who rush to the Internet to investigate the cause of the shaking they have just felt. We will present the identification process for felt earthquakes, the smartphone application, and the 27 automatically generated tweets, and show how, by providing better public services, we collect more data from citizens.

  17. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    NASA Astrophysics Data System (ADS)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, methods for simulating earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law on the fault plane with the boundary integral method based on Green's functions in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), designed for supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations that include the rate- and state-dependent friction law, but we compute stress changes along the fault plane due to crustal deformation using FE simulation instead of superimposing slip response functions. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a canonical three-dimensional problem in which a circular velocity-weakening area is set within a square fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake.
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number
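    The rate- and state-dependent friction law at the core of such simulations can be integrated directly. A minimal sketch using the Dieterich aging law for the state variable; all parameter values are generic laboratory-style assumptions, not the study's:

```python
import math

mu0, a, b = 0.6, 0.010, 0.015   # b > a: velocity-weakening (unstable) friction
V0, Dc = 1e-6, 1e-5             # reference slip rate (m/s), critical distance (m)

def friction(V, theta):
    """Rate-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

# Step the slip rate from V0 to 10*V0 and evolve the state variable with the
# aging law, d(theta)/dt = 1 - V*theta/Dc, by forward Euler until steady state
V, theta, dt = 10 * V0, Dc / V0, 1e-3
for _ in range(200000):
    theta += (1.0 - V * theta / Dc) * dt
mu_ss = friction(V, theta)
# Steady state: theta -> Dc/V, so mu -> mu0 + (a - b)*ln(V/V0), below mu0
print(round(mu_ss, 4), round(mu0 + (a - b) * math.log(10), 4))  # → 0.5885 0.5885
```

    The steady-state friction dropping below mu0 when the slip rate rises is exactly the velocity-weakening behavior that lets such simulations nucleate stick-slip events.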

  18. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  19. Detailed observations of California foreshock sequences: Implications for the earthquake initiation process

    USGS Publications Warehouse

    Dodge, D.A.; Beroza, G.C.; Ellsworth, W.L.

    1996-01-01

    We find that foreshocks provide clear evidence for an extended nucleation process before some earthquakes. In this study, we examine in detail the evolution of six California foreshock sequences: the 1986 Mount Lewis (ML = 5.5), the 1986 Chalfant (ML = 6.4), the 1986 Stone Canyon (ML = 4.7), the 1990 Upland (ML = 5.2), the 1992 Joshua Tree (MW = 6.1), and the 1992 Landers (MW = 7.3) sequences. Typically, uncertainties in hypocentral parameters are too large to establish the geometry of foreshock sequences and hence to understand their evolution. However, the similarity of location and focal mechanisms for the events in these sequences leads to similar foreshock waveforms that we cross correlate to obtain extremely accurate relative locations. We use these results to identify small-scale fault zone structures that could influence nucleation and to determine the stress evolution leading up to the mainshock. In general, these foreshock sequences are not compatible with a cascading failure nucleation model in which the foreshocks all occur on a single fault plane and trigger the mainshock by static stress transfer. Instead, the foreshocks seem to concentrate near structural discontinuities in the fault and may themselves be a product of an aseismic nucleation process. Fault zone heterogeneity may also be important in controlling the number of foreshocks, i.e., the stronger the heterogeneity, the greater the number of foreshocks. The size of the nucleation region, as measured by the extent of the foreshock sequence, appears to scale with mainshock moment in the same manner as determined independently by measurements of the seismic nucleation phase. We also find evidence for slip localization as predicted by some models of earthquake nucleation. Copyright 1996 by the American Geophysical Union.

  20. The Rotational and Gravitational Effect of Earthquakes

    NASA Technical Reports Server (NTRS)

    Gross, Richard

    2000-01-01

    The static displacement field generated by an earthquake has the effect of rearranging the Earth's mass distribution and will consequently cause the Earth's rotation and gravitational field to change. Although the coseismic effect of earthquakes on the Earth's rotation and gravitational field has been modeled in the past, no unambiguous observations of this effect have yet been made. However, the Gravity Recovery And Climate Experiment (GRACE) satellite, which is scheduled to be launched in 2001, will measure time variations of the Earth's gravitational field to high degree and order with unprecedented accuracy. In this presentation, the modeled coseismic effect of earthquakes upon the Earth's gravitational field to degree and order 100 will be computed and compared to the expected accuracy of the GRACE measurements. In addition, the modeled second degree changes, corresponding to changes in the Earth's rotation, will be compared to length-of-day and polar motion excitation observations.

  1. Mexican Seismic Alert System's SAS-I algorithm review considering strong earthquakes felt in Mexico City since 1985

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Suarez, G.; Ibarrola Alvarez, G.; Ramos Perez, S.; Camarillo Barranco, L.

    2013-05-01

    The Seismic Alert System of Mexico (SASMEX) uses three alert-activation algorithms that take into account the distance between the seismic sensing field station (FS) and the city to be alerted when forecasting earthquake early warnings for the cities integrated into the system. In Mexico City, for example, the earthquakes that produced the highest accelerations originated on the Pacific Ocean coast, and the distance between this seismic region and the city favors the use of the algorithm called SAS-I. This algorithm, essentially unchanged since its introduction in 1991, employs the data generated by one or more FS from P-wave detection until S-wave detection, plus a period equal to the time taken to detect these phases; that is, double the S-P time, denoted 2*(S-P). Over this interval, the algorithm integrates squared samples from the FS triaxial accelerometer to obtain two parameters: the amplitude and the growth rate measured up to time 2*(S-P). SAS-I feeds these parameters into a magnitude classifier model built from time series of Guerrero coast earthquakes, referenced mainly to Mb magnitude, and activates a Public or Preventive Alert according to whether the model predicts a strong or moderate earthquake. The SAS-I algorithm has operated for over 23 years in the subduction zone of the Pacific coast of Mexico, initially in Guerrero and then in Oaxaca, and since March 2012 in the Pacific seismic region covering the coasts of Jalisco, Colima, Michoacan, Guerrero, and Oaxaca. Over this period it has issued 16 Public Alerts and 62 Preventive Alerts to Mexico City, where soil conditions amplify earthquake damage, as occurred in September 1985. This work reviews the SAS-I algorithm and the possible alerts it could generate from recordings of major earthquakes, detected by FS or seismometers near the events, originating on the Pacific Ocean coast that have been felt in Mexico
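The 2*(S-P) parameter extraction described above can be sketched as follows. This is a simplification assumed for illustration, not the operational SASMEX code: the function name, the single-component input, and the half-window growth-rate estimate are all hypothetical.

```python
def sas_parameters(accel, dt, t_p, t_s):
    """Sketch of SAS-I-style parameter extraction: integrate squared
    accelerometer samples from P detection over a window of 2*(S-P),
    and estimate a growth rate by comparing the two half-windows.
    `accel` is one acceleration component sampled every `dt` seconds;
    `t_p` and `t_s` are the P and S detection times in seconds."""
    window = 2.0 * (t_s - t_p)          # the 2*(S-P) interval
    i0 = round(t_p / dt)                # first sample after P detection
    n = round(window / dt)
    seg = accel[i0:i0 + n]
    energy = sum(x * x for x in seg) * dt          # "amplitude" parameter
    half = n // 2
    e1 = sum(x * x for x in seg[:half]) * dt
    e2 = sum(x * x for x in seg[half:]) * dt
    growth = e2 / e1 if e1 > 0 else float("inf")   # "growth rate" parameter
    return energy, growth
```

A real implementation would combine all three accelerometer components and feed both parameters to the magnitude classifier model.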

  2. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10¹⁰ to 1.9 × 10¹² N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10¹¹ N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
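The exponential moment-frequency model described above is easy to work with numerically. In this sketch (function names are assumed, not from the study), the maximum-likelihood estimate of the characteristic moment is simply the sample mean, and the model's survival function predicts counts above a moment threshold:

```python
import math

def characteristic_moment(moments):
    """MLE of the exponential distribution's mean: the characteristic
    moment is the sample mean of the observed seismic moments."""
    return sum(moments) / len(moments)

def expected_count_above(m0, n_total, m_char):
    """Expected number of LFEs with moment >= m0 under an exponential
    moment-frequency distribution with characteristic moment m_char."""
    return n_total * math.exp(-m0 / m_char)
```

Unlike the Gutenberg-Richter power law, the exponential survival function falls off rapidly above the characteristic moment, which is what makes the source process appear scale limited.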

  3. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence.

    PubMed

    Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe

    2017-09-01

    Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016-2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences.

  4. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence

    PubMed Central

    Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe

    2017-01-01

    Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016–2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences. PMID:28924610

  5. Towards coupled earthquake dynamic rupture and tsunami simulations: The 2011 Tohoku earthquake.

    NASA Astrophysics Data System (ADS)

    Galvez, Percy; van Dinther, Ylona

    2016-04-01

    The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a megathrust event. The seismic stations surrounding the Miyagi region (MYGH013) show two clearly distinct waveforms separated by 40 seconds, suggesting two rupture fronts, possibly due to slip reactivation caused by frictional melting and thermal fluid pressurization effects. We created a 3D dynamic rupture model to reproduce this rupture reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden sequential stress drops (Galvez et al., 2015). Our model starts like an M7-8 earthquake that barely breaks the trench; after 40 seconds a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The seismograms agree roughly with seismic records along the coast of Japan, and the resulting seafloor displacements are in agreement with 1 Hz GPS displacements (GEONET). The simulated seafloor displacement reaches 8-10 meters of uplift close to the trench, which may be the cause of the devastating tsunami that followed the Tohoku earthquake. To investigate the impact of such a large uplift, we ran tsunami simulations with the slip reactivation model, feeding the seafloor displacements into GeoClaw (a finite volume code for tsunami simulations; George and LeVeque, 2006). Our recent results compare well with the water heights at the tsunami DART buoys 21401, 21413, 21418, and 21419 and show the potential of using fully dynamic rupture results in tsunami studies of earthquake-tsunami scenarios.
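The "two sequential stress drops" friction idea can be caricatured as a slip-weakening curve with a second, delayed weakening stage. Note an important simplification: in the study the second drop emerges in time (about 40 s into rupture), whereas this sketch triggers it by accumulated slip, and every parameter value below is an illustrative assumption, not a value from the model.

```python
def two_stage_friction(slip, tau_s=10e6, tau_d1=6e6, tau_d2=2e6,
                       dc1=0.5, d_react=20.0, dc2=0.5):
    """Frictional strength (Pa) as a function of slip (m) for a
    slip-weakening law with a second, delayed stress drop, a crude
    stand-in for the slip reactivation mechanism."""
    if slip < dc1:                        # first weakening stage
        return tau_s - (tau_s - tau_d1) * slip / dc1
    if slip < d_react:                    # first dynamic strength level
        return tau_d1
    if slip < d_react + dc2:              # second, delayed weakening stage
        return tau_d1 - (tau_d1 - tau_d2) * (slip - d_react) / dc2
    return tau_d2                         # final dynamic strength level
```

The strength is non-increasing with slip, so each stage releases additional stored stress, which is what lets the second front produce the extra slip near the trench.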

  6. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Landslides

    USGS Publications Warehouse

    Keefer, David K.

    1998-01-01

    Central California, in the vicinity of San Francisco and Monterey Bays, has a history of fatal and damaging landslides, triggered by heavy rainfall, coastal and stream erosion, construction activity, and earthquakes. The great 1906 San Francisco earthquake (MS = 8.2-8.3) generated more than 10,000 landslides throughout an area of 32,000 km²; these landslides killed at least 11 people and caused substantial damage to buildings, roads, railroads, and other civil works. Smaller numbers of landslides, which caused more localized damage, have also been reported from at least 20 other earthquakes that have occurred in the San Francisco Bay-Monterey Bay region since 1838. Conditions that make this region particularly susceptible to landslides include steep and rugged topography, weak rock and soil materials, seasonally heavy rainfall, and active seismicity. Given these conditions and history, it was no surprise that the 1989 Loma Prieta earthquake generated thousands of landslides throughout the region. Landslides caused one fatality and damaged at least 200 residences, numerous roads, and many other structures. Direct damage from landslides probably exceeded $30 million; additional, indirect economic losses were caused by long-term landslide blockage of two major highways and by delays in rebuilding brought about by concern over the potential long-term instability of some earthquake-damaged slopes.

  7. Investigating landslides caused by earthquakes - A historical review

    USGS Publications Warehouse

    Keefer, D.K.

    2002-01-01

    Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on them are of limited usefulness; for example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and the relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  8. NASA Applied Sciences Disasters Program Support for the September 2017 Mexico Earthquakes

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Kirschbaum, D.; Torres-Perez, J. L.; Yun, S. H.; Owen, S. E.; Hua, H.; Fielding, E. J.; Liang, C.; Bekaert, D. P.; Osmanoglu, B.; Amini, R.; Green, D. S.; Murray, J. J.; Stough, T.; Struve, J. C.; Seepersad, J.; Thompson, V.

    2017-12-01

    The 8 September M 8.1 Tehuantepec and 19 September M 7.1 Puebla earthquakes were among the largest earthquakes recorded in Mexico. These two events caused widespread damage, affecting several million people and causing numerous casualties. A team of event coordinators in the NASA Applied Sciences Program activated soon after these devastating earthquakes in order to support decision makers in Mexico, using NASA modeling and international remote sensing capabilities to generate decision support products to aid in response and recovery. The NASA Disasters Program promotes the use of Earth observations to improve the prediction of, preparation for, response to, and recovery from natural and technological disasters. For these two events, the Disasters Program worked with Mexico's space agency (Agencia Espacial Mexicana, AEM) and the National Center for Prevention of Disasters (Centro Nacional de Prevención de Desastres, CENAPRED) to generate products to support response, decision-making, and recovery. Products were also provided to academic partners, technical institutions, and field responders to support response. In addition, the Program partnered with the US Geological Survey (USGS), the Office of Foreign Disaster Assistance (OFDA), and others in order to provide information to federal and domestic agencies that were supporting event response. Leveraging the expertise of investigators at NASA Centers, products such as landslide susceptibility maps, precipitation models, radar-based damage assessments, and surface deformation maps were generated and used by AEM, CENAPRED, and others during the event. These were used by AEM in collaboration with other government agencies in Mexico to make appropriate decisions for mapping damage, rescue and recovery, and informing the population regarding areas prone to potential risk. We will provide an overview of the response activities and data products generated in support of the earthquake response, partnerships with

  9. Glacial Earthquakes: Monitoring Greenland's Glaciers Using Broadband Seismic Data

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2017-12-01

    The Greenland ice sheet currently loses 400 Gt of ice per year, and up to half of that mass loss comes from icebergs calving from marine-terminating glaciers (Enderlin et al., 2014). Some of the largest icebergs produced by Greenland's glaciers generate magnitude 5 seismic signals when they calve. These glacial earthquakes are recorded by seismic stations around the world. Full-waveform inversion and analysis of glacial earthquakes provides a low-cost tool to identify where and when gigaton-sized icebergs calve, and to track this important mass-loss mechanism in near-real-time. Fifteen glaciers in Greenland are known to have produced glacial earthquakes, and the annual number of these events has increased by a factor of six over the past two decades (e.g., Ekström et al., 2006; Olsen and Nettles, 2017). Since 2000, the number of glacial earthquakes on Greenland's west coast has increased dramatically. Our analysis of three recent years of data shows that more glacial earthquakes occurred on Greenland's west coast from 2011 - 2013 than ever before. In some cases, glacial-earthquake force orientations allow us to identify which section of a glacier terminus produced the iceberg associated with a particular event. We are able to track the timing of major changes in calving-front orientation at several glaciers around Greenland, as well as progressive failure along a single calving front over the course of hours to days. Additionally, the presence of glacial earthquakes resolves a glacier's grounded state, as glacial earthquakes occur only when a glacier terminates close to its grounding line.

  10. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits as the available sequences are too short. The Parkfield sequence of M ≈ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur exactly in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain less than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is obtained from rescaled combination, however, with regard to the lognormal distribution.
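The rescaled-combination step described above is simple to state in code. This sketch (function name assumed) normalizes each recurrence-time sequence by its own mean interval before pooling, so that sequences with very different mean intervals become directly comparable; the paper's additional rescaling by Weibull exponents is omitted here.

```python
def rescale_and_combine(sequences):
    """Pool several recurrence-time sequences after dividing each by its
    own mean recurrence interval. The pooled sample has mean 1 by
    construction and can then be fit with a single distribution."""
    pooled = []
    for seq in sequences:
        mean = sum(seq) / len(seq)
        pooled.extend(t / mean for t in seq)
    return pooled
```

For example, the sequences [1, 2, 3] (mean 2) and [10, 20, 30] (mean 20) rescale to the same normalized intervals and can be analyzed as one six-event data set.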

  11. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
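A short-term-average/long-term-average detector applied to a per-minute tweet-count series can be sketched as follows. The window lengths, threshold, and function name here are illustrative assumptions, not the tuned USGS values.

```python
def sta_lta_detect(counts, sta_len=1, lta_len=60, threshold=10.0, eps=1e-3):
    """STA/LTA detector on a per-minute tweet-count time series.
    Returns the indices (minutes) where the ratio of the short-term
    average to the trailing long-term average exceeds the threshold."""
    detections = []
    for i in range(lta_len, len(counts)):
        sta = sum(counts[i - sta_len + 1: i + 1]) / sta_len   # current minute(s)
        lta = sum(counts[i - lta_len: i]) / lta_len           # background level
        if sta / (lta + eps) >= threshold:                    # eps avoids /0
            detections.append(i)
    return detections
```

A jump from a background of about 1 tweet per minute to about 150 per minute, as in the Morgan Hill example, drives the ratio far above any reasonable threshold within a single minute.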

  12. Earthquake Parameters Inferred from the Hoping River Pseudotachylyte, Taiwan

    NASA Astrophysics Data System (ADS)

    Korren, C.; Ferre, E. C.; Yeh, E. C.; Chou, Y. M.

    2014-12-01

    Taiwan, one of the most seismically active areas in the world, repeatedly experiences violent earthquakes, such as the 1999 Mw 7.6 Chi-Chi earthquake, in highly populated areas. The main island of Taiwan lies in the convergent tectonic region between the Eurasian Plate and the Philippine Sea Plate. Fault pseudotachylytes form by frictional melting along the fault plane during large seismic slip events and therefore constitute earthquake fossils. The width of a pseudotachylyte generation vein is a crude proxy for earthquake magnitude, and the attitude of oblique injection veins primarily reflects slip kinematics. Additional constraints on the seismic slip direction and slip sense can be obtained (1) from the principal axes of the magnetic fabric of generation veins and (2) from 3D tomographic analysis of vein geometry. A new pseudotachylyte locality discovered along the Hoping River offers an unparalleled opportunity to learn more about the Plio-Pleistocene paleoseismology and seismic kinematics of northeastern Taiwan. Field measurements of the orientations and relations of structural features yield a complex geometry of generation and injection veins. Pseudotachylytes were sampled for tomographic, magnetic fabric, and scanning electron microscope analyses. An oriented block of pseudotachylyte was sliced and then stitched into a 3D tomographic model using the ImageJ image-stack plug-in. Tomographic analysis shows that the pseudotachylyte veins originate from a single slip event at the sample scale. Average vein thickness ranges from 1 mm, proximal to areas with abundant injection veins, to 2 mm. Displacement calculated after Sibson's (1975) method, in which displacement equals 436 times the square of vein thickness (both in centimeters), yields a range from 4.36 cm to 17.44 cm. Such displacements typify earthquakes of magnitude less than 5. However, this crude estimate of displacement requires further discussion.
Comparison of the displacements calculated by different methodologies may further
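The Sibson (1975) displacement estimate used above is simple arithmetic. This sketch (function name assumed) reproduces the 4.36-17.44 cm range quoted for 1-2 mm veins, with thickness and displacement both expressed in centimeters:

```python
def sibson_displacement_cm(thickness_cm):
    """Sibson (1975) empirical scaling for fault pseudotachylytes:
    seismic slip (cm) ~ 436 * (mean generation-vein thickness in cm)**2."""
    return 436.0 * thickness_cm ** 2
```

A 1 mm (0.1 cm) vein gives 4.36 cm of slip and a 2 mm (0.2 cm) vein gives 17.44 cm, matching the range reported in the abstract.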

  13. Chapter B. The Loma Prieta, California, Earthquake of October 17, 1989 - Forecasts

    USGS Publications Warehouse

    Harris, Ruth A.

    1998-01-01

    The magnitude (Mw) 6.9 Loma Prieta earthquake struck the San Francisco Bay region of central California at 5:04 p.m. PDT on October 17, 1989, killing 62 people and generating billions of dollars in property damage. Scientists were not surprised by the occurrence of a destructive earthquake in this region and had, in fact, been attempting to forecast the location of the next large earthquake in the San Francisco Bay region for decades. This paper summarizes more than 20 scientifically based forecasts made before the 1989 Loma Prieta earthquake for a large earthquake that might occur in the Loma Prieta area. The forecasts geographically closest to the actual earthquake primarily consisted of right-lateral strike-slip motion on the San Andreas Fault northwest of San Juan Bautista. Several of the forecasts did encompass the magnitude of the actual earthquake, and at least one approximately encompassed the along-strike rupture length. The 1989 Loma Prieta earthquake differed from most of the forecasted events in two ways: (1) it occurred with considerable dip-slip in addition to strike-slip motion, and (2) it was much deeper than expected.

  14. Earthquake hazard and risk assessment based on Unified Scaling Law for Earthquakes: Greater Caucasus and Crimea

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.; Nekrasova, Anastasia K.

    2018-05-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE is the empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use the parameters A, B, and C of the USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, and then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macroseismic intensity). After rigorous verification against the available seismic evidence from the past (usually, the observed instrumental PGA or the historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure (e.g., based on population census data and building inventories). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of the Greater Caucasus and Crimea.
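The USLE relationship quoted above translates directly into code. This is a sketch with a hypothetical function name and purely illustrative coefficient values; the actual A, B, and C are estimated region by region.

```python
import math

def usle_annual_rate(magnitude, length_km, a, b, c):
    """Unified Scaling Law for Earthquakes:
    log10 N(M, L) = A + B * (5 - M) + C * log10(L),
    where N(M, L) is the expected annual number of magnitude-M
    earthquakes within an area of linear dimension L (km)."""
    return 10.0 ** (a + b * (5.0 - magnitude) + c * math.log10(length_km))
```

With illustrative coefficients A = 0, B = 1, C = 1, a magnitude-6 event in a 10 km zone is expected about once a year, and each unit decrease in magnitude multiplies the expected rate by ten (the Gutenberg-Richter limit of the law).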

  15. Machine Learning Seismic Wave Discrimination: Application to Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Li, Zefeng; Meier, Men-Andrin; Hauksson, Egill; Zhan, Zhongwen; Andrews, Jennifer

    2018-05-01

    Performance of earthquake early warning systems suffers from false alerts caused by local impulsive noise from natural or anthropogenic sources. To mitigate this problem, we train a generative adversarial network (GAN) to learn the characteristics of first-arrival earthquake P waves, using 300,000 waveforms recorded in southern California and Japan. We apply the GAN critic as an automatic feature extractor and train a Random Forest classifier with about 700,000 earthquake and noise waveforms. We show that the discriminator can recognize 99.2% of the earthquake P waves and 98.4% of the noise signals. This state-of-the-art performance is expected to significantly reduce the number of false triggers from local impulsive noise. Our study demonstrates that GANs can discover a compact and effective representation of seismic waves, which has the potential for wide applications in seismology.

  16. Earthquakes as collapse precursors at the Han-sur-Lesse Cave in the Belgian Ardennes

    NASA Astrophysics Data System (ADS)

    Camelbeeck, Thierry; Quinif, Yves; Verheyden, Sophie; Vanneste, Kris; Knuts, Elisabeth

    2018-05-01

    Collapse activation is an ongoing process in the evolution of karstic networks related to the weakening of cave vaults. Because collapses are infrequent, few have been directly observed, making it challenging to evaluate the role of external processes in their initiation and triggering. Here, we study the two most recent collapses in the Dôme chamber of the Han-sur-Lesse Cave (Belgian Ardennes) that occurred on or shortly after 3rd December 1828 and between the 13th and 14th of March 1984. Because of the low probability that the two earthquakes that generated the strongest ground motions in Han-sur-Lesse since 1800, on 23rd February 1828 (Mw = 5.1 in Central Belgium) and 8th November 1983 (Mw = 4.8 in Liège), occurred by coincidence less than one year before these collapses, we suggest that the collapses are related to these earthquakes. We argue that the earthquakes accelerated the cave vault instability, leading to the collapses by the action of other factors weakening the host rock. In particular, the 1828 collapse was likely triggered by a smaller nearby Mw = 4.2 earthquake. The 1984 collapse followed two months of heavy rainfall that would have increased water infiltration and pressure in the rock mass, favoring destabilization of the cave ceiling. Lamina counting of a stalagmite growing on the 1828 debris dates the collapse at 1826 ± 9 CE, demonstrating the possibility of dating previous collapses with a few years of uncertainty. Furthermore, our study opens new perspectives for studying collapses and their chronology both in the Han-sur-Lesse Cave and in other karstic networks. We suggest that earthquake activity could play a stronger role than previously thought in initiating cave collapses.

  17. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, both to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those that could occur in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, using macroseismic observations and maps from past earthquakes in Algeria; and (2) calculation of country-specific vulnerability modifiers, using past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, based on the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by means of "as-if" historical scenario simulations of three past earthquakes in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli events. The calculated return periods of the losses for client market portfolio align with the
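    The rebuilding cost factors quoted above (10/20/35/75/100% for EMS-1998 damage grades 1-5) imply a simple expected-loss calculation of the kind catastrophe models perform per location. A minimal sketch; the portfolio value and damage-grade probabilities below are hypothetical, not figures from the Algeria model:

```python
# Rebuilding cost factors per EMS-1998 damage grade, as stated in the abstract.
COST_FACTORS = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def expected_loss(total_value, grade_probabilities):
    """Expected monetary loss for a portfolio of replacement value total_value.
    grade_probabilities maps damage grade -> probability that a building
    reaches that grade; any remaining probability mass is 'no damage' (0 cost)."""
    return total_value * sum(
        COST_FACTORS[g] * p for g, p in grade_probabilities.items()
    )

# Hypothetical 100 M portfolio with an illustrative damage distribution:
loss = expected_loss(100e6, {1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05, 5: 0.01})
```

Summing factor-weighted probabilities gives a mean damage ratio (here 0.1525), which scales the insured value to an expected loss.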

  18. Mexican Earthquakes and Tsunamis Catalog Reviewed

    NASA Astrophysics Data System (ADS)

    Ramirez-Herrera, M. T.; Castillo-Aja, R.

    2015-12-01

    Today the availability of information on the internet makes online catalogs very easy to access for both scholars and the general public. The catalog in the "Significant Earthquake Database", managed by the National Centers for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to the earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in the places where a disaster occurred, and on the permanence of the description in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we made in upgrading and refining data for the earthquake and tsunami catalog of Mexico from 1500 CE until today, presented in the format of a table and map. Data analysis allowed us to identify the following sources of error in the location of epicenters in existing catalogs:
    • Incorrect coordinate entry
    • Erroneous or mistaken place names
    • Data too general to locate the epicenter, mainly for older earthquakes
    • Inconsistency between earthquake and tsunami occurrence: epicenters located too far inland reported as tsunamigenic
    The process of completing the catalogs directly depends on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present new earthquake and

  19. iOS and OS X Apps for Exploring Earthquake Activity

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.

    2015-12-01

    The U.S. Geological Survey and many other agencies rapidly provide information following earthquakes. This timely information garners great public interest and provides a rich opportunity to engage students in discussion and analysis of earthquakes and tectonics. In this presentation I will describe a suite of iOS and Mac OS X apps that I use for teaching and that Penn State employs in outreach efforts in a small museum run by the College of Earth and Mineral Sciences. The iOS apps include epicentral, a simple global overview of earthquake activity designed for quick review or event lookup. A more full-featured iPad app, epicentral-plus, includes the simple global overview along with views that allow more detailed exploration of geographic regions of interest. In addition, epicentral-plus allows the user to monitor ground motions using seismic channel lists compatible with the IRIS web services. Some limited seismogram processing features are included to allow focus on appropriate signal bandwidths. A companion web site, which includes background material on earthquakes and a blog with sample images and channel lists appropriate for monitoring regions of recent earthquake activity, can be accessed through a third panel in the app. I use epicentral-plus at the beginning of each earthquake seismology class to review recent earthquake activity and to stimulate students to formulate and ask questions that lead to discussions of earthquake and tectonic processes. Less interactive OS X versions of the apps are used to display a global map of earthquake activity and seismograms in near real time in a small museum on the ground floor of the building hosting Penn State's Geoscience Department.

  20. Data mining of atmospheric parameters associated with coastal earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, Guido

    Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause damage worth billions of dollars, destroy natural landmarks and render large territories uninhabitable. Studying earthquakes, and the processes that govern their occurrence, is of fundamental importance for protecting lives, property and the environment. Recent studies have shown that anomalous changes in land, ocean and atmospheric parameters occur prior to earthquakes. The present dissertation introduces an innovative methodology, and its implementation, to identify anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere and the atmosphere. The proposed methodology is a multi-strategy data mining approach that combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One-dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to the earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2000 km²), to discriminate signals that are most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which track a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology.
CQuake is a fully automated, real time semi-operational system, developed to

  1. Crustal earthquake triggering by pre-historic great earthquakes on subduction zone thrusts

    USGS Publications Warehouse

    Sherrod, Brian; Gomberg, Joan

    2014-01-01

    Triggering of earthquakes on upper plate faults during and shortly after recent great (M>8.0) subduction thrust earthquakes raises concerns about earthquake triggering following Cascadia subduction zone earthquakes. Of particular concern for Cascadia was the previously noted, but only qualitatively identified, clustering of M>~6.5 crustal earthquakes in the Puget Sound region between about 1200–900 cal yr B.P., and the possibility that it was triggered by a great Cascadia subduction thrust earthquake and therefore portends future such clusters. We confirm quantitatively the extraordinary nature of the Puget Sound region crustal earthquake clustering between 1200–900 cal yr B.P., at least over the last 16,000 years. We conclude that this cluster was not triggered by the penultimate, and possibly full-margin, great Cascadia subduction thrust earthquake. However, we also show that the paleoseismic record for Cascadia is consistent with the conclusions of our companion study of the global modern record outside Cascadia: that M>8.6 subduction thrust events have a high probability of triggering one or more M>~6.5 crustal earthquakes.

  2. Dynamic strains for earthquake source characterization

    USGS Publications Warehouse

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.

  3. Heart attacks and the Newcastle earthquake.

    PubMed

    Dobson, A J; Alexander, H M; Malcolm, J A; Steele, P L; Miles, T A

    To test the hypothesis that stress generated by the Newcastle earthquake led to increased risk of heart attack and coronary death. A natural experiment. People living in the Newcastle and Lake Macquarie local government areas of New South Wales, Australia. At 10.27 a.m. on 28 December 1989 Newcastle was struck by an earthquake measuring 5.6 on the Richter scale. Myocardial infarction and coronary death defined by the criteria of the WHO MONICA Project and hospital admissions for coronary disease before and after the earthquake and in corresponding periods in previous years. Well established, concurrent data collection systems were used. There were six fatal myocardial infarctions and coronary deaths among people aged under 70 years after the earthquake in the period 28-31 December 1989. Compared with the average number of deaths at this time of year this was unusually high (P = 0.016). Relative risks for this four-day period were: fatal myocardial infarction and coronary death, 1.67 (95% confidence interval [CI]: 0.72, 3.17); non-fatal definite myocardial infarction, 1.05 (95% CI: 0.05, 2.22); non-fatal possible myocardial infarction, 1.34 (95% CI: 0.67, 1.91); hospital admissions for myocardial infarction or other ischaemic heart disease, 1.27 (95% CI: 0.83, 1.66). There was no evidence of increased risk during the following four months. The magnitude of increased risk of death was slightly less than that previously reported after earthquakes in Greece. The data provide weak evidence that acute emotional and physical stress may trigger myocardial infarction and coronary death.
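    The comparison of the observed four-day death count against the seasonal average is a one-sided count test. A minimal stdlib sketch of a Poisson upper-tail probability; the historical mean used in the example is hypothetical, not a value from the study:

```python
import math

def poisson_upper_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam): the probability of observing at least
    k events when the historical mean for the comparison period is lam."""
    return 1.0 - sum(
        math.exp(-lam) * lam**i / math.factorial(i) for i in range(k)
    )

# e.g. six observed deaths against a hypothetical mean of about 2 deaths
# per comparable four-day period:
p_value = poisson_upper_tail(6, 2.0)
```

A small upper-tail probability indicates the observed count is unusually high relative to the historical baseline, which is the logic behind the study's P = 0.016 result.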

  4. Operational Earthquake Forecasting and Earthquake Early Warning: The Challenges of Introducing Scientific Innovations for Public Safety

    NASA Astrophysics Data System (ADS)

    Goltz, J. D.

    2016-12-01

    Although variants of both earthquake early warning and short-term operational earthquake forecasting systems have been implemented, or are now being implemented, in some regions and nations, they have been slow to gain acceptance within the disciplines that produced them as well as among those they were intended to assist. Accelerating the development and implementation of these technologies will require the cooperation and collaboration of multiple disciplines, some inside and others outside of academia. Seismologists, social scientists, emergency managers, elected officials and key opinion leaders from the media and the public must participate in this process. Representatives of these groups come from both inside and outside of academia and represent very different organizational cultures, backgrounds and expectations for these systems, sometimes leading to serious disagreements and impediments to further development and implementation. This presentation will focus on examples of the emergence of earthquake early warning and operational earthquake forecasting systems in California, Japan and other regions, and will document the challenges confronted in the ongoing effort to improve seismic safety.

  5. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area covering Japan including offshore areas, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study to construct a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity of the area ranges from the shallow crust down to a depth of 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop models for forecasting based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HISTETAS models (Ogata, 2011) to determine whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation).
We use CSEP

  6. Broadband Analysis of the Energetics of Earthquakes and Tsunamis in the Sunda Forearc from 1987-2012

    NASA Astrophysics Data System (ADS)

    Choy, G. L.; Kirby, S. H.; Hayes, G. P.

    2013-12-01

    In the eighteen years before the 2004 Sumatra Mw 9.1 earthquake, the forearc off Sumatra experienced only one large (Mw > 7.0) thrust event and no earthquakes that generated measurable tsunami wave heights. In the subsequent eight years, twelve large thrust earthquakes occurred, of which half generated measurable tsunamis. The number of broadband earthquakes (events with Mw > 5.5 for which broadband teleseismic waveforms have sufficient signal to compute depths, focal mechanisms, moments and radiated energies) jumped sixfold after 2004. The progression of tsunami earthquakes, as well as the profuse increase in broadband activity, strongly suggests regional stress adjustments following the Sumatra 2004 megathrust earthquake. Broadband source parameters, published routinely in the Source Parameters (SOPAR) database of the USGS's NEIC (National Earthquake Information Center), have provided the most accurate depths and locations of large earthquakes since the implementation of modern digital seismographic networks. Moreover, radiated energy and seismic moment (also found in SOPAR) are related to apparent stress, which is a measure of fault maturity. In mapping apparent stress as a function of depth and focal mechanism, we find that about 12% of broadband thrust earthquakes in the subduction zone are unequivocally above or below the slab interface. Apparent stresses of upper-plate events are associated with failure on mature splay faults, some of which generated measurable tsunamis. One unconventional source of local wave heights was a large intraslab earthquake. High-energy upper-plate events, which are dominant in the Aceh Basin, are associated with immature faults, which may explain why the region was bypassed by significant rupture during the 2004 Sumatra earthquake. The majority of broadband earthquakes are non-randomly concentrated under the outer-arc high.
They appear to delineate the periphery of the contiguous rupture zones of large earthquakes
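    The link between radiated energy and seismic moment mentioned above is conventionally expressed as the apparent stress, tau_a = mu * E_R / M0. A small sketch; the rigidity value is an assumed typical crustal figure, not a parameter from SOPAR:

```python
MU = 3.0e10  # Pa; a typical crustal shear modulus (assumed)

def apparent_stress(radiated_energy, seismic_moment, mu=MU):
    """Apparent stress tau_a = mu * E_R / M0, with radiated energy E_R in
    joules and seismic moment M0 in newton-metres; the abstract uses this
    quantity as a proxy for fault maturity."""
    return mu * radiated_energy / seismic_moment

# Hypothetical event: M0 ~ 3.5e19 N·m with E_R ~ 5e14 J gives tau_a on the
# order of a few hundred kPa, in the usual range for apparent stress.
tau_a = apparent_stress(5e14, 3.5e19)
```

Because both E_R and M0 are routinely published per event, apparent stress can be mapped directly from catalog values, as the abstract describes.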

  7. New ideas about the physics of earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Klein, William

    1995-07-01

    It may be no exaggeration to claim that this most recent quadrennium has seen more controversy, and thus more progress, in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994) and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging, focused on the development and analysis of large-scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.

  8. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: Altai-Sayan Region

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Nekrasova, A.

    2017-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different sizes in a seismic region. The USLE is the empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use the parameters A, B, and C of the USLE to estimate, first, the expected maximum credible magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, and then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After rigorous testing against the available seismic evidence from the past (usually the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure (e.g., those based on a census of population or a building inventory). This USLE-based methodology of seismic hazard and risk assessment is applied to the territory of the Altai-Sayan region of Russia. The study was supported by Russian Science Foundation Grant No. 15-17-30020.

  9. The Extended Concept Of Symmetropy And Its Application To Earthquakes And Acoustic Emissions

    NASA Astrophysics Data System (ADS)

    Nanjo, K.; Yodogawa, E.

    2003-12-01

    Symmetropy is a notion that can be regarded as a powerful tool for quantitatively measuring the entropic heterogeneity of a pattern with respect to symmetry, and as a quantitative measure for extracting the asymmetry of a pattern (Yodogawa, 1982; Nanjo et al., 2000, 2001, 2002 in press). In previous studies, symmetropy was estimated for the spatial distributions of acoustic emissions generated before the ultimate whole-specimen fracture of a rock sample in laboratory experiments, and for the spatial distributions of earthquakes in a seismic source model with self-organized criticality (SOC). In each of these estimations, the outline of the region in which symmetropy is estimated for a pattern is taken to be that of the rock specimen in which the acoustic emissions are generated, or that of the SOC seismic source model from which the earthquakes emerge. When local seismicity such as aftershocks, foreshocks and earthquake swarms in the Earth's crust is considered, it is difficult to determine the outline of the region characterizing this local seismicity without subjectivity. The original concept of symmetropy is therefore not appropriate for direct application to such local seismicity, and a proper modification is needed. Here, we introduce the notion of symmetropy for the nonlinear geosciences and extend it for application to local seismicity such as aftershocks, foreshocks and earthquake swarms. We apply the extended concept to the spatial distributions of acoustic emissions generated in a previous laboratory experiment in which the failure process in a brittle granite sample was stabilized by controlling axial stress to maintain a constant rate of acoustic emissions, providing, as a result, a detailed view of fracture nucleation and growth. Moreover, it is applied to the temporal variations of spatial distributions of aftershocks and foreshocks of the main shocks

  10. Spatial and size distributions of garnets grown in a pseudotachylyte generated during a lower crust earthquake

    NASA Astrophysics Data System (ADS)

    Clerc, Adriane; Renard, François; Austrheim, Håkon; Jamtveit, Bjørn

    2018-05-01

    In the Bergen Arc, western Norway, rocks exhumed from the lower crust record earthquakes that occurred during the Caledonian collision. These earthquakes occurred at about 30-50 km depth under granulite or amphibolite facies metamorphic conditions, and coseismic frictional heating produced pseudotachylytes in this area. We use field data to infer earthquake magnitude (M ≥ 6.6) and low dynamic friction during rupture propagation (μd < 0.1), and laboratory analyses to infer fast crystallization of microlites in the pseudotachylyte, within seconds of earthquake arrest. High-resolution 3D X-ray microtomography imaging reveals the microstructure of a pseudotachylyte sample, including numerous garnets and their coronae of plagioclase that we infer crystallized in the pseudotachylyte. These garnets 1) have dendritic shapes and are surrounded by plagioclase coronae almost fully depleted in iron, 2) have a log-normal volume distribution, 3) increase in volume with increasing distance from the pseudotachylyte-host rock boundary, and 4) decrease in number with increasing distance from the pseudotachylyte-host rock boundary. These characteristics indicate fast mineral growth, likely within seconds. We propose that these new quantitative criteria may assist in the unambiguous identification of pseudotachylytes in the field.

  11. Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.

    2012-12-01

    Constructing source models of huge subduction earthquakes is a critically important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of multiple strong motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size for an anticipated earthquake can be assumed from that empirical relation given the seismic moment. Concerning the positions of the SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, with the Nojima, Suma, and Suwayama segments each having one SMGA in the SMGA modeling (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which shows the applicability of the empirical scaling relationship for the SMGA. Two SMGAs are in the Miyagi-Oki segment, and the other two are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the historical source areas of the 1930s. Those SMGAs do not overlap the huge-slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data during the 2011 mainshock. This fact shows that the huge-slip area does not contribute to strong ground motion generation (10-0.1 s). The information on fault segmentation in the subduction zone, or

  12. Impact of earthquakes on sex ratio at birth: Eastern Marmara earthquakes

    PubMed Central

    Doğer, Emek; Çakıroğlu, Yiğit; Köpük, Şule Yıldırım; Ceylan, Yasin; Şimşek, Hayal Uzelli; Çalışkan, Eray

    2013-01-01

    Objective: Previous reports suggest that maternal exposure to acute stress related to earthquakes affects the sex ratio at birth. Our aim was to examine the change in the sex ratio at birth after the Eastern Marmara earthquake disasters. Material and Methods: This study was performed using the official birth statistics from January 1997 to December 2002 – before and after 17 August 1999, the date of the Golcuk Earthquake – supplied by the Turkey Statistics Institute. The secondary sex ratio was expressed as the male proportion at birth, and the ratios for both affected and unaffected areas were calculated and compared on a monthly basis using the chi-square test. Results: We observed significant decreases in the secondary sex ratio in the 4th and 8th months following the earthquake in the affected region compared to the unaffected region (p = 0.001 and p = 0.024). In the earthquake region, the decrease observed in the secondary sex ratio during the 8th month after the earthquake was specific to the period after the earthquake. Conclusion: Our study indicated a significant reduction in the secondary sex ratio after an earthquake. These findings suggest that events causing sudden intense stress, such as earthquakes, can have an effect on the sex ratio at birth. PMID:24592082

  13. Analysis of post-earthquake landslide activity and geo-environmental effects

    NASA Astrophysics Data System (ADS)

    Tang, Chenxiao; van Westen, Cees; Jetten, Victor

    2014-05-01

    Large earthquakes can cause huge losses to human society, due to ground shaking, fault rupture, and the high density of co-seismic landslides that can be triggered in mountainous areas. In areas affected by such large earthquakes, the threat of landslides continues after the earthquake, as the co-seismic landslides may be reactivated by high-intensity rainfall events. Huge amounts of landslide material remain on the slopes, leading to a high frequency of landslides and debris flows after earthquakes, which threaten lives and create great difficulties for post-seismic reconstruction in the earthquake-hit regions. Without critical information such as the frequency and magnitude of landslides after a major earthquake, reconstruction planning and hazard mitigation are difficult. The area hit by the Mw 7.9 Wenchuan earthquake in 2008, Sichuan Province, China, shows some typical examples of poor reconstruction planning due to lack of information: huge debris flows destroyed several reconstructed settlements. This research aims to analyze the decay in post-seismic landslide activity in areas that have been hit by a major earthquake, taking the area hit by the 2008 Wenchuan earthquake as the study area. The study will analyze the factors that control post-earthquake landslide activity through quantification of landslide volume changes as well as through numerical simulation of the initiation process, to obtain a better understanding of the potential threat of post-earthquake landslides as a basis for mitigation planning. The research will make use of high-resolution stereo satellite images, UAVs, and Terrestrial Laser Scanning (TLS) to obtain multi-temporal DEMs to monitor the change of loose sediments and post-seismic landslide activity.
A debris flow initiation model that incorporates the volume of source materials, vegetation re-growth, and the intensity-duration of the triggering precipitation, and that evaluates
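A common ingredient of such initiation models is a rainfall intensity-duration threshold of the form I = a * D**(-b). The coefficients below are illustrative only; real thresholds are fitted per region and evolve as loose material is depleted and vegetation regrows:

```python
def id_threshold_intensity(duration_h, a=10.0, b=0.77):
    """Rainfall intensity-duration threshold I = a * D**(-b), in mm/h.
    a and b are hypothetical values; regional studies fit them to observed
    triggering and non-triggering storms."""
    return a * duration_h ** (-b)

def may_trigger(intensity_mm_h, duration_h):
    """True when a storm of this mean intensity and duration exceeds the
    (assumed) initiation threshold."""
    return intensity_mm_h >= id_threshold_intensity(duration_h)
```

Post-seismic decay of landslide activity can then be represented by raising the threshold (or lowering a) year by year as the sediment supply is exhausted.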

  14. Validation of Atmosphere/Ionosphere Signals Associated with Major Earthquakes by Multi-Instrument Space-Borne and Ground Observations

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Parrot, Michel; Liu, J. Y.; Yang, T. F.; Arellano-Baeza, Alonso; Kafatos, M.; Taylor, Patrick

    2012-01-01

    The latest catastrophic earthquake in Japan (March 2011) has renewed interest in the important question of the existence of pre-earthquake anomalous signals related to strong earthquakes. Recent studies have shown that there were precursory atmospheric/ionospheric signals observed in space associated with major earthquakes. The critical question, still widely debated in the scientific community, is whether such ionospheric/atmospheric signals systematically precede large earthquakes. To address this problem we have started to investigate anomalous ionospheric/atmospheric signals occurring prior to large earthquakes. We are studying the Earth's atmospheric electromagnetic environment by developing a multisensor model for monitoring the signals related to active tectonic faulting and earthquake processes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (thermal infrared radiation, electron concentration in the ionosphere, lineament analysis, radon/ion activities, air temperature and seismicity) that have been found to be associated with earthquakes. A physical link between these parameters and earthquake processes is provided by a recent version of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. Our experimental measurements have supported the new theoretical estimates of the LAIC hypothesis for an increase in the surface latent heat flux, integrated variability of outgoing longwave radiation (OLR) and anomalous variations of the total electron content (TEC) registered over the epicenters. Some of the major earthquakes are accompanied by an intensification of gas migration to the surface, by thermodynamic and hydrodynamic processes of transformation of latent heat into thermal energy, and by vertical transport of charged aerosols in the lower atmosphere. These processes lead to the generation of external electric currents in specific

  15. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster some time in June 1979. An extremely dense observation network has been constructed over the area.

  16. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes.
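The notion of strain drop as slip over rupture dimension can be made concrete with the standard circular-crack stress-drop formula. The rigidity value below is a typical crustal assumption, not a parameter from the paper:

```python
def stress_drop_circular(m0, radius):
    """Static stress drop (Pa) of a circular crack of radius r (m) releasing
    seismic moment M0 (N*m): delta_sigma = (7/16) * M0 / r**3 (Eshelby, 1957)."""
    return 7.0 / 16.0 * m0 / radius ** 3

def strain_drop(m0, radius, rigidity=3.0e10):
    """Strain drop ~ slip / fault dimension = stress drop / rigidity.
    A rigidity of 30 GPa is a typical crustal value (assumption)."""
    return stress_drop_circular(m0, radius) / rigidity
```

For M0 = 1e18 N*m (roughly Mw 6) and a 3 km radius this yields a strain drop of order 10^-4; the catalogue analysis above finds that this level is roughly invariant with depth and magnitude.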

  17. An Efficient Rapid Warning System For Earthquakes In The European-mediterranean Region

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet-Roux, G.; di Giovambattista, R.; Tome, M.

    Every year a few damaging earthquakes occur in the European-Mediterranean region. It is therefore indispensable to operate a real-time warning system in order to provide rapid, reliable estimates of the location, depth and magnitude of these seismic events. In order to provide this information in a timely manner both to the scientific community and to the European and national authorities dealing with natural hazards and relief organisation, the European-Mediterranean Seismological Centre (EMSC) has federated a network of seismic networks exchanging their data in quasi real-time. Today, thanks to the Internet, the EMSC receives real-time information about earthquakes from about thirty seismological institutes. As soon as data reach the EMSC, they are displayed on the EMSC Web pages (www.emsc-csem.org). A seismic alert is generated for any potentially damaging earthquake in the European-Mediterranean region, potentially damaging earthquakes being defined as seismic events of magnitude 5 or more. The warning system automatically issues a message to the duty seismologist's mobile phone and pager. The seismologist logs in to the EMSC computers using a laptop PC and relocates the earthquake by processing together all information provided by the networks. The new location and magnitude are then sent, by fax, telex, and email, within one hour of the earthquake occurrence, to national and international organisations whose activities are related to seismic risks, and to the EMSC members. The EMSC rapid warning system has been fully operational for more than 4 years. Its distributed architecture has proved to be an efficient and reliable way of monitoring potentially damaging earthquakes. Furthermore, if a major problem disrupts the operational system for more than 30 minutes, the duty is taken over either by the Instituto Geografico Nacional in Spain or by the Istituto Nazionale di Geofisica in Italy. The EMSC operational centre, located at the
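The alert criterion described above (magnitude 5 or more within the European-Mediterranean region) can be sketched as a simple filter. The bounding box is a rough assumption for illustration, not the EMSC's actual region definition:

```python
def should_alert(mag, lat, lon):
    """True when an event qualifies as potentially damaging (M >= 5) and falls
    inside an assumed European-Mediterranean bounding box; a real system would
    use a polygon for the monitored region."""
    in_region = 25.0 <= lat <= 50.0 and -12.0 <= lon <= 45.0
    return mag >= 5.0 and in_region
```

In the operational system a positive result would page the duty seismologist, who then relocates the event manually before dissemination.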

  18. Road Damage Following Earthquake

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Ground shaking triggered liquefaction in a subsurface layer of water-saturated sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt, which moved from right to left towards the Pajaro River. This mode of ground failure, termed lateral spreading, was a principal cause of liquefaction-related damage in the Oct. 17, 1989, Loma Prieta earthquake. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: S.D. Ellen, U.S. Geological Survey

  19. The Geodetic Signature of the Earthquake Cycle at Subduction Zones: Model Constraints on the Deep Processes

    NASA Astrophysics Data System (ADS)

    Govers, R.; Furlong, K. P.; van de Wiel, L.; Herman, M. W.; Broerse, T.

    2018-03-01

    Recent megathrust events in Tohoku (Japan), Maule (Chile), and Sumatra (Indonesia) were well recorded. Much has been learned about the dominant physical processes in megathrust zones: (partial) locking of the plate interface, detailed coseismic slip, relocking, afterslip, viscoelastic mantle relaxation, and interseismic loading. These and older observations show complex spatial and temporal patterns in crustal deformation and displacement, and significant differences among different margins. A key question is whether these differences reflect variations in the underlying processes, like differences in locking, or the margin geometry, or whether they are a consequence of the stage in the earthquake cycle of the margin. Quantitative models can connect these plate boundary processes to surficial and far-field observations. We use relatively simple, cyclic geodynamic models to isolate the first-order geodetic signature of the megathrust cycle. Coseismic and subsequent slip on the subduction interface is dynamically (and consistently) driven. A review of global preseismic, coseismic, and postseismic geodetic observations, and of their fit to the model predictions, indicates that similar physical processes are active at different margins. Most of the observed variability between the individual margins appears to be controlled by their different stages in the earthquake cycle. The modeling results also provide a possible explanation for observations of tensile faulting aftershocks and tensile cracking of the overriding plate, which are puzzling in the context of convergence/compression. From the inversion of our synthetic GNSS velocities we find that geodetic observations may incorrectly suggest weak locking of some margins, for example, the west Aleutian margin.

  20. The plan to coordinate NEHRP post-earthquake investigations

    USGS Publications Warehouse

    Holzer, Thomas L.; Borcherdt, Roger D.; Comartin, Craig D.; Hanson, Robert D.; Scawthorn, Charles R.; Tierney, Kathleen; Youd, T. Leslie

    2003-01-01

    This is the plan to coordinate domestic and foreign post-earthquake investigations supported by the National Earthquake Hazards Reduction Program (NEHRP). The plan addresses coordination of both the NEHRP agencies—Federal Emergency Management Agency (FEMA), National Institute of Standards and Technology (NIST), National Science Foundation (NSF), and U. S. Geological Survey (USGS)—and their partners. The plan is a framework for both coordinating what is going to be done and identifying responsibilities for post-earthquake investigations. It does not specify what will be done. Coordination is addressed in various time frames ranging from hours to years after an earthquake. The plan includes measures for (1) gaining rapid and general agreement on high-priority research opportunities, and (2) conducting the data gathering and field studies in a coordinated manner. It deals with identification, collection, processing, documentation, archiving, and dissemination of the results of post-earthquake work in a timely manner and easily accessible format.

  1. Intermediate-depth earthquakes facilitated by eclogitization-related stresses

    USGS Publications Warehouse

    Nakajima, Junichi; Uchida, Naoki; Shiina, Takahiro; Hasegawa, Akira; Hacker, Bradley R.; Kirby, Stephen H.

    2013-01-01

    Eclogitization of the basaltic and gabbroic layer in the oceanic crust involves a volume reduction of 10%–15%. One consequence of the negative volume change is the formation of a paired stress field as a result of strain compatibility across the reaction front. Here we use waveform analysis of a tiny seismic cluster in the lower crust of the downgoing Pacific plate and reveal new evidence in favor of this mechanism: tensional earthquakes lying 1 km above compressional earthquakes, and earthquakes with highly similar waveforms lying on well-defined planes with complementary rupture areas. The tensional stress is interpreted to be caused by the dimensional mismatch between crust transformed to eclogite and underlying untransformed crust, and the earthquakes are probably facilitated by reactivation of fossil faults extant in the subducting plate. These observations provide seismic evidence for the role of volume change–related stresses and, possibly, fluid-related embrittlement as viable processes for nucleating earthquakes in downgoing oceanic lithosphere.

  2. Investigating Landslides Caused by Earthquakes A Historical Review

    NASA Astrophysics Data System (ADS)

    Keefer, David K.

    Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing ``retrospective'' analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and the relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  3. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    USGS Publications Warehouse

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because non-foreshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
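The modification described, diluting the foreshock probability with an Omori-decaying aftershock rate on top of the background rate, can be sketched schematically. The rates and Omori parameters below are hypothetical, and this is an illustration of the idea rather than the paper's exact equation:

```python
def omori_rate(t_days, k=100.0, c=0.05, p=1.1):
    """Modified Omori aftershock rate (events/day) at time t after a prior
    mainshock; k, c, p are illustrative values, normally fitted per sequence."""
    return k / (t_days + c) ** p

def prob_foreshock(t_days, foreshock_rate, background_rate):
    """Probability that a new event near the fault is a foreshock, given that
    non-foreshocks now include both background events and decaying aftershocks."""
    total = foreshock_rate + background_rate + omori_rate(t_days)
    return foreshock_rate / total
```

Because omori_rate shrinks with time, prob_foreshock rises back toward its aftershock-free value as the sequence decays, which is the qualitative behavior the abstract describes.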

  4. Fault Lubrication and Earthquake Propagation in Thermally Unstable Rocks

    NASA Astrophysics Data System (ADS)

    de Paola, Nicola; Hirose, Takehiro; Mitchell, Tom; di Toro, Giulio; Viti, Cecilia; Shimamoto, Toshiko

    2010-05-01

    During earthquake propagation in thermally unstable rocks, the frictional heat generated can induce thermal reactions which lead to chemical and physical changes in the slip zone. We performed laboratory friction experiments on thermally unstable minerals (gypsum, dolomite and calcite) at slip velocities of about 1 m/s, displacements of more than 1 m, and calculated temperature rises above 500 °C. These conditions are typical of the propagation of large earthquakes. The main findings of our experimental work are: 1) Dramatic fault weakening is characterized by a dynamic frictional strength drop of up to 90% of the initial static value in Byerlee's range. 2) Seismic source parameters, calculated from our experimental results, match those obtained by modelling of seismological data from the 1997 Colfiorito earthquake, which nucleated in carbonate rocks in Italy (i.e. the same rocks used in the friction experiments). Fault lubrication observed during the experiments is controlled by the superposition of multiple, thermally-activated, slip-weakening mechanisms (e.g., flash heating, thermal pressurization and nanoparticle lubrication). The integration of mechanical and CO2 emission data, temperature rise calculations and XRPD analyses suggests that flash heating is not the main dynamic slip-weakening process. This process was likely inhibited very early (t < 1 s), for displacements d < 0.20 m, when intense grain size reduction by both cataclastic and chemical/thermal processes took place. Conversely, most of the dynamic weakening observed was controlled by thermal pressurization and nanoparticle lubrication processes. The dynamic shear strength of experimental faults was reduced when fluids (CO2, H2O) were trapped and pressurized within the slip zone, in accord with the effective normal stress principle. The fluids were not initially present in the slip zone, but were released by decarbonation (dolomite and Mg-rich calcite) and dehydration (gypsum) reactions, both activated by

  5. Fault Lubrication and Earthquake Propagation in Thermally Unstable Rocks

    NASA Astrophysics Data System (ADS)

    de Paola, N.; Hirose, T.; Mitchell, T. M.; di Toro, G.; Viti, C.; Shimamoto, T.

    2009-12-01

    During earthquake propagation in thermally unstable rocks, the frictional heat generated can induce thermal reactions which lead to chemical and physical changes in the slip zone. We performed laboratory friction experiments on thermally unstable minerals (gypsum, dolomite and calcite) at slip velocities of about 1 m/s, displacements of more than 1 m, and calculated temperature rises above 500 °C. These conditions are typical of the propagation of large earthquakes. The main findings of our experimental work are: 1) Dramatic fault weakening is characterized by a dynamic frictional strength drop of up to 90% of the initial static value in Byerlee's range. 2) Seismic source parameters, calculated from our experimental results, match those obtained by modelling of seismological data from the 1997 Colfiorito earthquake, which nucleated in carbonate rocks in Italy (i.e. the same rocks used in the friction experiments). Fault lubrication observed during the experiments is controlled by the superposition of multiple, thermally-activated, slip-weakening mechanisms (e.g., flash heating, thermal pressurization and nanoparticle lubrication). The integration of mechanical and CO2 emission data, temperature rise calculations and XRPD analyses suggests that flash heating is not the main dynamic slip-weakening process. This process was likely inhibited very early (t < 1 s), for displacements d < 0.20 m, when intense grain size reduction by both cataclastic and chemical/thermal processes took place. Conversely, most of the dynamic weakening observed was controlled by thermal pressurization and nanoparticle lubrication processes. The dynamic shear strength of experimental faults was reduced when fluids (CO2, H2O) were trapped and pressurized within the slip zone, in accord with the effective normal stress principle. The fluids were not initially present in the slip zone, but were released by decarbonation (dolomite and Mg-rich calcite) and dehydration (gypsum) reactions, both activated by

  6. Changes in the Seismicity and Focal Mechanism of Small Earthquakes Prior to an MS 6.7 Earthquake in the Central Aleutian Island Arc

    USGS Publications Warehouse

    Billington, Serena; Engdahl, E.R.; Price, Stephanie

    1981-01-01

    On November 4, 1977, a magnitude Ms 6.7 (mb 5.7) shallow-focus thrust earthquake occurred in the vicinity of the Adak seismographic network in the central Aleutian island arc. The earthquake and its aftershock sequence occurred in an area that had not experienced a similar sequence since at least 1964. About 13 1/2 months before the main shock, the rate of occurrence of very small magnitude earthquakes increased abruptly in the immediate vicinity of the impending main shock. To search for possible variations in the focal mechanism of small events preceding the main shock, a method was developed that objectively combines first-motion data to generate composite focal-mechanism information about events occurring within a small source region. The method could not be successfully applied to the whole study area, but the results show that, starting about 10 1/2 months before the November 1977 earthquake, there was a change in the mechanism of small- to moderate-sized earthquakes in the immediate vicinity of the hypocenter, and possibly in other parts of the eventual aftershock zone, but not in the surrounding regions.

  7. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake.

    PubMed

    Kato, Aitaro; Fukuda, Jun'ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-25

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  8. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    NASA Astrophysics Data System (ADS)

    Kato, Aitaro; Fukuda, Jun'Ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-04-01

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase.

  9. Accelerated nucleation of the 2014 Iquique, Chile Mw 8.2 Earthquake

    PubMed Central

    Kato, Aitaro; Fukuda, Jun’ichi; Kumazawa, Takao; Nakagawa, Shigeki

    2016-01-01

    The earthquake nucleation process has been vigorously investigated based on geophysical observations, laboratory experiments, and theoretical studies; however, a general consensus has yet to be achieved. Here, we studied the nucleation process of the 2014 Iquique, Chile Mw 8.2 megathrust earthquake located within the current North Chile seismic gap, by analyzing a long-term earthquake catalog constructed from a cross-correlation detector using continuous seismic data. Accelerations in seismicity, the amount of aseismic slip inferred from repeating earthquakes, and the background seismicity, accompanied by an increasing frequency of earthquake migrations, started around 270 days before the mainshock at locations up-dip of the largest coseismic slip patch. These signals indicate that repetitive sequences of fast and slow slip took place on the plate interface at a transition zone between fully locked and creeping portions. We interpret that these different sliding modes interacted with each other and promoted accelerated unlocking of the plate interface during the nucleation phase. PMID:27109362

  10. Earthquake imprints on a lacustrine deltaic system: the Kürk Delta along the East Anatolian Fault (Turkey)

    NASA Astrophysics Data System (ADS)

    Hubert-Ferrari, Aurélia; El-Ouahabi, Meriam; Garcia-Moreno, David; Avsar, Ulas; Altinok, Sevgi; Schmidt, Sabine; Cagatay, Namik

    2016-04-01

    Deltas contain a sedimentary record primarily indicative of water-level changes, but one that is particularly sensitive to earthquake shaking, which generally results in soft-sediment deformation structures. The Kürk Delta, adjacent to a major strike-slip fault, displays this type of deformation (Hempton and Dewey, 1983) as well as other types of earthquake fingerprints that are specifically investigated here. This lacustrine delta stands at the south-western extremity of Hazar Lake and is bounded by the East Anatolian Fault (EAF), which has generated earthquakes of magnitude 7 in eastern Turkey. Water-level changes and earthquake shaking affecting the Kürk Delta have been reevaluated by combining geophysical data (seismic-reflection profiles and side-scan sonar), remote sensing images, historical data, onland outcrops and offshore coring. The history of water-level changes provides a temporal framework for the sedimentological record. In addition to the soft-sediment deformation previously documented, the onland outcrops reveal a record of deformation (faults and clastic dykes) linked to large earthquake-induced liquefaction. The recurrent liquefaction structures can be used to obtain a paleoseismological record. Five event horizons were identified that could be linked to historical earthquakes occurring in the last 1000 years along the EAF. Sedimentary cores sampling the most recent subaqueous sedimentation revealed the occurrence of another type of earthquake fingerprint. Based on radionuclide dating (137Cs and 210Pb), two major sedimentary events were attributed to the 1874-1875 earthquake sequence along the EAF. Their sedimentological characteristics were inferred based on X-ray imagery, XRD, LOI, grain-size distribution, and geophysical measurements. The events are interpreted to be hyperpycnal deposits linked to post-seismic reworking of earthquake-triggered landslides. A time constraint regarding this sediment remobilization process could be achieved thanks to

  11. Near Space Tracking of the EM Phenomena Associated with the Main Earthquakes

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Taylor, Patrick; Bryant, Nevin; Pulinets, Sergey; Liu, Jann-Yenq; Yang, Kwang-Su

    2004-01-01

    Searching for electromagnetic (EM) phenomena originating in the Earth's crust prior to major earthquakes (M>5) is the object of this exploratory study. We present the idea of a possible relationship between: (1) electro-chemical and thermodynamic processes in the Earth's crust and (2) ionic enhancement of the atmosphere/ionosphere with tectonic stress and earthquake activity. These signals are proposed to originate from electromagnetic phenomena responsible for the observed pre-seismic processes, such as enhanced IR emission (also known as thermal anomalies), generation of long-wave radiation, light emission caused by ground-to-air electric discharges, Total Electron Content (TEC) ionospheric anomalies and ionospheric plasma variations. The sources of these data will include: (i) ionospheric plasma perturbation data from the recently launched DEMETER mission and currently available TEC/GPS network data; (ii) geomagnetic data from ORSTED and CHAMP; (iii) thermal infrared (TIR) transients mapped by polar-orbiting satellites (NOAA/AVHRR, MODIS); and (iv) geosynchronous weather satellite measurements from GOES and METEOSAT. This approach requires continuous observations and data collection, in addition to both ground- and space-based monitoring over selected regions, in order to investigate the various techniques for recording possible anomalies. During the space campaign, emphasis will be on IR emission, obtained from TIR satellites, which records land/sea surface temperature anomalies, and on changes in the plasma and total electron content (TEC) of the ionosphere that occur over areas of potential earthquake activity.

  12. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  13. Compiling an earthquake catalogue for the Arabian Plate, Western Asia

    NASA Astrophysics Data System (ADS)

    Deif, Ahmed; Al-Shijbi, Yousuf; El-Hussain, Issa; Ezzelarab, Mohamed; Mohamed, Adel M. E.

    2017-10-01

    The Arabian Plate is surrounded by regions of relatively high seismicity. Accounting for this seismicity is of great importance for seismic hazard and risk assessments, seismic zoning, and land use. In this study, a homogeneous moment-magnitude (Mw) earthquake catalogue for the Arabian Plate is provided. The comprehensive, homogeneous catalogue spatially covers the entire Arabian Peninsula and neighboring areas, including all earthquake sources that can generate substantial hazard for the Arabian Plate mainland. The catalogue extends in time from 19 to 2015, with a total of 13,156 events, of which 497 are historical events. Four polygons covering the entire Arabian Plate were delineated, and different data sources, including special studies and local, regional, and international catalogues, were used to prepare the earthquake catalogue. Moment magnitudes (Mw) provided by the original sources were given the highest magnitude-type priority and introduced to the catalogue with their references. Earthquakes reported in magnitude scales other than Mw were converted to that scale using empirical relationships derived in the current or in previous studies. The four polygon catalogues were then combined into two comprehensive earthquake catalogues covering the historical and instrumental periods. Duplicate events were identified and discarded from the catalogue, which was then declustered so that it contains only independent events and was investigated for completeness with time over different magnitude ranges.
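The homogenization workflow described above (prioritize reported Mw, otherwise convert other magnitude scales via empirical relationships) can be sketched as follows. This is a minimal illustration: the conversion coefficients and priority ordering are placeholders, not the relationships derived in the study.

```python
# Sketch of magnitude homogenization for a composite catalogue.
# Conversion coefficients below are illustrative placeholders only.

CONVERSIONS = {
    "Mw": lambda m: m,                 # already moment magnitude: keep as-is
    "Ms": lambda m: 0.67 * m + 2.07,   # placeholder surface-wave -> Mw relation
    "mb": lambda m: 0.85 * m + 1.03,   # placeholder body-wave -> Mw relation
    "ML": lambda m: 0.99 * m + 0.21,   # placeholder local -> Mw relation
}

# Higher value = higher priority when several magnitude types are reported.
PRIORITY = {"Mw": 3, "Ms": 2, "mb": 1, "ML": 0}

def homogenize(event):
    """Pick the highest-priority reported magnitude and convert it to Mw."""
    mag_type, mag = max(event["magnitudes"].items(),
                        key=lambda kv: PRIORITY[kv[0]])
    return round(CONVERSIONS[mag_type](mag), 2)

catalogue = [
    {"id": "ev1", "magnitudes": {"Ms": 6.0, "mb": 5.8}},   # no Mw: convert Ms
    {"id": "ev2", "magnitudes": {"Mw": 7.1, "Ms": 7.0}},   # reported Mw wins
]
for ev in catalogue:
    ev["Mw"] = homogenize(ev)
```

In a real catalogue compilation the conversion relations would differ by region and source network, and duplicate removal and declustering would follow this step.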

  14. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327
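The Mohr–Coulomb failure stress change mentioned in the abstract is conventionally computed as ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change in the slip direction, Δσn the normal stress change (positive = unclamping), and μ′ an effective friction coefficient. A minimal sketch, with illustrative stress values not taken from the study:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault, in Pa.

    d_shear:  shear stress change resolved in the slip direction (Pa)
    d_normal: normal stress change, positive = unclamping (Pa)
    mu_eff:   effective friction coefficient (commonly ~0.4)

    Positive result brings the fault closer to failure.
    """
    return d_shear + mu_eff * d_normal

# Illustrative values: 0.02 MPa of loading shear stress, 0.01 MPa of clamping.
dcfs = coulomb_stress_change(d_shear=0.02e6, d_normal=-0.01e6)  # -> 16 kPa
```

Stress changes of this order (tens of kPa) are often considered near the threshold of significance for triggering, which is why the abstract judges the pre-event SSE's stress perturbation too small to have triggered the mainshock.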

  15. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 M(w) 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.

  16. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; and (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.
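The "ground motion expected to be exceeded with a given probability within a given period of time" in probabilistic maps is conventionally computed under a Poisson occurrence model, P = 1 − exp(−λt), where λ is the annual exceedance rate. A short sketch of the arithmetic (the 2%-in-50-years map level is a standard example, not a value specific to the Evansville maps):

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson probability that the ground-motion level is exceeded
    at least once in the given time window."""
    return 1.0 - math.exp(-annual_rate * years)

def annual_rate_from_prob(prob, years):
    """Invert: annual exceedance rate implied by probability `prob`
    over `years` years."""
    return -math.log(1.0 - prob) / years

# The common "2% probability of exceedance in 50 years" map level
# corresponds to a return period of roughly 2475 years.
rate = annual_rate_from_prob(0.02, 50.0)
return_period = 1.0 / rate
```

The same relation is used to translate between map levels, e.g. 10% in 50 years corresponds to a ~475-year return period.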

  17. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time, and rise time. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well-recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
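The two stages described above (heat-bath sampling of the low-misfit region, then a misfit-weighted ensemble statistic) can be illustrated on a toy one-parameter problem. This is a minimal sketch with a single "slip" parameter, a synthetic observation, and an arbitrary fixed temperature; the real method works on many parameters per subfault node and a full waveform misfit.

```python
import math
import random

random.seed(0)

# Toy forward problem: one slip parameter; the "data" prefer slip = 2.5.
observed = 2.5
def misfit(slip):
    return (slip - observed) ** 2

values = [i * 0.1 for i in range(51)]   # candidate slip values, 0.0 .. 5.0
T = 0.05                                # fixed "temperature" for sampling

# Heat-bath step: sample directly from the Boltzmann distribution over the
# discretized parameter values, so low-misfit models are drawn most often.
weights = [math.exp(-misfit(v) / T) for v in values]
total = sum(weights)
ensemble = random.choices(values, weights=[w / total for w in weights], k=5000)

# Appraisal stage: misfit-weighted mean model and its standard deviation,
# which summarize the stable features of the ensemble rather than one model.
w = [math.exp(-misfit(v) / T) for v in ensemble]
mean = sum(wi * vi for wi, vi in zip(w, ensemble)) / sum(w)
var = sum(wi * (vi - mean) ** 2 for wi, vi in zip(w, ensemble)) / sum(w)
std = math.sqrt(var)
```

In the full algorithm the temperature is lowered over iterations and each model parameter is resampled in turn conditional on the others, but the weighted mean and standard deviation of the appraisal stage are computed exactly as above.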

  18. Populating the Advanced National Seismic System Comprehensive Earthquake Catalog

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Perry, M. R.; Andrews, J. R.; Withers, M. M.; Hellweg, M.; Kim, W. Y.; Shiro, B.; West, M. E.; Storchak, D. A.; Pankow, K. L.; Huerfano Moreno, V. A.; Gee, L. S.; Wolfe, C. J.

    2016-12-01

    The U.S. Geological Survey maintains a repository of earthquake information produced by networks in the Advanced National Seismic System with additional data from the ISC-GEM catalog and many non-U.S. networks through their contributions to the National Earthquake Information Center PDE bulletin. This Comprehensive Catalog (ComCat) provides a unified earthquake product while preserving attribution and contributor information. ComCat contains hypocenter and magnitude information with supporting phase arrival-time and amplitude measurements (when available). Higher-level products such as focal mechanisms, earthquake slip models, "Did You Feel It?" reports, ShakeMaps, PAGER impact estimates, earthquake summary posters, and tectonic summaries are also included. ComCat is updated as new events are processed, and the catalog can be accessed at http://earthquake.usgs.gov/earthquakes/search/. Over the past few years, a concentrated effort has been underway to expand ComCat by integrating global and regional historic catalogs. The number of earthquakes in ComCat has more than doubled in the past year, and it presently contains over 1.6 million earthquake hypocenters. We will provide an overview of catalog contents and a detailed description of numerous tools and semi-automated quality-control procedures developed to uncover errors, including systematic magnitude biases, missing time periods, duplicate postings for the same events, and incorrectly associated events.
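One of the quality-control problems named above, duplicate postings for the same event, is typically attacked by flagging hypocenter pairs that are close in both origin time and location. A minimal sketch (the time/distance windows and the event IDs are illustrative choices, not the thresholds used by the ComCat QC procedures):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def find_duplicates(events, dt_s=16.0, dx_km=100.0):
    """Flag pairs of hypocenters close enough in origin time and location
    to be candidate duplicate postings of the same earthquake."""
    events = sorted(events, key=lambda e: e["time"])
    pairs = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if b["time"] - a["time"] > dt_s:
                break  # events are time-sorted; no later event can match
            if haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= dx_km:
                pairs.append((a["id"], b["id"]))
    return pairs

# Illustrative mini-catalog: the first two entries are candidate duplicates.
catalog = [
    {"id": "evA", "time": 0.0,   "lat": 35.0, "lon": -118.0},
    {"id": "evB", "time": 5.0,   "lat": 35.1, "lon": -118.1},
    {"id": "evC", "time": 900.0, "lat": 35.0, "lon": -118.0},
]
dups = find_duplicates(catalog)  # -> [("evA", "evB")]
```

Flagged pairs would then be reviewed so that one authoritative posting is kept and the contributor attribution of the other is preserved.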

  19. Earthquake source nucleation process in the zone of a permanently creeping deep fault

    NASA Astrophysics Data System (ADS)

    Lykov, V. I.; Mostryukov, A. O.

    2008-10-01

    The worldwide practice of earthquake prediction, which began in the 1970s, shows that spatial manifestations of various precursors under real seismotectonic conditions are very irregular. As noted in [Kurbanov et al., 1980], zones of bending, intersection, and branching of deep faults, where conditions are favorable for increasing tangential tectonic stresses, serve as “natural amplifiers” of precursory effects. The earthquake of September 28, 2004, occurred on the Parkfield segment of the San Andreas deep fault in the area of a local bend in its plane. The fault segment, about 60 km long, and its vicinity form the oldest prognostic area in California. Results of observations before and after the earthquake were promptly analyzed and published in a special issue of Seismological Research Letters (2005, Vol. 76, no. 1). We have developed an original method for monitoring the integral rigidity of seismically active rock massifs. The integral rigidity is determined from the relative numbers of brittle and viscous failure acts during the formation of source ruptures of background earthquakes in a given massif. Fracture mechanisms are diagnosed from the steepness of the first arrival of the direct P wave. The principles underlying our method are described in [Lykov and Mostryukov, 1996, 2001, 2003]. Results of monitoring have been displayed directly at the Laboratory's site ( http://wwwbrk.adm.yar.ru/russian/1_512/index.html ) since the mid-1990s. It seems that this information has not attracted the attention of American seismologists. This paper assesses the informativeness of rigidity monitoring at the stage of formation of a strong earthquake source in relation to other methods.

  20. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    USGS Publications Warehouse

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models and reduces uncertainties that arise when comparing models generated by different authors, data sets, and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called the “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.
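The "moment deficit" computation sketched in the abstract (plate rate × time since the last earthquake × geodetic coupling, converted to a seismic moment and magnitude) can be written out as a short worked example. All numerical values below are illustrative, not taken from the database; the moment-to-magnitude conversion is the standard Hanks–Kanamori relation with M0 in N·m.

```python
import math

# Illustrative fault and loading parameters (placeholders, not study values).
mu = 3.0e10           # shear modulus, Pa
length_m = 200e3      # fault segment length, m
width_m = 50e3        # fault segment width, m
plate_rate = 0.05     # relative plate motion, m/yr
years = 100.0         # time since the last large earthquake
coupling = 0.8        # geodetic coupling fraction (0..1)

# Accumulated slip deficit and the corresponding seismic moment deficit.
slip_deficit = plate_rate * years * coupling            # -> 4.0 m
moment_deficit = mu * length_m * width_m * slip_deficit  # N*m

# Hanks-Kanamori moment magnitude (M0 in N*m): Mw = (2/3)(log10 M0 - 9.1)
mw = (2.0 / 3.0) * (math.log10(moment_deficit) - 9.1)   # -> ~Mw 8.0
```

The abstract's point is that the average slip in real ruptures falls short of this fully loaded value, so magnitudes estimated directly from the moment deficit tend to overestimate the eventual event size.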