Sample records for earthquake origin time

  1. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). In this study, a new island-wide, high-resolution SEM mesh model was developed for the whole of Taiwan. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  2. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 minutes after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  3. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreement with Weibull distributions is obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
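
    As a reading aid (not part of the record), the Weibull claim above can be made concrete in a few lines: fit shape and scale to a set of recurrence times and evaluate the hazard function h(t) = (k/λ)(t/λ)^(k-1), whose pure power-law form is the scale invariance the authors invoke. The recurrence times below are synthetic placeholders, not one of the paper's data sets.

    ```python
    # Minimal sketch: fit a Weibull to (synthetic) recurrence times and
    # evaluate its hazard function, which is a power law in t.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    # synthetic recurrence intervals in years (placeholder data)
    recurrence_times = weibull_min.rvs(1.8, scale=150.0, size=30, random_state=rng)

    # Fit shape k and scale lam, pinning the location parameter at zero.
    k, loc, lam = weibull_min.fit(recurrence_times, floc=0)

    def hazard(t, k, lam):
        """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1)."""
        return (k / lam) * (t / lam) ** (k - 1)

    t = np.linspace(1.0, 400.0, 5)
    print(f"fitted shape k = {k:.2f}, scale lam = {lam:.1f}")
    print("hazard h(t):", hazard(t, k, lam))
    ```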

  4. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is viable and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica.

  5. Real-time earthquake data feasible

    NASA Astrophysics Data System (ADS)

    Bush, Susan

    Scientists agree that early warning devices and monitoring of both Hurricane Hugo and the Mt. Pinatubo volcanic eruption saved thousands of lives. What would it take to develop this sort of early warning and monitoring system for earthquake activity? Not all that much, claims a panel assigned to study the feasibility, costs, and technology needed to establish a real-time earthquake monitoring (RTEM) system. The panel, drafted by the National Academy of Sciences' Committee on Seismology, has presented its findings in Real-Time Earthquake Monitoring. The recently released report states that “present technology is entirely capable of recording and processing data so as to provide real-time information, enabling people to mitigate somewhat the earthquake disaster.” RTEM systems would consist of two parts: an early warning system that would give a few seconds' warning before severe shaking, and immediate postquake information within minutes of the quake that would give actual measurements of the magnitude. At this time, however, this type of warning system has not been addressed at the national level for the United States and is not included in the National Earthquake Hazard Reduction Program, according to the report.

  6. Exploring Earthquakes in Real-Time

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Kafka, A. L.; Coleman, B.; Taber, J. J.

    2013-12-01

    Earthquakes capture the attention of students and inspire them to explore the Earth. Adding the ability to view and explore recordings of significant and newsworthy earthquakes in real time makes the subject even more compelling. To address this opportunity, the Incorporated Research Institutions for Seismology (IRIS), in collaboration with Moravian College, developed 'jAmaSeis', a cross-platform application that enables students to access real-time earthquake waveform data. Students can watch as the seismic waves are recorded on their computer, and can be among the first to analyze the data from an earthquake. jAmaSeis facilitates student-centered investigations of seismological concepts using either a low-cost educational seismograph or streamed data from other educational seismographs or from any seismic station that sends data to the IRIS Data Management System. After an earthquake, students can analyze the seismograms to determine characteristics of earthquakes such as time of occurrence, distance from the epicenter to the station, magnitude, and location. The software has been designed to provide graphical clues to guide students in the analysis and assist in their interpretations. Since jAmaSeis can simultaneously record up to three stations from anywhere on the planet, there are numerous opportunities for student-driven investigations. For example, students can explore differences in the seismograms from different distances from an earthquake and compare waveforms from different azimuthal directions. Students can simultaneously monitor seismicity at a tectonic plate boundary and in the middle of the plate regardless of their school location. This can help students discover for themselves the ideas underlying seismic wave propagation, regional earthquake hazards, magnitude-frequency relationships, and the details of plate tectonics. The real-time nature of the data keeps the investigations dynamic, and offers students countless opportunities to explore.

  7. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of Mλ ≥ 7.0 and Mλ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be Mσ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with Mλ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with Mλ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 Mλ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 Mλ ≥ 7.0 earthquakes in each catalog and
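
    The interevent-count bookkeeping described above is easy to sketch. The catalog below is a synthetic Gutenberg-Richter sequence, not the CMT catalog, and the fit is a plain maximum-likelihood Weibull fit rather than the authors' exact procedure; since the synthetic events are random, the recovered shape should come out near β = 1, echoing the paper's synthetic-catalog check.

    ```python
    # Sketch: count small events between successive large events ("natural
    # time") and estimate the Weibull shape beta of the count distribution.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)
    # synthetic Gutenberg-Richter catalog (b = 1), complete above M 5.1
    mags = 5.1 - np.log10(rng.uniform(size=50_000))

    large = np.flatnonzero(mags >= 7.0)
    counts = np.diff(large) - 1          # small events between consecutive large ones
    counts = counts[counts > 0].astype(float)

    beta, _, _ = weibull_min.fit(counts, floc=0)
    print(f"Weibull shape beta = {beta:.2f} (beta = 1 means exponential, i.e. random)")
    ```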

  8. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan had been developed by the Japan Meteorological Agency (JMA) as a governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public with locating an earthquake and estimating its magnitude as quickly as possible. Years after, a system for a prompt provision of seismic intensity information as indices of degrees of disasters caused by strong ground motion was also developed so that concerned governmental organizations can decide whether it was necessary for them to launch emergency response or not. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of occurrence of a large earthquake and major seismic intensities caused by the earthquake in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on a hypocenter and a magnitude of the earthquake, the seismic intensity at each observation station, the times of high tides in addition to the expected tsunami arrival times in 5-7 minutes. To issue information above, JMA has established; - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation including about 2,800 seismic intensity stations maintained by local governments, - Data telemetry networks via landlines and partly via a satellite communication link, - Real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude, the database driven method for quantitative tsunami estimation, and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  9. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
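
    A minimal version of the short-term-average/long-term-average trigger on a tweet-count series might look like the following. The window lengths and threshold are illustrative assumptions, not the USGS settings, and the series is a toy Poisson stream with an injected burst.

    ```python
    # STA/LTA detector over a per-second "earthquake" tweet-count series.
    import numpy as np

    def sta_lta_triggers(counts, n_sta=60, n_lta=900, threshold=5.0):
        """Return sample indices where STA/LTA of the count series exceeds threshold."""
        counts = np.asarray(counts, dtype=float)
        box = lambda n: np.ones(n) / n                      # moving-average kernel
        sta = np.convolve(counts, box(n_sta), mode="same")  # short-term average
        lta = np.convolve(counts, box(n_lta), mode="same")  # long-term average
        ratio = sta / np.maximum(lta, 1e-3)                 # guard quiet periods
        return np.flatnonzero(ratio > threshold)

    # toy series: background chatter plus a one-minute burst of felt-event tweets
    series = np.random.default_rng(2).poisson(0.2, 3600)
    series[1800:1860] += 30
    print("first trigger samples:", sta_lta_triggers(series)[:5])
    ```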

  10. Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.

    2015-12-01

    The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. The ShakeAlert system therefore requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order both to assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
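
    One simple concrete instance of combining disparate source estimates is inverse-variance weighting of independent Gaussian magnitude reports, sketched below. The numbers and the Gaussian assumption are ours, not ShakeAlert's CDM design.

    ```python
    # Bayesian combination of independent Gaussian magnitude estimates:
    # the posterior mean is the inverse-variance weighted average.
    import numpy as np

    reports = {  # algorithm: (magnitude estimate, 1-sigma uncertainty) -- illustrative
        "OnSite": (5.8, 0.4),
        "ElarmS": (6.1, 0.3),
        "VirtualSeismologist": (6.0, 0.25),
    }

    w = np.array([1.0 / s**2 for _, s in reports.values()])   # weights 1/sigma^2
    m = np.array([m for m, _ in reports.values()])
    m_comb = np.sum(w * m) / np.sum(w)
    sigma_comb = np.sqrt(1.0 / np.sum(w))
    print(f"combined M = {m_comb:.2f} +/- {sigma_comb:.2f}")
    ```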

  11. Space and time distribution of foci and source mechanisms of West-Bohemia/Vogtland earthquake swarms - a tool for understanding of their origin

    NASA Astrophysics Data System (ADS)

    Horálek, Josef; Čermáková, Hana; Fischer, Tomáš

    2014-05-01

    The origin of earthquake swarms remains an enigma. Swarms typically accompany volcanic activity at plate margins but also occur in intracontinental areas. West Bohemia-Vogtland (the border area between the Czech Republic and Germany) represents one of the most active intraplate earthquake-swarm regions in Europe. Moreover, this area is characterized by high activity of crustal fluids. Swarm earthquakes occur persistently in an area of about 3000 km2. However, the Nový Kostel focal zone (NK), which has produced a few tens of thousands of events within the last twenty years, dominates the recent seismicity of the whole region. There were swarms in 1997, 2000, 2008 and 2011, followed by a reactivation in 2013, and a few tens of microswarms, which together form a focal belt of about 15 x 6 km. We analyse the geometry of the NK focal zone by applying the double-difference method to the seismicity in the period 1997-2013. The swarms are located close to each other at depths from 6 to 13 km. The 2000 (MLmax = 3.3) and 2008 (MLmax = 3.8) swarms are 'twins', i.e. their hypocentres fall precisely on the same portion of the NK fault; similarly, the 1997 (MLmax = 2.9), 2011 (MLmax = 3.6) and 2013 (MLmax = 2.4) swarms also occurred on the same fault segment. However, the individual swarms differ considerably in their evolution, mainly in the rate of seismic-moment release and in foci migration. Source mechanisms (in the full moment-tensor description) and their time and space variations also show different patterns. All the 2000- and 2008-swarm events are pure shears, signifying both oblique-normal and oblique-thrust faulting, though the former prevails. We found several families of source mechanisms which fit well the geometry of the respective fault segments determined from the event locations: the 2000 and 2008 swarms activated the same portion of the NK fault, hence their source mechanisms are similar. The 1997 and 2011 swarms took place on two differently oriented fault segments, thus

  12. Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake

    USGS Publications Warehouse

    Minson, Sarah E.; Lee, William H.K.

    2014-01-01

    Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.
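
    A toy version of the joint origin-time/clock-error inference reads as follows: with Gaussian pick errors and Gaussian station clock errors, the clock term can be marginalized by adding its variance to the pick variance, after which the origin-time posterior is a 1-D grid search. Travel times, picks, and error levels below are invented for illustration and the hypocentre is held fixed, unlike the full joint inversion described in the record.

    ```python
    # Grid-search posterior over an origin-time shift t0, with station clock
    # errors marginalized analytically (variances add for Gaussians).
    import numpy as np

    travel_times = np.array([32.0, 41.5, 55.0, 78.2])  # assumed P travel times (s)
    picks        = np.array([33.1, 40.2, 56.9, 77.0])  # observed arrivals (s), toy values
    sigma_pick, sigma_clock = 1.0, 1.5                 # assumed 1-sigma errors (s)
    sigma2 = sigma_pick**2 + sigma_clock**2            # marginalized variance/station

    t0_grid = np.linspace(-10.0, 10.0, 2001)
    resid = picks[None, :] - travel_times[None, :] - t0_grid[:, None]
    log_post = -0.5 * np.sum(resid**2, axis=1) / sigma2   # flat prior on t0
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                                    # discrete normalization

    print(f"MAP origin-time shift: {t0_grid[np.argmax(post)]:+.2f} s")
    ```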

  13. The 2014 Mw 6.0 Napa Earthquake, California: Observations from Real-time GPS-enhanced Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Grapenthin, R.; Allen, R. M.

    2014-12-01

    Recently, progress has been made to demonstrate the feasibility and benefits of including real-time GPS (rtGPS) in earthquake early warning and rapid response systems. While most concepts have yet to be integrated into operational environments, the Berkeley Seismological Laboratory is currently running an rtGPS-based finite-fault inversion scheme in true real time, which is triggered by the seismic-based ShakeAlert system and then sends updated earthquake alerts to a test receiver. The Geodetic Alarm System (G-larmS) was online and responded to the 2014 Mw6.0 South Napa earthquake in California. We review G-larmS' performance during this event and for 13 aftershocks, and we present rtGPS observations and real-time modeling results for the main shock. The first distributed slip model and a magnitude estimate of Mw5.5 were available 24 s after the event origin time, which could be reduced to 14 s after a bug fix (~8 s S-wave travel time, ~6 s data latency). The system continued to re-estimate the magnitude once every second: it increased to Mw5.9 3 s after the first alert and stabilized at Mw5.8 after 15 s. G-larmS' solutions for the subsequent small-magnitude aftershocks demonstrate that Mw~6.0 is the current limit for alert updates to contribute back to the seismic-based early warning system.

  14. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aimed at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on rapid estimates of P-wave magnitude, which generally contain large uncertainties and suffer from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent updates of the magnitude even decreased it to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would theoretically have been possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missed warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  15. Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr

    Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and presents several advantages compared to more traditional approaches.
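
    A minimal way to write down the frailty idea described above, assuming (as is standard in the failure-time literature, though not stated in this record) a Weibull baseline hazard and a gamma-distributed multiplicative frailty:

    ```latex
    % Unobserved heterogeneity as a multiplicative frailty u_i acting on a
    % Weibull baseline hazard; the unit-mean gamma frailty is an assumed,
    % conventional choice, not necessarily the paper's.
    h_i(t \mid u_i) = u_i \, h_0(t), \qquad
    h_0(t) = \frac{k}{\lambda}\Bigl(\frac{t}{\lambda}\Bigr)^{k-1}, \qquad
    u_i \sim \operatorname{Gamma}\bigl(\theta^{-1}, \theta\bigr), \quad \mathbb{E}[u_i] = 1.
    ```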

  16. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
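
    The Kolmogorov-Smirnov screening mentioned in the abstract can be sketched with scipy as below. The interevent times are synthetic, and (a caveat the abstract does not address) the p-values are optimistic when the parameters have been fit to the same data being tested.

    ```python
    # Fit several candidate distributions to interevent times and keep those
    # the KS test does not reject at the 5% level.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # synthetic interevent times from a decreasing-hazard Weibull (placeholder)
    interevent = stats.weibull_min.rvs(0.9, scale=30.0, size=500, random_state=rng)

    for label, dist in [("Weibull", stats.weibull_min),
                        ("gamma", stats.gamma),
                        ("lognormal", stats.lognorm)]:
        params = dist.fit(interevent, floc=0)
        d, p = stats.kstest(interevent, dist.name, args=params)
        verdict = "not rejected" if p > 0.05 else "rejected"
        # note: p is biased upward because params were fit on the same sample
        print(f"{label:10s} D = {d:.3f}  p = {p:.3f}  -> {verdict}")
    ```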

  17. Performance of Real-time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, H.; Horiuchi, S.; Wu, C.; Yamamoto, S.

    2008-12-01

    on regions and tends to increase when earthquakes occur outside the network. Depth differences for 70 percent of events are less than 20 km, and origin time differences for 48 percent are within one second. In addition to the JMA magnitude (MJMA), which is estimated from moment magnitude, REIS estimates a new scaling parameter called intensity magnitude (MI), which is defined from observed P-wave seismic intensity (Yamamoto et al., 2008). Our statistical results show that these two kinds of magnitudes are reasonably well determined. For 94 percent of events, either MJMA or MI from REIS differs by less than 1.0 from the reported JMA catalog value. However, the difference increases with the value of the magnitude. There is an apparent underestimation of MJMA for large earthquakes because the first report is issued while the rupture is still ongoing. Moreover, there are cases when most of the Hi-net seismograms close to the epicenter are clipped, but these data are still used to determine a lower limit on the magnitude. We are building an EEWS that uses real-time strong-motion network data for better estimates of earthquake magnitude and seismic intensity.

  18. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits, as the available sequences are too short. The Parkfield sequence of M ≈ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events, with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur in exactly the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain fewer than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is, however, obtained from rescaled combination with regard to the lognormal distribution.

  19. Time-decreasing hazard and increasing time until the next earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corral, Alvaro

    2005-01-01

    The existence of a slowly but always decreasing probability density for the recurrence times of earthquakes in the stationary case implies that the occurrence of an event at a given instant becomes less likely as the time since the previous event increases. Consequently, the expected waiting time to the next earthquake increases with the elapsed time; that is, the expected event recedes rapidly into the future. We have found direct empirical evidence of this counterintuitive behavior in two worldwide catalogs as well as in diverse regional catalogs. Universal scaling functions describe the phenomenon well.
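
    The counterintuitive behavior is easy to verify numerically for a concrete decreasing-hazard density. Below, a Weibull with shape k < 1 (an assumed example, not the paper's empirical distribution) shows the expected remaining wait E[T - t | T > t] growing with the elapsed time t.

    ```python
    # Expected residual waiting time E[T - t | T > t] = int_t^inf S(x) dx / S(t)
    # for a decreasing-hazard (k < 1) Weibull: it increases with elapsed time t.
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import weibull_min

    k, lam = 0.7, 100.0                 # assumed shape and scale
    dist = weibull_min(k, scale=lam)

    for t in [0.0, 50.0, 200.0, 800.0]:
        tail, _ = quad(dist.sf, t, np.inf)      # integral of the survival function
        residual = tail / dist.sf(t)
        print(f"elapsed t = {t:6.0f}   expected remaining wait = {residual:8.1f}")
    ```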

  20. Evaluating the Real-time and Offline Performance of the Virtual Seismologist Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.

    2009-04-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the ongoing earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time for 4
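
    The per-second Bayesian update at the core of the VS approach can be caricatured as a Gutenberg-Richter prior multiplied by a Gaussian likelihood from the latest amplitude-derived estimate. The b-value, estimates, and uncertainties below are illustrative assumptions, not the VS implementation.

    ```python
    # Gutenberg-Richter prior times Gaussian likelihood, re-evaluated as data
    # arrive; the prior pulls poorly constrained early estimates downward.
    import numpy as np

    m = np.linspace(3.0, 8.5, 551)           # magnitude grid
    b = 1.0                                  # assumed b-value
    log_prior = -b * np.log(10) * m          # GR prior: p(m) ~ 10**(-b*m)

    def posterior(m_est, sigma):
        log_like = -0.5 * ((m - m_est) / sigma) ** 2
        log_post = log_prior + log_like
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    # uncertainty shrinks as picks and amplitudes accumulate (toy sequence)
    for second, (m_est, sigma) in enumerate([(6.2, 0.8), (6.0, 0.5), (5.9, 0.3)], 1):
        post = posterior(m_est, sigma)
        print(f"t = {second} s   MAP magnitude = {m[np.argmax(post)]:.2f}")
    ```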

  21. Novel Algorithms Enabling Rapid, Real-Time Earthquake Monitoring and Tsunami Early Warning Worldwide

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Michelini, A.

    2012-12-01

    We have recently introduced new methods to rapidly determine the tsunami potential and magnitude of large earthquakes (e.g., Lomax and Michelini, 2009ab, 2011, 2012). To validate these methods we have implemented them, along with other new algorithms, within the Early-est earthquake monitor at INGV-Rome (http://early-est.rm.ingv.it, http://early-est.alomax.net). Early-est is a lightweight software package for real-time earthquake monitoring (including phase picking, phase association and event detection, location, magnitude determination, first-motion mechanism determination, ...), and for tsunami early warning based on discriminants for earthquake tsunami potential. In a simulation using archived broadband seismograms for the devastating M9, 2011 Tohoku earthquake and tsunami, Early-est determines: the epicenter within 3 min after the event origin time, discriminants showing very high tsunami potential within 5-7 min, and magnitude Mwpd(RT) 9.0-9.2 and a correct shallow-thrusting mechanism within 8 min. Real-time monitoring with Early-est gives similar results for most large earthquakes using currently available, real-time seismogram data. Here we summarize some of the key algorithms within Early-est that enable rapid, real-time earthquake monitoring and tsunami early warning worldwide: >>> FilterPicker - a general purpose, broad-band, phase detector and picker (http://alomax.net/FilterPicker); >>> Robust, simultaneous association and location using a probabilistic, global search; >>> Period-duration discriminants TdT0 and TdT50Ex for tsunami potential, available within 5 min; >>> Mwpd(RT) magnitude for very large earthquakes, available within 10 min; >>> Waveform P polarities determined on broad-band displacement traces, with focal mechanisms obtained using the HASH program (Hardebeck and Shearer, 2002); >>> SeisGramWeb - a portable-device-ready seismogram viewer using web services in a browser (http://alomax.net/webtools/sgweb/info.html). References (see also: http

  22. Testing the structure of earthquake networks from multivariate time series of successive main shocks in Greece

    NASA Astrophysics Data System (ADS)

    Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.

    2018-06-01

    Seismic hazard assessment in the area of Greece is attempted by studying the structure of the earthquake network, e.g. whether it is small-world or random. In this network, a node represents a seismic zone in the study area, and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, the method that randomizes the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.

  23. Real-Time Earthquake Monitoring with Spatio-Temporal Fields

    NASA Astrophysics Data System (ADS)

    Whittier, J. C.; Nittel, S.; Subasinghe, I.

    2017-10-01

    With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism to deliver observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope need to write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open source Data Stream Engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream, and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlated the detected events with a USGS earthquake event feed. The query results are visualized in real time.
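
    Stripped of the DSE machinery, the per-stream windowed-maximum logic might look like this plain-Python sketch. Station names, window length, and threshold are assumptions, not SCIGN or Spark specifics.

    ```python
    # Per-station sliding-window maximum displacement; an event is flagged when
    # several stations exceed the threshold within the same window.
    from collections import defaultdict, deque

    WINDOW, THRESHOLD, MIN_STATIONS = 30, 0.05, 3   # samples, meters, stations (assumed)

    windows = defaultdict(lambda: deque(maxlen=WINDOW))  # one ring buffer per station

    def ingest(station, displacement):
        """Add one sample; return the set of stations currently over threshold."""
        windows[station].append(displacement)
        return {s for s, w in windows.items() if max(w) > THRESHOLD}

    # toy usage: four GPS-like streams each receiving one displacement sample
    for station, disp in [("P478", 0.01), ("P479", 0.09), ("P480", 0.11), ("P481", 0.08)]:
        hot = ingest(station, disp)
        if len(hot) >= MIN_STATIONS:
            print("possible earthquake, spatial extent:", sorted(hot))
    ```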

  24. Possibility of the real-time dynamic strain field monitoring deduced from GNSS data: case study of the 2016 Kumamoto earthquake sequence

    NASA Astrophysics Data System (ADS)

    Ohta, Y.; Ohzono, M.; Takahashi, H.; Kawamoto, S.; Hino, R.

    2017-12-01

    A large and destructive earthquake (Mjma 7.3) occurred on April 15, 2016 in the Kumamoto region, southwestern Japan. This earthquake was accompanied approximately 32 s later by an M6 earthquake in the central Oita region, whose hypocenter was located 80 km northeast of the hypocenter of the mainshock of the Kumamoto earthquake. This triggered earthquake was also followed by many aftershocks in and around the Oita region. It is important to understand how such chain-reaction earthquake sequences occur. We used the 1 Hz dual-frequency phase and range data from GEONET in Kyushu island. The data were processed using GIPSY-OASIS (version 6.4). We adopted a kinematic PPP strategy for the coordinate estimation. The reference GPS satellite orbit and 5 s clock information were obtained using the CODE product. We also applied a simple sidereal filter technique to the estimated time series. Based on the obtained 1 Hz GNSS time series, we estimated the areal strain and principal strain fields using the method of Shen et al. (1996). For the assessment of the dynamic strain, we first calculated the average absolute value of the areal strain field between 60 and 85 s after the origin time of the mainshock of the Kumamoto earthquake, which was used as the "reference" static strain field. Second, we estimated the absolute value of the areal strain at each time step. Finally, we calculated the strain ratio at each time step relative to the "reference". Based on this procedure, we can extract the spatial and temporal characteristics of the dynamic strain at each time step. The extracted strain ratio clearly shows the spatial and temporal dynamic strain characteristics. When attention is paid to the region of the triggered Oita earthquake, the timing of the maximum dynamic strain ratio at the epicenter corresponds exactly to the origin time of the triggered event. This strongly suggests that the large dynamic strain may have triggered the Oita event. The epicenter of the triggered earthquake is located within a geothermal region. In
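
    The three-step strain-ratio procedure lends itself to a direct sketch. The input below is a synthetic one-node areal-strain series, not GEONET data.

    ```python
    # Strain-ratio procedure as described: (1) average |areal strain| over
    # 60-85 s after origin time as the reference, (2) take |strain| at each
    # time step, (3) divide by the reference.
    import numpy as np

    fs = 1.0                                   # Hz (1 Hz GNSS solutions)
    t = np.arange(0.0, 200.0, 1.0 / fs)        # seconds after mainshock origin time
    areal_strain = np.random.default_rng(4).normal(0, 1e-7, t.size)  # synthetic

    ref_mask = (t >= 60.0) & (t <= 85.0)
    reference = np.mean(np.abs(areal_strain[ref_mask]))   # step 1: "reference" level
    ratio = np.abs(areal_strain) / reference              # steps 2-3: per-step ratio

    print(f"peak strain ratio {ratio.max():.2f} at t = {t[np.argmax(ratio)]:.0f} s")
    ```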

  25. Real-time neural network earthquake profile predictor

    DOEpatents

    Leach, R.R.; Dowla, F.U.

    1996-02-06

    A neural network has been developed that uses first-arrival energy to predict the characteristics of impending earthquake seismograph signals. The propagation of ground motion energy through the earth is a highly nonlinear function. This is due to different forms of ground motion as well as to changes in the elastic properties of the media throughout the propagation path. The neural network is trained using seismogram data from earthquakes. Presented with a previously unseen earthquake, the neural network produces a profile of the complete earthquake signal using data from the first seconds of the signal. This offers a significant advance in the real-time monitoring, warning, and subsequent hazard minimization of catastrophic ground motion. 17 figs.

  26. Real-time neural network earthquake profile predictor

    DOEpatents

    Leach, Richard R.; Dowla, Farid U.

    1996-01-01

    A neural network has been developed that uses first-arrival energy to predict the characteristics of impending earthquake seismograph signals. The propagation of ground motion energy through the earth is a highly nonlinear function. This is due to different forms of ground motion as well as to changes in the elastic properties of the media throughout the propagation path. The neural network is trained using seismogram data from earthquakes. Presented with a previously unseen earthquake, the neural network produces a profile of the complete earthquake signal using data from the first seconds of the signal. This offers a significant advance in the real-time monitoring, warning, and subsequent hazard minimization of catastrophic ground motion.

  27. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ("seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and

  28. Real-time earthquake monitoring using a search engine method.

    PubMed

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
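
    A brute-force caricature of the waveform-search idea follows: normalize the incoming record, find the nearest database seismogram, and return its stored source parameters. The real system's speed comes from fast indexing over a large synthetic-seismogram database; everything below (database, stored parameters, match metric) is made up for illustration.

    ```python
    # Nearest-neighbor lookup of an incoming waveform against a seismogram
    # database; the stored metadata of the best match serves as the answer.
    import numpy as np

    rng = np.random.default_rng(5)
    n, npts = 1000, 512
    database = rng.normal(size=(n, npts))                          # placeholder seismograms
    params = [{"strike": int(rng.integers(0, 360))} for _ in range(n)]  # stored mechanisms

    def search(record):
        """Return index and parameters of the best-fitting database waveform (L2)."""
        record = (record - record.mean()) / record.std()
        db = database - database.mean(axis=1, keepdims=True)
        db /= db.std(axis=1, keepdims=True)
        best = int(np.argmin(np.sum((db - record) ** 2, axis=1)))
        return best, params[best]

    # toy query: a database trace plus noise should match itself
    idx, best_params = search(database[123] + 0.1 * rng.normal(size=npts))
    print("best match:", idx, best_params)
    ```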

  29. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    USGS Publications Warehouse

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  30. Combining multiple earthquake models in real time for earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.

    2017-01-01

    The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.

  31. Rapid estimation of earthquake magnitude from the arrival time of the peak high-frequency amplitude

    USGS Publications Warehouse

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2 log Top for earthquakes 5 ≤ Mw ≤ 7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (M estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from the lower-frequency data. Top depends weakly on epicentral distance, and this dependence can be ignored for distances <200 km. Retrospective application of this algorithm to the 2011 Tohoku earthquake produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
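
    The stated scaling is a one-liner. The intercept c below is a placeholder, since the abstract gives the proportionality M ∝ 2 log Top but not the calibrated constant.

    ```python
    # Magnitude proxy from the onset-to-peak time Top: M = 2*log10(Top) + c,
    # with c an assumed illustrative constant (not from the paper).
    import numpy as np

    def m_top(top_seconds, c=6.0):
        """Estimate magnitude from Top (s) via the 2*log10(Top) scaling."""
        return 2.0 * np.log10(top_seconds) + c

    for top in [3.0, 10.0, 30.0]:
        print(f"Top = {top:5.1f} s  ->  MTop = {m_top(top):.1f}")
    ```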

  32. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models: 2. Laboratory earthquakes

    NASA Astrophysics Data System (ADS)

    Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.

    2012-02-01

    The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term-sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

  33. Real-time earthquake monitoring: Early warning and rapid response

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A panel was established to investigate the subject of real-time earthquake monitoring (RTEM) and suggest recommendations on the feasibility of using a real-time earthquake warning system to mitigate earthquake damage in regions of the United States. The findings of the investigation and the related recommendations are described in this report. A brief review of existing real-time seismic systems is presented with particular emphasis given to the current California seismic networks. Specific applications of a real-time monitoring system are discussed along with issues related to system deployment and technical feasibility. In addition, several non-technical considerations are addressed including cost-benefit analysis, public perceptions, safety, and liability.

  34. Earthquake forecasting studies using radon time series data in Taiwan

    NASA Astrophysics Data System (ADS)

    Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong

    2017-04-01

    For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data, interpreted as geochemical precursory signals for impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate the data with our working procedure, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP), creating a website that effectively displays the data and helps us manage the real-time database.

  15. Analysis of post-earthquake reconstruction for Wenchuan earthquake based on night-time light data from DMSP/OLS

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Zhang, Jing; Yang, Mingxiang; Lei, Xiaohui

    2017-07-01

    At present, most Defense Meteorological Satellite Program Operational Linescan System (DMSP/OLS) night-time light data are applied to large-scale regional development assessment, and little has been done with them for the study of earthquakes and other disasters. This study extracted night-time light information before and after the earthquake within Wenchuan County using DMSP/OLS night-time light data. The analysis shows that the night-time light index and average intensity of Wenchuan County decreased by about 76% and 50%, respectively, from 2007 to 2008. From 2008 to 2011, the two indicators increased by about 200% and 556%, respectively. These results show that night-time light data can be used to extract earthquake-related information and to assess the occurrence of, and recovery from, earthquakes and other disasters.

  16. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    The sequence of the 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 event. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event did. We discuss the induced earthquake from the viewpoint of Earthquake Early Warning (EEW). In terms of ground shaking such as PGA and PGV, the contribution of the Mw7.0 event at central Oita is much smaller than that of the M6 induced earthquake (for example, about 1/8 as large for PGA at station OIT009), so it is easy to discriminate the two events. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used to predict ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, displacement magnitude is not necessarily the best quantity for predicting ground shaking such as PGA and PGV. In the case of the earthquake induced during the Kumamoto earthquake, a displacement magnitude could not be estimated because of the strong contamination; indeed, the JMA EEW system could not recognize the induced earthquake. One of the important lessons we learned from eight years of EEW operation is the issue of multiple simultaneous earthquakes, such as aftershocks of the 2011 Mw9.0 Tohoku earthquake. Based on this lesson, we have proposed enhancement of real-time monitoring of ground shaking itself instead of rapid estimation of

  17. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    NASA Astrophysics Data System (ADS)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several tests in quasi-real time have been conducted by the rapid response group at the CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at the CSN; all of these algorithms run automatically, triggered by the W-phase point source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses, and for each one we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.
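
    The abstract does not state which scaling relations the CSN scheme adopts, so the sketch below stands in with the Wells and Coppersmith (1994) all-slip-type relations of the form log10(dim) = a + b·Mw; a subduction-specific regression would simply swap in its own coefficients. A hedged illustration, not the operational code:

    ```python
    def fault_dimensions(mw, a_len=-2.44, b_len=0.59, a_wid=-1.01, b_wid=0.32):
        """Rupture length and width (km) from moment magnitude via
        log10(dim) = a + b * Mw.  Default coefficients are the Wells &
        Coppersmith (1994) all-slip-type subsurface-rupture relations,
        used here purely for illustration."""
        return 10 ** (a_len + b_len * mw), 10 ** (a_wid + b_wid * mw)

    for mw in (8.8, 8.2, 8.3, 7.6):   # the four Chilean events discussed above
        length, width = fault_dimensions(mw)
        print(f"Mw {mw}: L ~ {length:.0f} km, W ~ {width:.0f} km")
    ```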

  18. Re-examination of the original questionnaire documents for the 1944 Tonankai, 1945 Mikawa, and 1946 Nankai earthquakes

    NASA Astrophysics Data System (ADS)

    Harada, Tomoya; Satake, Kenji; Furumura, Takashi

    2016-04-01

    With the object of estimating seismic intensity, the Earthquake Research Institute (ERI) of the University of Tokyo performed questionnaire surveys for significant (destructive or large/great) earthquakes from 1943 to 1988 (Kayano, 1990, BERI). In these surveys, Kawasumi's (1943) 12-class seismic intensity scale, similar to the Modified Mercalli (MM) scale, was used. Survey results for earthquakes after 1950 were well investigated and published (e.g. Kayano and Komaki, 1977, BERI; Kayano and Sato, 1975, BERI), but the survey results for earthquakes in the 1940s have not been published, and the original documents of the surveys were missing. Recently, the original sheets of the surveys for the five earthquakes of the 1940s with more than 1,000 casualties were discovered in the ERI warehouse, although they are incomplete (Tsumura et al., 2010). They cover the 1943 Tottori (M 7.2), 1944 Tonankai (M 7.9), 1945 Mikawa (M 6.8), 1946 Nankai (M 8.0), and 1948 Fukui (M 7.1) earthquakes. In this study, we examined the original questionnaire and summary sheets for the 1944 Tonankai, 1945 Mikawa, and 1946 Nankai earthquakes, and estimated the distributions of seismic intensity, various kinds of damage, and human behavior in detail. The numbers of survey points for the 1944, 1945, and 1946 events are 287, 145, and 1,014, respectively. The numbers for the 1944 and 1945 earthquakes are much smaller than that for the 1946 event because they occurred during the last years of World War II. The 1944 seismic intensities in the prefectures near the source region (Aichi, Mie, Shizuoka, and Gifu) tend to be high. However, the 1944 intensities are also high, and damage was serious, along the Suwa Lake shore in Nagano Prefecture, about 240 km from the source region, because seismic waves are amplified dramatically in the thick sediments of the Suwa Basin. Seismic intensities of the 1945 Mikawa earthquake near the source region in Aichi Prefecture were very high (X-XI). However, the

  19. Cortical origin of the 2007 Mw = 6.2 Aysén earthquake: surface rupture evidence and paleoseismological assessment

    NASA Astrophysics Data System (ADS)

    Villalobos, A.

    2015-12-01

    On 2007 April 21, a Mw = 6.2 earthquake hit the Aysén region, an area of low seismicity in southern Chile. This event was the main shock of a sequence of earthquakes that was felt from 10 January, beginning with a small earthquake of magnitude ML < 3, through February 2008 in the form of recurrent aftershocks. The area is characterized by the presence of the Liquiñe-Ofqui Fault System (LOFS), a neotectonic feature and the main seismotectonic element of southern Chile. In this research we use improved sub-aqueous paleoseismological techniques together with geomorphological evidence to constrain the seismogenic source of this event as being of cortical (crustal) origin. We establish that the Punta Cola Fault, a dextral-reverse structure that in seismic profiles exhibits a complex fault zone with a distinct positive flower geometry, was responsible for the main shock. This fault caused vertical offsets that reached the seafloor, generating fault scarps in a mass-movement deposit triggered by the same earthquake. Following this idea, a model of surface rupture is proposed for this structure. Further evidence that this cortical phenomenon is not an isolated event in time is provided by paleoseismological trench-like mappings in sub-bottom profiles.

  20. Real-time earthquake monitoring using a search engine method

    PubMed Central

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-01-01

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
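
    A minimal sketch of the matching criterion behind such a search engine: normalized correlation of an input record against a database of precomputed waveforms, returning the best-fitting entry. The real system's speed comes from an indexed, approximate search several thousand times faster than this brute-force scan; the data here are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical database: each row is a precomputed seismogram associated
    # with known source parameters (represented here only by its row index).
    raw = rng.standard_normal((10000, 512))
    database = raw / np.linalg.norm(raw, axis=1, keepdims=True)

    def best_match(observed):
        """Return the index and score of the database waveform most similar
        to the observation (normalized dot product = correlation coefficient)."""
        obs = observed / np.linalg.norm(observed)
        scores = database @ obs
        best = int(np.argmax(scores))
        return best, float(scores[best])

    observed = database[1234] + 0.02 * rng.standard_normal(512)  # noisy copy
    print(best_match(observed))   # -> (1234, ~0.9)
    ```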

  1. Real-Time Earthquake Analysis for Disaster Mitigation (READI) Network

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2014-12-01

    Real-time GNSS networks are making a significant impact on our ability to forecast, assess, and mitigate the effects of geological hazards. I describe the activities of the Real-time Earthquake Analysis for Disaster Mitigation (READI) working group. The group leverages 600+ real-time GPS stations in western North America operated by UNAVCO (PBO network), Central Washington University (PANGA), US Geological Survey & Scripps Institution of Oceanography (SCIGN project), UC Berkeley & US Geological Survey (BARD network), and the Pacific Geosciences Centre (WCDA project). Our goal is to demonstrate an earthquake and tsunami early warning system for western North America. Rapid response is particularly important for coastal communities that are in the near-source region of large earthquakes, may have only minutes of warning time, and today are not adequately covered by existing seismic and basin-wide ocean-buoy monitoring systems. The READI working group is comparing independent real-time analyses of 1 Hz GPS data for station displacements and is participating in government-sponsored earthquake and tsunami exercises in the western U.S. I describe a prototype seismogeodetic system, using a cluster of southern California stations, that combines GNSS tracking with collocated MEMS accelerometers for real-time estimation of seismic velocity and displacement waveforms; this has advantages for earthquake early warning and tsunami forecasting compared to seismic-only or GPS-only methods. The READI working group's ultimate goal is to participate in an Indo-Pacific tsunami early warning system that uses GNSS real-time displacements and ionospheric measurements along with seismic, near-shore buoy and ocean-bottom pressure sensors, where available, to rapidly estimate magnitude and finite fault slip models for large earthquakes, and then forecast tsunami source, energy scale, geographic extent, inundation and runup. This will require

  2. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during 'characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining the applicable statistics is the shortness of the sequences of characteristic earthquakes available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach using sequences of microrepeaters, micro-earthquakes that recur at the same location on a fault; it seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA, as well as NE Japan. We find that, once the respective sequence can be considered sufficiently stationary, the statistics can be well fitted by either a Weibull or a log-normal distribution, as our technique of rescaled combination clearly demonstrates. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
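
    A minimal sketch of the rescaled-combination idea with synthetic sequences standing in for the real microrepeater catalogs: each short recurrence-time sequence is standardized by its own mean and standard deviation, the sequences are superimposed, and a single distribution is fitted to the combined sample.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical recurrence-time sequences (days) for three repeater families.
    families = [stats.weibull_min.rvs(2.0, scale=s, size=20, random_state=rng)
                for s in (80.0, 150.0, 400.0)]

    # Rescale each short sequence to zero mean and unit standard deviation,
    # then superimpose them into one longer sample, as described above.
    combined = np.concatenate([(t - t.mean()) / t.std() for t in families])

    # Fit a (shifted) Weibull distribution to the combined sample.
    shape, loc, scale = stats.weibull_min.fit(combined)
    print(f"Weibull fit: shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}")
    ```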

  3. Evaluation of the real-time earthquake information system in Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, Hiromitsu; Horiuchi, Shigeki; Wu, Changjiang; Yamamoto, Shunroku; Rydelek, Paul A.

    2009-01-01

    The real-time earthquake information system (REIS) of the Japanese seismic network was developed to determine earthquake parameters automatically within a few seconds after the P waves arrive at the closest stations, using both the P-wave arrival times and the timing information that P waves have not yet arrived at other stations. REIS results play a fundamental role in real-time information for earthquake early warning in Japan. We show the rapidity and accuracy of REIS from the analysis of 4,050 earthquakes in the three years since 2005: 44 percent of the first reports are issued within 5 seconds after the first P-wave arrival, and 80 percent of the events differ in epicenter by less than 20 km from manually determined locations. We also compared magnitudes from the formal catalog to those estimated by the real-time analysis and found that 94 percent of the events had a magnitude difference within +/-1.0 unit.

  4. Towards the Future "Earthquake" School in the Cloud: Near-real Time Earthquake Games Competition in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. H.; Liang, W. T.; Wu, Y. F.; Yen, E.

    2014-12-01

    To prepare for future natural disaster threats, it is important to understand how a disaster happened, why lives were lost, and what lessons have been learned; in this way, society's attitude toward natural disasters can be transformed from training to learning. The citizen-seismologists-in-Taiwan project is designed to elevate the quality of earthquake science education by incorporating earthquake/tsunami stories and a near-real-time earthquake games competition into the traditional curricula in schools. Through pilot courses and professional development workshops, we have worked closely with teachers from elementary, junior high, and senior high schools to design workable teaching plans built around the practical operation of seismic monitoring at home or at school. We will introduce how 9-year-olds pick P- and S-wave arrivals and measure seismic intensity through an interactive learning platform, how scientists and school teachers work together, and how we create an environment that facilitates continuous learning (i.e., the near-real-time earthquake games competition) to make earthquake science fun.

  5. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014, the Earthworm-Based Earthquake Alarm Reporting (eBEAR) system has been operated and used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. On average, the eBEAR system can provide more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes are usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an earthquake, only a few stations are available; this poor station coverage may explain why offshore earthquakes are difficult to locate accurately. Geiger's inversion procedure for earthquake location requires an initial hypocenter and origin time as input. Instead of using the average location of the triggered stations as the initial hypocenter, we defined a set of trial locations in the offshore area, and we ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position. We assume that if a pre-defined initial position is close to the true earthquake location, the processing time of that instance during the iteration procedure should be less than the others. The results show that using pre-defined trial hypocenters in the inversion procedure improves the location accuracy of offshore earthquakes. In the initial stage of an EEW system especially, locating earthquakes with only 3 to 5 stations can lead to poor results because of the poor station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
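
    A sketch of the multi-start idea on a homogeneous-velocity, 2-D toy problem (the operational system works in 3-D with real travel-time tables; all coordinates and times below are invented). Each pre-defined offshore trial point seeds an independent Geiger least-squares run, and the run that converges fastest with the smallest misfit is kept.

    ```python
    import numpy as np

    V = 6.0  # assumed homogeneous P-wave speed (km/s)

    def geiger(stations, t_obs, m0, n_iter=50, tol=1e-6):
        """2-D Geiger location: iteratively solve G*dm = r for perturbations
        to m = (x, y, origin time) until the epicenter stops moving."""
        m = np.asarray(m0, float).copy()
        for it in range(n_iter):
            dx = stations[:, 0] - m[0]
            dy = stations[:, 1] - m[1]
            dist = np.hypot(dx, dy)
            r = t_obs - (m[2] + dist / V)               # travel-time residuals
            G = np.column_stack([-dx / (dist * V),      # d(t_pred)/dx
                                 -dy / (dist * V),      # d(t_pred)/dy
                                 np.ones_like(dist)])   # d(t_pred)/dt0
            dm = np.linalg.lstsq(G, r, rcond=None)[0]
            m += dm
            if np.hypot(dm[0], dm[1]) < tol:
                break
        dist = np.hypot(stations[:, 0] - m[0], stations[:, 1] - m[1])
        rms = np.sqrt(np.mean((t_obs - m[2] - dist / V) ** 2))
        return m, rms, it + 1

    # Synthetic offshore event recorded by a one-sided onshore network (km, s).
    stations = np.array([[0, 0], [20, 5], [10, 30], [5, 50], [30, 40]], float)
    true_xyt = np.array([80.0, 20.0, 0.0])
    t_obs = true_xyt[2] + np.hypot(*(stations - true_xyt[:2]).T) / V

    # Seed Geiger's method from several pre-defined offshore trial points and
    # keep the run that converges fastest with the lowest misfit.
    trials = [(60, 0), (60, 40), (100, 0), (100, 40), (80, 20)]
    runs = [geiger(stations, t_obs, (x, y, -1.0)) for x, y in trials]
    best = min(runs, key=lambda run: (run[2], run[1]))
    print("best solution:", np.round(best[0], 2), "iterations:", best[2])
    ```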

  6. The performance review of EEWS (Earthquake Early Warning System) about the Gyeongju earthquakes with Ml 5.1 and Ml 5.8 in Korea

    NASA Astrophysics Data System (ADS)

    Park, Jung-Ho; Chi, Heon-Cheol; Lim, In-Seub; Seong, Yun-Jeong; Park, Jihwan

    2017-04-01

    EEW (Earthquake Early Warning) service to the public has been officially operated by the KMA (Korea Meteorological Administration) since 2015 in Korea. For the KMA's official EEW service, KIGAM adopted ElarmS from UC Berkeley BSL and modified the local magnitude relation, the 1-D travel-time curves, and the association procedures for real-time waveforms from about 201 seismic stations of KMA, KIGAM, KINS, and KEPRI. Two moderate-size earthquakes, of magnitude Ml 5.1 and Ml 5.8, occurred close to Gyeongju city in the southeastern part of Korea on Sep. 12, 2016. We checked the performance of the EEWS (Earthquake Early Warning System) named TrigDB, developed by KIGAM, by reviewing these two Gyeongju earthquakes. The station nearest to the epicenters of the two earthquakes, Ml 5.1 (35.7697 N, 129.1904 E) and Ml 5.8 (35.7632 N, 129.1898 E), was MKL, which detected P phases about 2.1 and 3.6 seconds after the respective origin times. The first event notifications were issued 6.3 and 7.0 seconds after each origin time. Because early-stage results can be unstable when very few stations are available and the automated analysis behaves unexpectedly, the KMA has a policy of waiting 20 more seconds to confirm reliability. For these events, KMA published EEW alarms about 26 seconds after the origin times, with M 5.3 and M 5.9, respectively.
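
    A back-of-the-envelope consequence of the ~26 s alarm delay reported above: assuming a generic average S-wave speed of about 3.5 km/s (an illustrative value, not from the paper), sites closer to the epicenter than the "blind zone" radius are shaken before the alarm can reach them.

    ```python
    VS = 3.5          # km/s, assumed average S-wave speed (illustrative)
    t_alert = 26.0    # s, alarm delay after origin time (from the abstract)

    print(f"blind-zone radius: {VS * t_alert:.0f} km")

    for d in (100, 150, 200, 300):        # epicentral distance (km)
        warning = d / VS - t_alert        # seconds of warning before S waves
        print(f"{d:4d} km -> {max(warning, 0.0):.0f} s of warning")
    ```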

  7. Source time function properties indicate a strain drop independent of earthquake depth and magnitude.

    PubMed

    Vallée, Martin

    2013-01-01

    The movement of tectonic plates leads to strain build-up in the Earth, which can be released during earthquakes when one side of a seismic fault suddenly slips with respect to the other. The amount of seismic strain release (or 'strain drop') is thus a direct measurement of a basic earthquake property, that is, the ratio of seismic slip over the dimension of the ruptured fault. Here the analysis of a new global catalogue, containing ~1,700 earthquakes with magnitude larger than 6, suggests that strain drop is independent of earthquake depth and magnitude. This invariance implies that deep earthquakes are even more similar to their shallow counterparts than previously thought, a puzzling finding as shallow and deep earthquakes are believed to originate from different physical mechanisms. More practically, this property contributes to our ability to predict the damaging waves generated by future earthquakes.
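
    For orientation, strain drop as defined here can be written out, together with its link to stress drop; the numbers are generic textbook values for illustration, not results from the catalogue analysed in the paper:

    ```latex
    \[
      \Delta\varepsilon = \frac{\bar{u}}{L},
      \qquad
      \Delta\sigma \approx \mu \, \Delta\varepsilon .
    \]
    % Example: mean slip u = 1 m over a rupture dimension L = 30 km gives a
    % strain drop of ~3e-5; with rigidity mu = 30 GPa this corresponds to a
    % stress drop of ~1 MPa, within the 1-10 MPa range typically quoted.
    ```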

  8. Intraslab Earthquakes: Dehydration of the Cascadia Slab

    USGS Publications Warehouse

    Preston, L.A.; Creager, K.C.; Crosson, R.S.; Brocher, T.M.; Trehu, A.M.

    2003-01-01

    We simultaneously invert travel times of refracted and wide-angle reflected waves for three-dimensional compressional-wave velocity structure, earthquake locations, and reflector geometry in northwest Washington state. The reflector, interpreted to be the crust-mantle boundary (Moho) of the subducting Juan de Fuca plate, separates intraslab earthquakes into two groups, permitting a new understanding of the origins of intraslab earthquakes in Cascadia. Earthquakes up-dip of the Moho's 45-kilometer depth contour occur below the reflector, in the subducted oceanic mantle, consistent with serpentinite dehydration; earthquakes located down-dip occur primarily within the subducted crust, consistent with the basalt-to-eclogite transformation.

  9. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake with its huge tsunami devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither short-term nor long-term. Seismologists were shocked because such an event was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake: throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important task, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which was not the best route to short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, the project defiantly changed its policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no-prediction research". The public were not, and are not, informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted that this will most likely be achieved through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect on our case, although its epicenter was far offshore, out of the range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, in ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced: the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals
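
    A minimal sketch of the order-parameter computation as described in the natural-time literature: the k-th of N events is assigned natural time χ_k = k/N and weight p_k equal to its normalized energy release, and κ1 is the weighted variance of χ. The synthetic energies below, and the κ1 ≈ 0.070 criticality value quoted in that literature, are illustrative context rather than results from this abstract.

    ```python
    import numpy as np

    def kappa1(energies):
        """kappa1 = <chi^2> - <chi>^2 with chi_k = k/N and weights
        p_k = E_k / sum(E), the natural-time order parameter of seismicity."""
        e = np.asarray(energies, float)
        chi = np.arange(1, len(e) + 1) / len(e)
        p = e / e.sum()
        return np.sum(p * chi ** 2) - np.sum(p * chi) ** 2

    # Hypothetical heavy-tailed sequence of seismic energy releases.
    rng = np.random.default_rng(4)
    print(f"kappa1 = {kappa1(rng.pareto(1.5, size=100) + 1.0):.3f}")
    ```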

  10. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point-source models for the rupture. However, for large earthquakes the fault rupture length can be of the order of tens to hundreds of kilometers, and predicting the ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point-source model may underestimate the ground motion at a site if the station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong-motion records (Yamada et al., 2007), defining the near-source region as the area within a fault rupture distance of 10 km. Given ground motion records at a station, the probability that the station is located in the near-source region is P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that its station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location information obtained this way is a set of points at the station locations; for practical purposes, however, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong-motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, converting the point fault locations into 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1. Furthermore, we illustrate our method with strong motion data of the
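
    The discriminant quoted above translates directly into code; a small sketch (the input units for Za and Hv are assumed here to be cm/s² and cm/s, so consult Yamada et al., 2007 before any reuse):

    ```python
    import math

    def near_source_probability(za, hv):
        """Probability that a station lies in the near-source region (fault
        rupture distance < 10 km), using the empirical discriminant quoted
        in the abstract.  za: peak vertical acceleration, hv: peak
        horizontal velocity (units assumed cm/s^2 and cm/s)."""
        f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
        return 1.0 / (1.0 + math.exp(-f))

    # Strong shaking at a station implies a high near-source probability.
    print(near_source_probability(za=500.0, hv=50.0))   # hypothetical peaks
    ```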

  11. Time-dependent earthquake forecasting: Method and application to the Italian region

    NASA Astrophysics Data System (ADS)

    Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.

    2009-12-01

    We develop a new approach for time-dependent earthquake forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a bandwidth function acting as a smoothing kernel in the neighborhood of earthquakes. To incorporate fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each earthquake in the study area. From this, the change of seismicity rate as a function of time can be estimated via the concept of rate-and-state stress transfer. We apply our approach to the region of Italy, using earthquakes that occurred before 2003 to generate the seismicity density. To validate the approach, we compare the estimated seismicity density with the distribution of earthquakes of M≥3.8 after 2004. A positive correlation is found, and all of the examined earthquakes are located within the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density at the epicenter of the 2009 April 6, Mw = 6.3, L’Aquila earthquake is within the highest 5th percentile. For the time-dependent seismicity rate change, we estimate the rate-and-state stress transfer imparted by the M≥5.0 earthquakes that occurred in the past 50 years. The results suggest that the seismicity rate has increased at the locations of 65% of the examined earthquakes. Applying this approach to the L’Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant forecasts of the aftershock distribution not only in space but also in time.
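
    A minimal sketch of the seismicity-density step, with the paper's bandwidth function replaced by a constant-bandwidth 2-D Gaussian kernel for brevity; catalog, grid, and bandwidth are all invented.

    ```python
    import numpy as np

    def seismicity_density(catalog_xy, grid_xy, bandwidth=10.0):
        """Smoothed seismicity density: each epicenter contributes a 2-D
        Gaussian kernel of fixed bandwidth (km) evaluated on the grid."""
        d2 = ((grid_xy[:, None, :] - catalog_xy[None, :, :]) ** 2).sum(axis=2)
        k = np.exp(-d2 / (2 * bandwidth ** 2)) / (2 * np.pi * bandwidth ** 2)
        return k.sum(axis=1)

    # Hypothetical epicenters (km coordinates) and evaluation grid.
    rng = np.random.default_rng(5)
    catalog = rng.uniform(0, 100, size=(500, 2))
    xs, ys = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
    grid = np.column_stack([xs.ravel(), ys.ravel()])
    print(seismicity_density(catalog, grid).reshape(xs.shape).max())
    ```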

  12. Twitter Seismology: Earthquake Monitoring and Response in a Social World

    NASA Astrophysics Data System (ADS)

    Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G.

    2011-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. The potential uses of Twitter for earthquake response include broadcasting earthquake alerts, rapidly detecting widely felt events, qualitatively assessing earthquake damage effects, communicating with the public, and participating in post-event collaboration. Several seismic networks and agencies are currently distributing Twitter earthquake alerts, including the European-Mediterranean Seismological Centre (@LastQuake), Natural Resources Canada (@CANADAquakes), and the Indonesian meteorological agency (@infogempabmg); the USGS will soon distribute alerts via the @USGSted and @USGSbigquakes Twitter accounts. Beyond broadcasting alerts, the USGS is investigating how to use tweets that originate near the epicenter to detect and characterize shaking events. This is possible because people begin tweeting immediately after feeling an earthquake, and their short narratives and exclamations are available for analysis within tens of seconds of the origin time. Using five months of tweets that contain the word "earthquake" and its equivalent in other languages, we generate a tweet-frequency time series. The time series clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a simple Short-Term-Average / Long-Term-Average algorithm similar to that commonly used to detect seismic phases. As with most auto-detection algorithms, the parameters can be tuned to catch more or fewer events at the cost of more or fewer false triggers. When tuned to a moderate sensitivity, the detector found 48 globally distributed, confirmed seismic events with only 2 false triggers; a space-shuttle landing and "The Great California ShakeOut" caused the false triggers. This number of
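
    A sketch of that detector on a synthetic per-second tweet-count series. The window lengths and threshold are arbitrary stand-ins (the USGS parameters are not given here), and a production detector would use causal windows rather than the centered convolution below.

    ```python
    import numpy as np

    def sta_lta_detect(counts, sta_win=60, lta_win=3600, threshold=5.0):
        """Short-Term-Average / Long-Term-Average detector on a tweet-count
        series; returns indices where the STA/LTA ratio exceeds the threshold."""
        counts = np.asarray(counts, float)
        sta = np.convolve(counts, np.ones(sta_win) / sta_win, mode="same")
        lta = np.convolve(counts, np.ones(lta_win) / lta_win, mode="same")
        ratio = sta / np.maximum(lta, 1e-3)   # guard against division by zero
        return np.flatnonzero(ratio > threshold)

    # Synthetic series: background chatter plus a burst after a felt event.
    rng = np.random.default_rng(6)
    counts = rng.poisson(0.5, size=7200).astype(float)
    counts[5000:5060] += 20                   # burst of "earthquake" tweets
    print(sta_lta_detect(counts)[:5])
    ```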

  13. FORESHOCKS AND TIME-DEPENDENT EARTHQUAKE HAZARD ASSESSMENT IN SOUTHERN CALIFORNIA.

    USGS Publications Warehouse

    Jones, Lucile M.

    1985-01-01

    The probability that an earthquake in southern California (M ≥ 3.0) will be followed by an earthquake of larger magnitude within 5 days and 10 km (i.e., will be a foreshock) is 6 ± 0.5 per cent (1 s.d.), and is not significantly dependent on the magnitude of the possible foreshock between M = 3 and M = 5. The probability that an earthquake will be followed by an M ≥ 5.0 main shock, however, increases with the magnitude of the foreshock, from less than 1 per cent at M ≥ 3 to 6.5 ± 2.5 per cent (1 s.d.) at M ≥ 5. The main shock will most likely occur in the first hour after the foreshock, and the probability that a main shock will occur decreases with elapsed time from the occurrence of the possible foreshock by approximately the inverse of time. Thus, the occurrence of an earthquake of M ≥ 3.0 in southern California raises the earthquake hazard within a small space-time window several orders of magnitude above the normal background level.

  14. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard in Taiwan and its temporal changes, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, implemented with the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model is adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity-rate change based on the static Coulomb stress interaction between seismogenic sources. By considering these time-dependent factors, our combined model suggests increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress imparted by the 6 February 2016 ML 6.6 Meinong earthquake, for example, significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level for the near future. Our approach draws on the advantages of incorporating both long- and short-term models to provide time-dependent earthquake probability constraints. Because it considers more detailed information than previously published models, it offers decision-makers and public officials an adequate basis for rapid evaluation of, and response to, future emergency scenarios such as victim relocation and sheltering.
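
    A minimal sketch of the long-term (BPT) half of such a calculation. The BPT distribution is the inverse Gaussian, available in scipy; the mean recurrence, aperiodicity, and elapsed time below are invented values, not TEM parameters.

    ```python
    from scipy.stats import invgauss

    def bpt_conditional_probability(mu, alpha, t_elapsed, dt):
        """P(rupture within dt | no rupture for t_elapsed) under a Brownian
        Passage Time renewal model with mean recurrence mu and aperiodicity
        alpha.  scipy's invgauss(m, scale=s) has mean m*s and aperiodicity
        sqrt(m), so m = alpha**2 and s = mu / alpha**2."""
        dist = invgauss(alpha ** 2, scale=mu / alpha ** 2)
        return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

    # Hypothetical fault: 150-yr mean recurrence, aperiodicity 0.5,
    # 120 yr elapsed since the last rupture, 30-yr forecast window.
    print(bpt_conditional_probability(mu=150.0, alpha=0.5, t_elapsed=120.0, dt=30.0))
    ```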

  15. USGS Tweet Earthquake Dispatch (@USGSted): Using Twitter for Earthquake Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Liu, S. B.; Bouchard, B.; Bowden, D. C.; Guy, M.; Earle, P.

    2012-12-01

    The U.S. Geological Survey (USGS) is investigating how online social networking services like Twitter—a microblogging service for sending and reading public text-based messages of up to 140 characters—can augment USGS earthquake response products and the delivery of hazard information. The USGS Tweet Earthquake Dispatch (TED) system is using Twitter not only to broadcast seismically-verified earthquake alerts via the @USGSted and @USGSbigquakes Twitter accounts, but also to rapidly detect widely felt seismic events through a real-time detection system. The detector algorithm scans for significant increases in tweets containing the word "earthquake" or its equivalent in other languages and sends internal alerts with the detection time, tweet text, and the location of the city where most of the tweets originated. It has been running in real-time for 7 months and finds, on average, two or three felt events per day with a false detection rate of less than 10%. The detections have reasonable coverage of populated areas globally. The number of detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The main benefit of the tweet-based detections is speed, with most detections occurring between 19 seconds and 2 minutes from the origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. Going beyond the initial detection, the USGS is developing data mining techniques to continuously archive and analyze relevant tweets for additional details about the detected events. The information generated about an event is displayed on a web-based map designed using HTML5 for the mobile environment, which can be valuable when the user is not able to access a

  16. A moment in time: emergency nurses and the Canterbury earthquakes.

    PubMed

    Richardson, S; Ardagh, M; Grainger, P; Robinson, V

    2013-06-01

    To outline the impact of the Canterbury, New Zealand (NZ) earthquakes on Christchurch Hospital, and the experiences of emergency nurses during this time. NZ has experienced earthquakes and aftershocks centred in the Canterbury region of the South Island. The location of these, around and within the major city of Christchurch, was unexpected and associated with previously unknown fault lines. While the highest magnitude quake occurred in September 2010, registering 7.1 on the Richter scale, it was the magnitude 6.3 event on 22 February 2011 which was associated with the greatest injury burden and loss of life. Staff working in the only emergency department in the city were faced with an external emergency while also being directly affected as part of the disaster. SOURCES OF EVIDENCE: This paper developed following interviews with nurses who worked during this period, and draws on literature related to healthcare responses to earthquakes and natural disasters. The establishment of an injury database allowed for an accurate picture to emerge of the injury burden, and each of the authors was present and worked in a clinical capacity during the earthquake. Nurses played a significant role in the response to the earthquakes and its aftermath. However, little is known regarding the impact of this, either in personal or professional terms. This paper presents an overview of the earthquakes and experiences of nurses working during this time, identifying a range of issues that will benefit from further exploration and research. It seeks to provide a sense of the experiences and the potential meanings that were derived from being part of this 'moment in time'. Examples of innovations in practice emerged during the earthquake response and a number of recommendations for nursing practice are identified. © 2013 The Authors. International Nursing Review © 2013 International Council of Nurses.

  17. Discussion of New Approaches to Medium-Short-Term Earthquake Forecast in Practice of The Earthquake Prediction in Yunnan

    NASA Astrophysics Data System (ADS)

    Hong, F.

    2017-12-01

    Looking back on years of practice of earthquake prediction in the Yunnan area, it is widely considered that fixed-point earthquake precursory anomalies mainly reflect field information. An increase in the amplitude and number of precursory anomalies can help determine the origin time of earthquakes; however, it is difficult to obtain the spatial relevance between earthquakes and precursory anomalies, so we can hardly predict the locations of earthquakes using precursory anomalies. Past practice has shown that seismic activity is superior to precursory anomalies in predicting earthquake locations, since increased seismicity was observed before 80% of the M≥6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies have also turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the 2014 M6.5 Ludian earthquake were shorter and smaller than those derived from the seismicity enhancement region. Based on this past work, the author believes that the level of medium- and short-term earthquake forecasting, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deployed observation arrays that capture the dynamic process of physical-property changes in the enhancement regions of medium-to-small earthquakes.

  18. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent, and in the Taiwan area disaster-inducing earthquakes often result from active faults. For this reason, understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows 31 active faults on the island of Taiwan, some of which are related to past earthquakes. Many researchers have investigated these active faults and continuously provide new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and fieldwork results and integrate these data into an active-fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or relocated earthquakes for each fault and combine them with the fault trace on land to establish a 3D fault geometry model in a GIS system. We collect studies of fault-source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the recurrence interval of active-fault earthquakes. For the other parameters, we collect previous studies and historical references to complete our parameter table of active faults in Taiwan. WG08 performed a time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the rupture probabilities of the faults. Following these steps, we have preliminarily evaluated the probability of earthquake-related hazards in certain

  19. Long-term changes in regular and low-frequency earthquake inter-event times near Parkfield, CA

    NASA Astrophysics Data System (ADS)

    Wu, C.; Shelly, D. R.; Johnson, P. A.; Gomberg, J. S.; Peng, Z.

    2012-12-01

    The temporal evolution of earthquake inter-event time may provide important clues to the timing of future events and the underlying physical mechanisms of earthquake nucleation. In this study, we examine inter-event times from 12-yr catalogs of ~50,000 earthquakes and ~730,000 low-frequency earthquakes (LFEs) in the vicinity of the Parkfield section of the San Andreas Fault. We focus on the long-term evolution of inter-event times after the 2003 Mw6.5 San Simeon and 2004 Mw6.0 Parkfield earthquakes. We find that inter-event times decrease by ~4 orders of magnitude after the Parkfield and San Simeon earthquakes and are followed by a long-term recovery with time scales of ~3 years and more than 8 years for earthquakes along and to the southwest of the San Andreas fault, respectively. The differing long-term recovery of the earthquake inter-event times is likely a manifestation of different aftershock recovery time scales that reflect the different tectonic loading rates in the two regions. We also observe a possible decrease of LFE inter-event times in some LFE families, followed by a recovery with time scales of ~4 months to several years. The drop in LFE recurrence time after the Parkfield earthquake is likely caused by a combination of the dynamic and positive static stress induced by the Parkfield earthquake, and the long-term recovery in LFE recurrence time could be due to post-seismic relaxation or gradual recovery of the fault-zone material properties. Our on-going work includes better constraining and understanding the physical mechanisms responsible for the observed long-term recovery in earthquake and LFE inter-event times.

  20. Triggering of repeating earthquakes in central California

    USGS Publications Warehouse

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
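
    A simplified stand-in for the Monte Carlo test described above, on synthetic data: count how many recurrence intervals that contain a perturbation came out shorter than the family median, then compare that count with its distribution under randomized perturbation times. The real analysis additionally thresholds on dynamic stress amplitude and distance.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def n_shortened(event_times, perturb_times):
        """Number of recurrence intervals that both contain a perturbation
        and are shorter than the median interval of the family."""
        intervals = np.diff(event_times)
        med = np.median(intervals)
        n = 0
        for t0, t1 in zip(event_times[:-1], event_times[1:]):
            if np.any((perturb_times > t0) & (perturb_times < t1)) and (t1 - t0) < med:
                n += 1
        return n

    # Hypothetical repeater family and perturbing-event times (years).
    events = np.sort(rng.uniform(0, 12, size=30))
    perturbs = np.sort(rng.uniform(0, 12, size=10))
    observed = n_shortened(events, perturbs)

    # Null distribution from randomized perturbation times.
    null = [n_shortened(events, rng.uniform(0, 12, size=10)) for _ in range(5000)]
    p_value = float(np.mean(np.asarray(null) >= observed))
    print(observed, p_value)
    ```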

  1. End-User Applications of Real-Time Earthquake Information in Europe

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Gasparini, P.; Giardini, D.; Zschau, J.; Filangieri, A. R.; Reakt Wp7 Team

    2011-12-01

    The primary objective of European FP7 project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction) is to improve the efficiency of real-time earthquake risk mitigation methods and their capability of protecting structures, infrastructures, and populations. REAKT aims to address the issues of real-time earthquake hazard and response from end-to-end, with efforts directed along the full spectrum of methodology development in earthquake forecasting, earthquake early warning, and real-time vulnerability systems, through optimal decision-making, and engagement and cooperation of scientists and end users for the establishment of best practices for use of real-time information. Twelve strategic test cases/end users throughout Europe have been selected. This diverse group of applications/end users includes civil protection authorities, railway systems, hospitals, schools, industrial complexes, nuclear plants, lifeline systems, national seismic networks, and critical structures. The scale of target applications covers a wide range, from two school complexes in Naples, to individual critical structures, such as the Rion Antirion bridge in Patras, and the Fatih Sultan Mehmet bridge in Istanbul, to large complexes, such as the SINES industrial complex in Portugal and the Thessaloniki port area, to distributed lifeline and transportation networks and nuclear plants. Some end-users are interested in in-depth feasibility studies for use of real-time information and development of rapid response plans, while others intend to install real-time instrumentation and develop customized automated control systems. From the onset, REAKT scientists and end-users will work together on concept development and initial implementation efforts using the data products and decision-making methodologies developed with the goal of improving end-user risk mitigation. The aim of this scientific/end-user partnership is to ensure that scientific efforts are applicable to operational

  2. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades, for the most part for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. The method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space, so that global forecasts can be computed in real time, presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving these datasets over the web on platforms such as www.quakesim.org, www.e-decider.org, and www.openhazards.com.

  3. Pore-fluid migration and the timing of the 2005 M8.7 Nias earthquake

    USGS Publications Warehouse

    Hughes, K.L.H.; Masterlark, Timothy; Mooney, W.D.

    2011-01-01

    Two great earthquakes have occurred recently along the Sunda Trench: the 2004 M9.2 Sumatra-Andaman earthquake and the 2005 M8.7 Nias earthquake. These earthquakes ruptured over 1600 km of adjacent crust within 3 months of each other. We present quantitative poroelastic deformation analyses suggesting that postseismic fluid flow and recovery induced by the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake. Simple back-slip simulations indicate that the megapascal (MPa)-scale pore-pressure recovery is equivalent to 7 yr of interseismic Coulomb stress accumulation near the Nias earthquake hypocenter, implying that pore-pressure recovery after the Sumatra-Andaman earthquake advanced the timing of the Nias earthquake by ~7 yr. That is, in the absence of postseismic pore-pressure recovery, we predict that the Nias earthquake would have occurred in 2011 instead of 2005. © 2011 Geological Society of America.

  4. Surviving collapsed structure entrapment after earthquakes: a "time-to-rescue" analysis.

    PubMed

    Macintyre, Anthony G; Barbera, Joseph A; Smith, Edward R

    2006-01-01

    Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. Commonly, this spurs resource-intensive, dangerous, and frustrating attempts to find and extricate live victims. The search and rescue phase usually is maintained for many days beyond the last "save," potentially diverting critical attention and resources away from the pressing needs of non-trapped survivors and the devastated community. This recurring phenomenon is driven by the often-unanswered question "Can anyone still be alive under there?" The maximum survival time in entrapment is an important issue for responders, yet little formal research has been conducted on it. Knowing the maximum survival time in entrapment helps responders: (1) decide whether or not they should continue to assign limited resources to search and rescue activities; (2) assess the safety risks versus the benefits; (3) determine when search and rescue activities no longer are indicated; and (4) time and pace the important transition to community recovery efforts. The time period 1985-2004 was selected for investigation. The Medline and Lexis-Nexis databases were searched for earthquake events that occurred within this timeframe. Medical literature articles providing time-to-rescue data for victims of earthquakes were identified, and Lexis-Nexis reports were scanned to select those with time-to-rescue data. Reports from both databases were examined for information that might contribute to prolonged survival of entrapped individuals. A total of 34 different earthquake events met the study criteria. Forty-eight medical articles containing time-to-rescue data were identified. Of these, the longest time to rescue was "13-19 days" post-event (secondhand data; the author is not specific). The second-longest time to rescue in the medical articles was 8.7 days (209 hours). Twenty-five medical articles report multiple rescues that occurred after two days

  5. A Bayesian Approach to Real-Time Earthquake Phase Association

    NASA Astrophysics Data System (ADS)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use for many years. One of the most significant problems that has emerged with this approach is the extreme variation in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different: both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we use to address this is a responsive, NoSQL graph database in which the earthquake-phase associations are represented as intersecting Bayesian learning networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.

  6. Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide

    NASA Astrophysics Data System (ADS)

    Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.

    2017-12-01

    GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, the algorithms are not regularly exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare the peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas and, where applicable, compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.

  7. Earthquake Education in Prime Time

    NASA Astrophysics Data System (ADS)

    de Groot, R.; Abbott, P.; Benthien, M.

    2004-12-01

    Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in spring 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and the Department of Homeland Security's Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as Fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU), on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story of earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs. We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and

  8. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault. In the first application, the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function, or PDF) that describes a population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus, conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
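
    The conditional-probability calculation that this class of approaches generalizes can be made concrete with a short sketch. The snippet below is an illustration, not the authors' code: it evaluates the conditional probability of failure from an assumed recurrence-time PDF (a lognormal with illustrative mean and coefficient of variation), and a static stress perturbation can be mimicked by advancing the elapsed time.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    mean_recurrence = 200.0   # mean recurrence time in years (assumed)
    cov = 0.5                 # coefficient of variation (assumed)

    # parameterize a lognormal recurrence-time PDF with this mean and COV
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean_recurrence) - 0.5 * sigma ** 2
    dist = lognorm(s=sigma, scale=np.exp(mu))

    def conditional_probability(t_elapsed, dt):
        """P(failure in [t, t+dt] | no failure up to t) = (F(t+dt)-F(t)) / (1-F(t))."""
        F = dist.cdf
        return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

    # 30-yr conditional probability, 150 yr after the last event; a stress step
    # that advances the earthquake clock simply shifts t_elapsed
    print(conditional_probability(150.0, 30.0))
    ```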

  9. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto activated with the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probabilistic calculations, often used for aftershock hazard assessment, which rest on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ~ 1. Then, there is good agreement with the OU law with p ~ 0.5, which indicates notably slow decay. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
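
    The combination of the GR and OU laws that underlies these probability estimates is the standard aftershock-rate calculation, and a minimal version is easy to sketch. The snippet below integrates a Reasenberg-Jones-type rate to get the probability of at least one event above a target magnitude in a time window; all parameter values are illustrative placeholders, not the paper's fits.

    ```python
    import numpy as np

    def prob_at_least_one(t1, t2, mag, a, b, p, c, mainshock_mag):
        """P(>= 1 event with M >= mag in [t1, t2] days) = 1 - exp(-N), where N is
        the integral of the rate 10**(a + b*(Mm - mag)) * (t + c)**(-p)."""
        k = 10.0 ** (a + b * (mainshock_mag - mag))
        if np.isclose(p, 1.0):
            n = k * (np.log(t2 + c) - np.log(t1 + c))
        else:
            n = k * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
        return 1.0 - np.exp(-n)

    # illustrative: M >= 7 within 3 yr of an M9 mainshock, with the slow decay
    # (p ~ 0.5) reported for southern Kanto; a, b, c are placeholder values
    print(prob_at_least_one(t1=0.0, t2=3 * 365.0, mag=7.0,
                            a=-3.5, b=1.0, p=0.5, c=1.0, mainshock_mag=9.0))
    ```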

  10. Is Your Class a Natural Disaster? It can be... The Real Time Earthquake Education (RTEE) System

    NASA Astrophysics Data System (ADS)

    Whitlock, J. S.; Furlong, K.

    2003-12-01

    In cooperation with the U.S. Geological Survey (USGS) and its National Earthquake Information Center (NEIC) in Golden, Colorado, we have implemented an autonomous version of the NEIC's real-time earthquake database management and earthquake alert system (Earthworm). This is the same system used professionally by the USGS in its earthquake response operations. Utilizing this system, Penn State University students participating in natural hazard classes receive real-time alerts of worldwide earthquake events on cell phones distributed to the class. The students are then responsible for reacting to actual earthquake events, in real time, with the same data (or lack thereof) as earthquake professionals. The project was first implemented in spring 2002, and although it had an initial high intrigue and "coolness" factor, the interest of the students waned with time. Through student feedback, we observed that scientific data presented on their own, without an educational context, do not foster student learning. In order to maximize the impact of real-time data and the accompanying e-media, the students need to become personally involved. Therefore, in collaboration with the Incorporated Research Institutions for Seismology (IRIS), we have begun to develop an online infrastructure that will help teachers and faculty effectively use real-time earthquake information. The Real-Time Earthquake Education (RTEE) website promotes student learning by integrating inquiry-based education modules with real-time earthquake data. The first module guides the students through an exploration of real-time and historic earthquake datasets to model the most important criteria for determining the potential impact of an earthquake. Having provided the students with content knowledge in the first module, the second module presents a more authentic, open-ended educational experience by setting up an earthquake role-play situation. Through the Earthworm system, we have the ability to "set off

  11. Do weak global stresses synchronize earthquakes?

    NASA Astrophysics Data System (ADS)

    Bendick, R.; Bilham, R.

    2017-08-01

    Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time-predictable, but it still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage that the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
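
    A toy simulation makes the integrate-and-fire picture concrete. The sketch below (an illustration of the oscillator class, not the authors' analysis) advances a set of oscillators with similar but not identical periods; each firing resets that oscillator and weakly nudges the others, which over many cycles draws their phases together. All parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, epsilon = 20, 0.01                 # oscillator count, weak coupling (assumed)
    dt, steps = 1e-3, 200_000
    periods = rng.normal(1.0, 0.05, n)    # similar, but not identical, periods
    phase = rng.random(n)                 # random initial phases

    for _ in range(steps):
        phase += dt / periods             # slow, steady accumulation
        fired = phase >= 1.0
        if fired.any():
            phase[fired] = 0.0            # rapid release ("earthquake")
            # each firing weakly advances every other oscillator
            phase[~fired] = np.minimum(phase[~fired] + epsilon, 1.0)

    # clustered phases (small spread) indicate partial synchronization
    print("phase spread at end of run:", phase.std())
    ```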

  12. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time-Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work over the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress conditions of the mine, the direct cause of the earthquakes was identified, and it was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating their fundamental difficulties. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be consistent with all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are found to be in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  13. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporates clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model in which the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would
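
    A minimal simulation of the LTFM idea, under assumptions of ours rather than the authors' implementation, looks like the following: strain accumulates at a constant rate, the hazard grows in proportion to stored strain, and each event releases only a fraction of that strain, so the probability does not reset to zero and clusters can emerge.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rate = 1.0        # strain accumulation per year (arbitrary units)
    k = 5e-4          # hazard per unit strain per year (assumed)
    release = 0.6     # fraction of stored strain released per event (assumed)

    strain, events = 0.0, []
    for year in range(20_000):                 # 20 kyr synthetic history
        strain += rate
        if rng.random() < k * strain:          # probability grows with stored strain
            events.append(year)
            strain *= 1.0 - release            # partial, not total, strain drop

    intervals = np.diff(events)
    print(len(events), "events; interval mean/std:",
          intervals.mean().round(1), intervals.std().round(1))
    ```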

  14. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of earthquake hazard on the basis of seismicity data. By using methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively; highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized; similarities in the temporal variation of seismic activity and seismic gaps can be examined; and the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and the other is based on the fuzzy equivalence relation. (2) Quantitative estimation of earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards on different time scales can be estimated. This paper mainly deals with medium- to short-term precursors observed in Japan and China.

  15. After an Earthquake: Accessing Near Real-Time Data in the Classroom

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Coleman, B.; Hubenthal, M.; Owens, T. J.; Taber, J.; Welti, R.; Weertman, B. R.

    2010-12-01

    One of the best ways to engage students in scientific content is to give them opportunities to work with real scientific instruments and data and enable them to experience the discovery of scientific information. In addition, newsworthy earthquakes can capture the attention and imagination of students. IRIS and collaborating partners provide a range of options to leverage that attention through access to near-real-time earthquake location and waveform data stored in the IRIS Data Management System and elsewhere via a number of web-based tools and a new Java-based application. The broadest audience is reached by the Seismic Monitor, a simple Web-based tool for observing near-real-time seismicity. The IRIS Earthquake Browser (IEB) allows users to explore recent and cataloged earthquakes and aftershock patterns online with more flexibility, and K-12 classroom activities for understanding plate tectonics and estimating seismic hazards have been designed around its use. Waveforms are easily viewed and explored on the web using the Rapid Earthquake Viewer (REV), developed by the University of South Carolina in collaboration with IRIS E&O. Data from recent well-known earthquakes available via REV are used in exercises to determine Earth’s internal structure and to locate earthquakes. Three-component data are presented to the students, allowing a much more realistic analysis of the data than is presented in most textbooks. The Seismographs in Schools program uses real-time data in the classroom to interest and engage students about recent earthquakes. Through the IRIS website, schools can share event data and 24-hr images. Additionally, data are available in real time via the API. This API allows anyone to extract data, re-purpose it, and display it however they need to, as is being done by the British Geological Survey Seismographs in Schools program. Over 350 schools throughout the US and internationally are currently registered with the IRIS Seismographs in Schools program.

  16. Statistical analysis of earthquakes after the 1999 MW 7.7 Chi-Chi, Taiwan, earthquake based on a modified Reasenberg-Jones model

    NASA Astrophysics Data System (ADS)

    Chen, Yuh-Ing; Huang, Chi-Shen; Liu, Jann-Yenq

    2015-12-01

    We investigated the temporal-spatial hazard of the earthquakes after the 1999 September 21 MW = 7.7 Chi-Chi shock in a continental region of Taiwan. The Reasenberg-Jones (RJ) model (Reasenberg and Jones, 1989, 1994), which combines the frequency-magnitude distribution (Gutenberg and Richter, 1944) and time-decaying occurrence rate (Utsu et al., 1995), is conventionally employed for assessing the earthquake hazard after a large shock. However, we found that the b values in the frequency-magnitude distribution of the earthquakes in the study region dramatically decreased from background values after the Chi-Chi shock and then gradually increased. This observation of a time-dependent frequency-magnitude distribution motivated us to propose a modified RJ model (MRJ) to assess the earthquake hazard. To see how the models perform in assessing short-term earthquake hazard, the RJ and MRJ models were separately used to sequentially forecast earthquakes in the study region. To depict the potential rupture area for future earthquakes, we further constructed relative hazard (RH) maps based on the two models. Receiver Operating Characteristic (ROC) curves (Swets, 1988) demonstrated that the RH map based on the MRJ model was, in general, superior to the one based on the original RJ model for exploring the spatial hazard of earthquakes in a short time after the Chi-Chi shock.
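
    The ROC comparison used to score the RH maps can be sketched generically: rank grid cells by forecast rate and trace the hit rate against the false-alarm rate as the alarm area grows. The snippet below is a schematic stand-in with synthetic data, not the study's maps or catalog.

    ```python
    import numpy as np

    def roc_curve(forecast, observed):
        """forecast: per-cell rates; observed: boolean flags of target-event cells."""
        order = np.argsort(forecast)[::-1]            # most hazardous cells first
        obs = observed[order]
        hit_rate = np.cumsum(obs) / max(obs.sum(), 1)
        false_rate = np.cumsum(~obs) / max((~obs).sum(), 1)
        return false_rate, hit_rate

    # synthetic stand-in: 1000 cells whose event chance loosely tracks the forecast
    rng = np.random.default_rng(2)
    forecast = rng.random(1000)
    observed = rng.random(1000) < 0.02 * (1.0 + 5.0 * forecast)
    false_rate, hit_rate = roc_curve(forecast, observed)
    print("area under ROC:", np.trapz(hit_rate, false_rate))
    ```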

  17. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determination of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits strong finite-source directivity effects, which are critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.
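
    The practical payoff of an SGT database is that, once the six strain Green tensor traces for a station-source pair are stored, a synthetic for any trial moment tensor is a linear combination of precomputed time series rather than a new 3D simulation. The sketch below shows that contraction under an assumed storage convention; the component ordering and the factor of two on off-diagonal terms are our illustrative choices, not the authors' file format.

    ```python
    import numpy as np

    def synthetic_from_sgt(sgt_traces, m6):
        """Displacement synthetic from stored strain Green tensor (SGT) traces.

        sgt_traces: array (6, nt), SGT time series for one station/source-cell
                    pair, ordered (xx, yy, zz, xy, xz, yz) -- assumed convention.
        m6:         length-6 moment tensor in the same ordering (N m).
        """
        weights = np.asarray(m6, float) * np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
        return weights @ sgt_traces     # u(t) = sum_pq M_pq * E_pq(t)

    # toy usage: random stand-in for a database lookup, plus a simple couple
    rng = np.random.default_rng(8)
    sgt = rng.normal(size=(6, 2000))
    m6 = [0.0, 0.0, 0.0, 1.0e18, 0.0, 0.0]   # Mxy-only source, 1e18 N m
    print(synthetic_from_sgt(sgt, m6).shape)
    ```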

  18. Interevent times in a new alarm-based earthquake forecasting model

    NASA Astrophysics Data System (ADS)

    Talbi, Abdelhak; Nanjo, Kazuyoshi; Zhuang, Jiancang; Satake, Kenji; Hamdache, Mohamed

    2013-09-01

    This study introduces a new earthquake forecasting model that uses the moment ratio (MR) of the first to second order moments of earthquake interevent times as a precursory alarm index to forecast large earthquake events. The MR model is based on the idea that the MR is associated with anomalous long-term changes in background seismicity prior to large earthquake events. In a given region, the MR statistic is defined as the inverse of the index of dispersion, or Fano factor, with MR values (or scores) providing a biased estimate of the relative regional frequency of background events, here termed the background fraction. To test the forecasting performance of the proposed MR model, a composite Japan-wide earthquake catalogue for the years 679 to 2012 was compiled using the Japan Meteorological Agency catalogue for the period 1923-2012 and the Utsu historical seismicity records for 679-1922. MR values were estimated by sampling interevent times from events with magnitude M ≥ 6 using an earthquake random sampling (ERS) algorithm developed during previous research. Three retrospective tests of M ≥ 7 target earthquakes were undertaken to evaluate the long-, intermediate- and short-term performance of MR forecasting, using mainly Molchan diagrams and optimal spatial maps obtained by minimizing a forecasting error defined as the sum of the miss and alarm rates. This testing indicates that the MR forecasting technique performs well at long, intermediate and short terms. The MR maps produced during long-term testing indicate significant alarm levels before 15 of the 18 shallow earthquakes within the testing region during the past two decades, with an alarm region covering about 20 per cent (alarm rate) of the testing region. The number of shallow events missed by forecasting was reduced by about 60 per cent by using the MR method instead of the relative intensity (RI) forecasting method. At short term, our model succeeded in forecasting the
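
    The MR statistic itself is simple to compute from a catalog. The sketch below follows the definition in the abstract (inverse of the index of dispersion of interevent times); the synthetic catalogs only illustrate that a Poissonian sequence yields MR near 1 while a clustered one yields a smaller value, and the sampling windows of the actual ERS procedure are not reproduced.

    ```python
    import numpy as np

    def moment_ratio(event_times):
        """MR = mean / variance of interevent times (inverse index of dispersion)."""
        dt = np.diff(np.sort(event_times))
        return dt.mean() / dt.var()

    rng = np.random.default_rng(3)
    poisson_times = np.cumsum(rng.exponential(1.0, 500))         # random background
    clustered_times = np.cumsum(rng.gamma(0.3, 1.0 / 0.3, 500))  # bursty sequence
    print("Poissonian MR:", moment_ratio(poisson_times))    # ~ 1
    print("clustered  MR:", moment_ratio(clustered_times))  # < 1
    ```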

  19. Earthquake Loss Estimates in Near Real-Time

    NASA Astrophysics Data System (ADS)

    Wyss, Max; Wang, Rongjiang; Zschau, Jochen; Xia, Ye

    2006-10-01

    The usefulness to rescue teams of near-real-time loss estimates after major earthquakes is advancing rapidly. The difference in the quality of data available in highly developed compared with developing countries dictates that different approaches be used to maximize mitigation efforts. In developed countries, extensive information from tax and insurance records, together with accurate census figures, furnishes detailed data on the fragility of buildings and on the number of people at risk. For example, these data are exploited by the loss-estimation method used in the Hazards U.S. Multi-Hazard (HAZUS-MH) software program (http://www.fema.gov/plan/prevent/hazus/). However, in developing countries, the population at risk is estimated from inferior data sources and the fragility of the building stock often is derived empirically, using past disastrous earthquakes for calibration [Wyss, 2004].

  20. Teaching with Real-time Earthquake Data in jAmaSeis

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Coleman, B.; Taber, J.

    2011-12-01

    Earthquakes can capture the attention of students and inspire them to explore the Earth. The Incorporated Research Institutions for Seismology (IRIS) and Moravian College are collaborating to develop cross-platform software (jAmaSeis) that enables students to access real-time earthquake waveform data. Users can record their own data from several different types of educational seismometers, and they can obtain data in real time from other jAmaSeis users nationwide. Additionally, the ability to stream data from the IRIS Data Management Center (DMC) is under development. Once real-time data are obtained, users of jAmaSeis can study seismological concepts in the classroom. The user interface of the software is carefully designed to lead students through the steps to interrogate seismic data following a large earthquake. Users can process data to determine characteristics of seismograms such as time of occurrence, distance from the epicenter to the station, magnitude, and location (via triangulation). Along the way, the software provides graphical clues to assist student interpretations. In addition to the inherent pedagogical features of the software, IRIS provides pre-packaged data and instructional activities to help students learn the analysis steps. After using these activities, students can apply their skills to interpret seismic waves from their own real-time data.

  1. Time-Varying Upper-Plate Deformation during the Megathrust Subduction Earthquake Cycle

    NASA Astrophysics Data System (ADS)

    Furlong, Kevin P.; Govers, Rob; Herman, Matthew

    2015-04-01

    Over the past several decades of the WEGENER era, our abilities to observe and image the deformational behavior of the upper plate in megathrust subduction zones have dramatically improved. Several intriguing inferences can be made from these observations, including apparent lateral variations in locking along subduction zones that differ between interseismic and coseismic periods; the significant magnitude of post-earthquake deformation (e.g. following the 2014 Iquique, Chile earthquake, observed on-land GPS post-EQ displacements are comparable to the co-seismic displacements); and incompatibilities between rates of slip deficit accumulation and resulting earthquake co-seismic slip (e.g. pre-Tohoku, inferred rates of slip deficit accumulation on the megathrust significantly exceeded slip amounts for the ~1000 year recurrence). Modeling capabilities have grown from fitting simple elastic accumulation/rebound curves to sparse data to having spatially dense, continuous time series that allow us to infer details of plate boundary coupling, rheology-driven transient deformation, and partitioning among inter-earthquake and co-seismic displacements. In this research we utilize 2D numerical modeling to explore the time-varying deformational behavior of subduction zones during the earthquake cycle, with an emphasis on upper-plate and plate-interface behavior. We have used a simplified model configuration to isolate fundamental processes associated with the earthquake cycle, rather than attempting to fit details of specific megathrust zones. Using a simple subduction geometry but realistic rheologic layering, we evaluate the time-varying displacement and stress response through a multi-earthquake cycle history. We use a simple model configuration - an elastic subducting slab, an elastic upper plate (shallower than 40 km), and a visco-elastic upper plate (deeper than 40 km). This configuration leads to an upper plate that acts as a deforming elastic beam at inter-earthquake

  2. Integrating Real-time Earthquakes into Natural Hazard Courses

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Benz, H. M.; Whitlock, J. S.; Bittenbinder, A. N.; Bogaert, B. B.

    2001-12-01

    Natural hazard courses are playing an increasingly important role in college and university earth science curricula. Students' intrinsic curiosity about the subject and the potential to make the course relevant to the interests of both science and non-science students make natural hazards courses popular additions to a department's offerings. However, one vital aspect of "real-life" natural hazard management that has not translated well into the classroom is the real-time nature of both events and response. The lack of a way to entrain students into the event/response mode has made implementing such real-time activities in the classroom problematic. Although a variety of web sites provide near-real-time postings of natural hazards, students essentially learn of the event after the fact. This is particularly true for earthquakes and other events with few precursors. As a result, the "time factor" and personal responsibility associated with natural hazard response is lost to the students. We have integrated the real-time aspects of earthquake response into two natural hazard courses at Penn State (a 'general education' course for non-science majors, and an upper-level course for science majors) by implementing a modification of the USGS Earthworm system. The Earthworm Database Management System (E-DBMS) catalogs current global seismic activity. It provides earthquake professionals with real-time email/cell phone alerts of global seismic activity and access to the data for review/revision purposes. We have modified this system so that real-time response can be used to address specific scientific, policy, and social questions in our classes. As a prototype of using the E-DBMS in courses, we have established an Earthworm server at Penn State. This server receives national and global seismic network data and, in turn, transmits the tailored alerts to "on-duty" students (e-mail, pager/cell phone notification). These students are responsible for reacting to the alarm

  3. A common mode of origin of power laws in models of market and earthquake

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Pratip; Chatterjee, Arnab; Chakrabarti, Bikas K.

    2007-07-01

    We show that there is a common mode of origin for the power laws observed in two different models: (i) the Pareto law for the distribution of money among the agents with random-saving propensities in an ideal gas-like market model and (ii) the Gutenberg-Richter law for the distribution of overlaps in a fractal-overlap model for earthquakes. We find that the power laws appear as the asymptotic forms of ever-widening log-normal distributions for the agents’ money and the overlap magnitude, respectively. The identification of the generic origin of the power laws helps in better understanding and in developing generalized views of phenomena in such diverse areas as economics and geophysics.

  4. Near-real-time and scenario earthquake loss estimates for Mexico

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Zuñiga, R.

    2017-12-01

    The large earthquakes of 8 September 2017, M8.1, and 19 September 2017, M7.1, have focused attention on the dangers of Mexican seismicity. The near-real-time alerts by QLARM estimated 10 to 300 fatalities and 0 to 200 fatalities, respectively. At the time of this submission the reported death tolls are 96 and 226, respectively. These alerts were issued within 96 and 57 minutes of the occurrence times. For the M8.1 earthquake the losses could be calculated with a line model: a line of length L = 110 km extended from the initial epicenter to the NE, where the USGS had reported aftershocks. On September 19, no aftershocks were available in near-real time, so a point source had to be used for the quick calculation of likely casualties. In both cases, the casualties were at least an order of magnitude smaller than they could have been, because on 8 September the source was relatively far offshore and on 19 September the hypocenter was relatively deep. The largest historic earthquake in Mexico occurred on 28 March 1787 and likely had a rupture length of 450 km and M8.6. Based on this event, and after verifying our tool for Mexico, we estimated the order of magnitude of a disaster, given the current population, in a maximum credible earthquake along the Pacific coast. In the countryside along the coast we expect approximately 27,000 fatalities and 480,000 injured. In the special case of Mexico City, the casualties in a worst-case earthquake along the Pacific plate boundary would likely be counted in five-digit numbers. The large agglomerate of the capital with its lake-bed soil attracts most attention. Nevertheless, one should pay attention to the fact that the poor, rural segment of society, living in buildings with weak resistance to shaking, is likely to sustain a mortality rate about 20% larger than the population in cities on average soil.

  5. Evaluation of Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland

    NASA Astrophysics Data System (ADS)

    Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas

    2013-04-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system, it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase-association capabilities, the port greatly simplifies the potential installation of VS at other networks, in particular those already running SeisComp3. We present the architecture of the new SeisComp3-based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.

  6. Memory effect in M ≥ 7 earthquakes of Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2014-07-01

    The M ≥ 7 earthquakes that occurred in the Taiwan region during 1906-2006 are taken to study the possibility of a memory effect existing in the sequence of those large earthquakes. Those events are all mainshocks. The fluctuation analysis technique is applied to analyze the two sequences of earthquake magnitude and inter-event time represented in the natural time domain. For both magnitude and inter-event time, the calculations are made for three data sets, i.e., the original-order data, the reverse-order data, and the mean values. Calculated results show that the exponents of the scaling law of fluctuation versus window length are less than 0.5 for the sequences of both magnitude and inter-event time data. In addition, the phase portraits of two sequent magnitudes and two sequent inter-event times are also applied to explore whether large (or small) earthquakes are followed by large (or small) events. The results lead to a negative answer. Together with all the information in this study, we conclude that the earthquake sequence under study is short-term correlated and thus a short-term memory effect would be operative.
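
    One common form of the fluctuation analysis named in the abstract measures the RMS fluctuation of windowed sums as a function of window length and reads the scaling exponent off a log-log fit; exponents below 0.5 indicate anti-persistent, short-term-correlated sequences. The sketch below applies that generic recipe to a synthetic magnitude series, not the Taiwan catalog, and may differ in detail from the authors' variant.

    ```python
    import numpy as np

    def fluctuation_exponent(series, windows):
        """Slope of log F(n) vs log n, with F(n) the RMS of windowed sums."""
        x = np.asarray(series, float) - np.mean(series)
        f = []
        for n in windows:
            m = len(x) // n
            sums = x[:m * n].reshape(m, n).sum(axis=1)
            f.append(np.sqrt(np.mean(sums ** 2)))
        return np.polyfit(np.log(windows), np.log(f), 1)[0]

    rng = np.random.default_rng(4)
    mags = rng.uniform(7.0, 8.0, 512)   # synthetic stand-in for a magnitude sequence
    # ~0.5 for an uncorrelated series; < 0.5 signals anti-persistence
    print(fluctuation_exponent(mags, windows=[2, 4, 8, 16, 32]))
    ```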

  7. What to Expect from the Virtual Seismologist: Delay Times and Uncertainties of Initial Earthquake Alerts in California

    NASA Astrophysics Data System (ADS)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Racine, R.; Meier, M.; Cauzzi, C.

    2013-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland, western Greece and Istanbul. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, Romania, and Iceland are planned or underway. The possible use cases for an EEW system will be determined by the speed and reliability of earthquake source parameter estimates. A thorough understanding of both is therefore essential to evaluate the usefulness of VS. For California, we present state-wide theoretical alert times for hypothetical earthquakes by analyzing time delays introduced by the different components in the VS EEW system. Taking advantage of the fully probabilistic formulation of the VS algorithm we further present an improved way to describe the uncertainties of every magnitude estimate by evaluating the width and shape of the probability density function that describes the relationship between waveform envelope amplitudes and magnitude. We evaluate these new uncertainty values for past seismicity in California through off-line playbacks and compare them to the previously defined static definitions of uncertainty based on real-time detections. Our results indicate where VS alerts are most useful in California and also suggest where most effective improvements to the VS EEW system

  8. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is imperative for correctly determining the earthquake's location and the other parameters used to build seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely detected due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm employs the MLoG mask as a denoising filter to smooth the background noise in the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes, down to an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
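
    The overall pipeline (LoG-type denoising followed by a dual-threshold comparator) can be sketched with standard tools. The snippet below uses SciPy's ordinary gaussian_laplace rather than the authors' modified mask, and the thresholds, robust noise scale and synthetic onset are all illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def pick_onset(trace, sigma=3.0, high=10.0, low=3.0):
        """First sample exceeding the high threshold, walked back to the low one."""
        cf = np.abs(gaussian_laplace(trace.astype(float), sigma))  # characteristic function
        scale = np.median(np.abs(cf - np.median(cf))) + 1e-12      # robust noise scale
        z = cf / scale
        above = np.flatnonzero(z > high)
        if above.size == 0:
            return None
        i = above[0]
        while i > 0 and z[i - 1] > low:   # walk back toward the true onset
            i -= 1
        return i

    rng = np.random.default_rng(5)
    trace = rng.normal(0.0, 1.0, 3000)
    trace[1500:] += rng.normal(0.0, 8.0, 1500)   # emulated arrival at sample 1500
    print("picked sample:", pick_onset(trace))
    ```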

  9. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    NASA Astrophysics Data System (ADS)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in a region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks but only measure accelerations or velocities, putting them at a distinct disadvantage for ascertaining the full extent of slip during a large earthquake in real time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET, consisting of about 1200 stations, during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island; the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
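
    The last step, turning an initial peak ground displacement (PGD) into a magnitude, can be sketched by inverting a scaling law of the form later published for GPS PGD (e.g., by Crowell and co-authors); the coefficients below are placeholders for illustration, not the values used in this work.

    ```python
    import numpy as np

    # placeholder coefficients for log10(PGD) = A + B*Mw + C*Mw*log10(R)
    A, B, C = -5.0, 1.2, -0.2

    def mw_from_pgd(pgd_cm, hypo_dist_km):
        """Invert the assumed PGD scaling law for moment magnitude."""
        return (np.log10(pgd_cm) - A) / (B + C * np.log10(hypo_dist_km))

    # e.g. 20 cm of peak displacement observed 100 km from the hypocenter
    print(mw_from_pgd(20.0, 100.0))
    ```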

  10. Victims' time discounting 2.5 years after the Wenchuan earthquake: an ERP study.

    PubMed

    Li, Jin-Zhen; Gui, Dan-Yang; Feng, Chun-Liang; Wang, Wen-Zhong; Du, Bo-Qi; Gan, Tian; Luo, Yue-Jia

    2012-01-01

    Time discounting refers to the fact that the subjective value of a reward decreases as the delay until its occurrence increases. The present study investigated how time discounting has been affected in survivors of the magnitude-8.0 Wenchuan earthquake that occurred in China in 2008. Nineteen earthquake survivors and 22 controls, all school teachers, participated in the study. Event-related brain potentials (ERPs) for time discounting tasks involving gains and losses were acquired in both the victims and controls. The behavioral data replicated our previous findings that delayed gains were discounted more steeply after a disaster. ERP results revealed that the P200 and P300 amplitudes were increased in earthquake survivors. There was a significant group (earthquake vs. non-earthquake) × task (gain vs. loss) interaction for the N300 amplitude, with a marginally significantly reduced N300 for gain tasks in the experimental group, which may suggest a deficiency in inhibitory control for gains among victims. The results suggest that post-disaster decisions might involve more emotional (System 1) and less rational thinking (System 2) in terms of a dual-process model of decision making. The implications for post-disaster intervention and management are also discussed.

  11. Luminous phenomena and electromagnetic VHF wave emission originated from earthquake-related radon exhalation

    NASA Astrophysics Data System (ADS)

    Seki, A.; Tobo, I.; Omori, Y.; Muto, J.; Nagahama, H.

    2013-12-01

    Anomalous luminous phenomena and electromagnetic wave emission before or during earthquakes have been reported (e.g., for the 1965 Matsushiro earthquake swarm). However, their mechanism remains unresolved, despite the many models proposed for these phenomena. Here, we propose a new model for luminous phenomena and electromagnetic wave emission during earthquakes that focuses on atmospheric radon (Rn-222) and its daughter nuclides (Po-218 and Po-214). Rn-222, Po-218 and Po-214 are alpha emitters, and these alpha particles ionize atmospheric molecules. A light emission phenomenon, called 'air luminescence', is caused by de-excitation of atmospheric nitrogen molecules ionized by electron-impact ionization from alpha particles. The de-excitation proceeds from the second positive system of neutral nitrogen molecules and the first negative system of the nitrogen molecule ion, and the wavelengths of the light from these transitions include the visible range. Based on this mechanism, we propose the following model for luminous phenomena before or during earthquakes: (1) the concentration of atmospheric radon and its daughter nuclides increases anomalously before or during earthquakes; (2) nitrogen molecules and their ions are excited by alpha particles emitted from Rn-222, Po-218 and Po-214, and air luminescence is generated by their de-excitation. Similarly, electromagnetic VHF wave emission can be explained by the ionizing effect of radon and its daughter nuclides. Boyarchuk et al. (2005) proposed a model in which electromagnetic VHF waves are emitted when the excited state of neutral clusters changes. Radon gas ionizes the atmosphere and forms positively and negatively charged heavy particles. The process of ion hydration in ordinary air can be determined by the formation of complex, chemically active structures of various types of ion radicals. As a result of the association of such hydrated radical ions, a neutral cluster, which is a dipole quasi-molecule, is formed. A neutral cluster

  12. Automated Determination of Magnitude and Source Length of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is of importance for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach that originated from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We instead estimate source duration with a back-projection technique [Wang et al., 2016], using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events, based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus
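
    The Hara [2007]-style estimate at the core of this approach combines the maximum P-wave displacement, the source duration (here supplied by back-projection), and distance in a single regression. The sketch below shows the functional form only; the coefficients and units are placeholders, not Hara's published values.

    ```python
    import numpy as np

    a, b, c, d = 0.8, 0.8, 1.7, 5.2   # placeholder regression coefficients

    def magnitude(p_disp_m, duration_s, distance_deg):
        """M = a*log10(Pd) + b*log10(t_dur) + c*log10(dist) + d (Hara-type form)."""
        return (a * np.log10(p_disp_m) + b * np.log10(duration_s)
                + c * np.log10(distance_deg) + d)

    # e.g. 2 mm peak P displacement, 60 s back-projected duration, 60 deg distance
    print(magnitude(0.002, 60.0, 60.0))
    ```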

  13. Automated Determination of Magnitude and Source Extent of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun

    2017-04-01

    Rapid determination of earthquake magnitude is of importance for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach that originated from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We instead estimate source duration with a back-projection technique [Wang et al., 2016], using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from farther stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events, based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus

  14. Export Time of Earthquake-Derived Landslides in Active Mountain Ranges

    NASA Astrophysics Data System (ADS)

    Croissant, T.; Lague, D.; Steer, P.; Davy, P.

    2016-12-01

    In active mountain ranges, large earthquakes (Mw > 5-6) trigger numerous landslides that impact river dynamics. These landslides deliver local, sudden sediment deposits that are eroded and transported along the river network, causing downstream changes in river geometry, transport capacity and erosion efficiency. The progressive removal of landslide material has implications for downstream hazard management and for landscape dynamics at the timescale of the seismic cycle. Although the export time of suspended sediments from landslides triggered by large-magnitude earthquakes has been extensively studied, the processes and time scales associated with bedload transport remain poorly studied. Here, we study the sediment export of large landslides with the 2D morphodynamic model Eros. This model combines (i) a hydrodynamic model, (ii) a sediment transport and deposition model and (iii) a lateral erosion model. Eros is particularly well suited for this issue as it accounts for the complex feedbacks between sediment transport and fluvial geometry in rivers subjected to external forcings such as abrupt increases in sediment supply. Using a simplified synthetic topography, we systematically study the influence of pulse volume (Vs) and channel transport capacity (QT) on the export time of landslides. The range of simulated river behavior includes landslide vertical incision, its subsequent removal by lateral erosion, and the river morphology modifications induced by downstream sediment propagation. The morphodynamic adaptation of the river increases its transport capacity along the channel and tends to accelerate landslide evacuation. Our results highlight two regimes: (i) the export time is linearly related to Vs/QT when the sediment pulse introduced in the river does not significantly affect the river hydrodynamics (low Vs/QT), and (ii) the export time is a non-linear function of Vs/QT when the pulse undergoes significant morphodynamic modifications during its

  15. Possible relationship between Seismic Electric Signals (SES) lead time and earthquake stress drop

    PubMed Central

    DOLOGLOU, Elizabeth

    2008-01-01

    Stress drop values are available for fourteen large earthquakes with MW ≥ 5.4 that occurred in Greece during the period 1983–2007. All these earthquakes were preceded by Seismic Electric Signals (SES). An attempt has been made to investigate a possible correlation between their stress drop values and the corresponding SES lead times. For the stress drop, we considered the Brune stress drop, ΔσB, estimated from far-field body wave displacement source spectra, and ΔσSB, derived from the strong motion acceleration response spectra. The results show that a relation may exist between the Brune stress drop, ΔσB, and the lead time, implying that earthquakes with higher stress drop values are preceded by SES with shorter lead times. PMID:18941291
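
    For reference, the Brune stress drop used here follows from the seismic moment and the corner frequency of the displacement spectrum. The sketch below works through that standard calculation with assumed input values; it is a generic illustration, not the paper's processing.

    ```python
    import numpy as np

    def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
        """Stress drop (Pa): 7*M0/(16*r^3), with r = 2.34*beta/(2*pi*fc)."""
        r = 2.34 * beta_ms / (2.0 * np.pi * fc_hz)   # source radius in meters
        return 7.0 * m0_nm / (16.0 * r ** 3)

    m0 = 10.0 ** (1.5 * 5.4 + 9.1)   # moment (N m) of an Mw 5.4 event
    print(brune_stress_drop(m0, fc_hz=0.5) / 1e6, "MPa")   # a few MPa
    ```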

  16. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computed values of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  17. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  18. JPL's GNSS Real-Time Earthquake and Tsunami (GREAT) Alert System

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Yoaz; Miller, Mark; Vallisneri, Michele; Khachikyan, Robert; Meyer, Robert

    2017-04-01

    We describe recent developments to the GREAT Alert natural hazard monitoring service from JPL's Global Differential GPS (GDGPS) System. GREAT Alert provides real-time, 1 Hz positioning solutions for hundreds of GNSS tracking sites, from both global and regional networks, aiming to monitor ground motion in the immediate aftermath of earthquakes. We take advantage of the centralized data processing, which is collocated with the GNSS orbit determination operations of the GDGPS System, to combine orbit determination with large-scale point positioning in a grand estimation scheme, and as a result realize a significant improvement in positioning accuracy compared to conventional stand-alone point positioning techniques. For example, the measured median real-time horizontal positioning accuracy over all sites is 2 cm (1DRMS), and the median real-time vertical accuracy is 4 cm (RMS). The GREAT Alert positioning service is integrated with automated global earthquake notices from the United States Geological Survey (USGS) to support near-real-time calculation of co-seismic displacements, with attendant formal errors based on both short-term and long-term error analysis for each individual site. We show that millimeter-level resolution of co-seismic displacement can be achieved by this system. The co-seismic displacements, in turn, are fed into JPL geodynamics and ocean models that estimate the earthquake magnitude and predict the potential tsunami scale.
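
    The co-seismic displacement estimate can be sketched in its simplest form: difference the mean positions in windows before and after the event, skipping the shaking itself, with a formal error from the window scatter. This toy version (window lengths and the noise model are assumptions) uses only short-term scatter, whereas the operational system also folds in long-term error analysis.

    ```python
    import numpy as np

    def coseismic_offset(t, pos, t_eq, pre=300.0, post=300.0, gap=60.0):
        """Offset and formal error from window means before and after the event."""
        before = pos[(t > t_eq - pre) & (t < t_eq)]
        after = pos[(t > t_eq + gap) & (t < t_eq + gap + post)]
        offset = after.mean() - before.mean()
        err = np.sqrt(before.var() / before.size + after.var() / after.size)
        return offset, err

    rng = np.random.default_rng(6)
    t = np.arange(0.0, 1200.0)                 # 1 Hz epochs (seconds)
    pos = rng.normal(0.0, 0.02, t.size)        # ~2 cm horizontal scatter (assumed)
    pos[t > 600.0] += 0.15                     # 15 cm co-seismic step at t = 600 s
    print(coseismic_offset(t, pos, t_eq=600.0))
    ```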

  19. Earthquakes in the Central United States, 1699-2010

    USGS Publications Warehouse

    Dart, Richard L.; Volpi, Christina M.

    2010-01-01

    This publication is an update of an earlier report, U.S. Geological Survey (USGS) Geologic Investigation I-2812 by Wheeler and others (2003), titled "Earthquakes in the Central United States, 1699-2002." Like the original poster, the center of the updated poster is a map showing the pattern of earthquake locations in the most seismically active part of the central United States. Arrayed around the map are short explanatory texts and graphics, which describe the distribution of historical earthquakes and the effects of the most notable of them. The updated poster contains additional, post-2002, earthquake data. These are 38 earthquakes covering the time interval from January 2003 to June 2010, including the Mount Carmel, Illinois, earthquake of 2008. The USGS Preliminary Determination of Epicenters (PDE) was the source of these additional data. Like the I-2812 poster, this poster was prepared for a nontechnical audience and designed to inform the general public as to the widespread occurrence of felt and damaging earthquakes in the Central United States. Accordingly, the poster should not be used to assess earthquake hazard in small areas or at individual locations.

  20. Real-time forecasts of tomorrow's earthquakes in California: a new mapping tool

    USGS Publications Warehouse

    Gerstenberger, Matt; Wiemer, Stefan; Jones, Lucy

    2004-01-01

    We have derived a multi-model approach to calculate time-dependent earthquake hazard resulting from earthquake clustering. This open-file report explains the theoretical background behind the approach, the specific details used in applying the method to California, and the statistical testing used to validate the technique. We have implemented our algorithm as a real-time tool that has been automatically generating short-term hazard maps for California since May of 2002, at http://step.wr.usgs.gov

  1. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
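
    As a rough illustration of the modeling step described above, the temporal part of the ETAS conditional intensity can be written as a background rate plus a sum of Omori-type triggering terms over past events. The sketch below uses hypothetical parameter names and omits the spatial kernel, parameter fitting, and the simulation machinery used for the forecasts.

    ```python
    import numpy as np

    def etas_rate(t, event_times, event_mags, mu, K, alpha, c, p, m0):
        """Temporal ETAS conditional intensity (illustrative sketch):

        lambda(t) = mu + sum over past events i of
                    K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p

        The background rate mu is the term tracked over time in studies
        like this one, since it reflects externally driven seismicity
        (e.g., fluid injection) rather than earthquake-to-earthquake
        triggering.
        """
        past = event_times < t
        dt = t - event_times[past]
        triggered = K * np.exp(alpha * (event_mags[past] - m0)) / (dt + c) ** p
        return mu + triggered.sum()
    ```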

  2. A Real-Time Earthquake Precursor Detection Technique Using TEC from a GPS Network

    NASA Astrophysics Data System (ADS)

    Alp Akyol, Ali; Arikan, Feza; Arikan, Orhan

    2016-07-01

    Anomalies have been observed in the ionospheric electron density distribution prior to strong earthquakes. However, most of the reported results were obtained by retrospective analysis after the earthquakes, so their implementation in practice is highly problematic. Recently, a novel earthquake precursor detection technique based on spatio-temporal analysis of Total Electron Content (TEC) data obtained from the Turkish National Permanent GPS Network (TNPGN) was developed by the IONOLAB group (www.ionolab.org). In the present study, the detection technique is implemented in a causal setup over the available data set in a test phase, which enables real-time implementation. The performance of the earthquake prediction technique is evaluated using 10-fold cross validation over the data obtained in 2011. Among the 23 earthquakes with magnitudes higher than 5, the technique detects precursors of 14 earthquakes while producing 8 false alarms. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.

  3. Real-time earthquake shake, damage, and loss mapping for Istanbul metropolitan area

    NASA Astrophysics Data System (ADS)

    Zülfikar, A. Can; Fercan, N. Özge Zülfikar; Tunç, Süleyman; Erdik, Mustafa

    2017-01-01

    The past devastating earthquakes in densely populated urban centers, such as the 1994 Northridge; 1995 Kobe; 1999 series of Kocaeli, Düzce, and Athens; and 2011 Van-Erciş events, showed that substantial social and economic losses can be expected. Previous studies indicate that inadequate emergency response can increase the number of casualties by up to a factor of 10, which suggests the need for research on rapid estimation of earthquake shaking, damage, and losses. The reduction in casualties in urban areas immediately following an earthquake can be improved if the location and severity of damage can be rapidly assessed using information from rapid response systems. In this context, a research project (TUBITAK-109M734) titled "Real-time Information of Earthquake Shaking, Damage, and Losses for Target Cities of Thessaloniki and Istanbul" was conducted during 2011-2014 to establish the rapid estimation of ground motion shaking and the related earthquake damage and casualties for the target cities. In the present study, the application to the Istanbul metropolitan area is presented. To fulfill this objective, the earthquake hazard and risk assessment methodology known as the Earthquake Loss Estimation Routine, developed for the Euro-Mediterranean region within the Network of Research Infrastructures for European Seismology EC-FP6 project, was used. The current application to the Istanbul metropolitan area provides real-time ground motion information obtained by strong motion stations distributed throughout the densely populated areas of the city. From this ground motion information, building damage is estimated using a grid-based building inventory, and the related loss is then estimated. Through this application, the rapidly estimated information enables public and private emergency management authorities to take action and to allocate and prioritize resources to minimize casualties in urban areas during the immediate post-earthquake period. Moreover, it

  4. Earthquake Declustering via a Nearest-Neighbor Approach in Space-Time-Magnitude Domain

    NASA Astrophysics Data System (ADS)

    Zaliapin, I. V.; Ben-Zion, Y.

    2016-12-01

    We propose a new method for earthquake declustering based on nearest-neighbor analysis of earthquakes in space-time-magnitude domain. The nearest-neighbor approach was recently applied to a variety of seismological problems that validate the general utility of the technique and reveal the existence of several different robust types of earthquake clusters. Notably, it was demonstrated that clustering associated with the largest earthquakes is statistically different from that of small-to-medium events. In particular, the characteristic bimodality of the nearest-neighbor distances that helps separating clustered and background events is often violated after the largest earthquakes in their vicinity, which is dominated by triggered events. This prevents using a simple threshold between the two modes of the nearest-neighbor distance distribution for declustering. The current study resolves this problem hence extending the nearest-neighbor approach to the problem of earthquake declustering. The proposed technique is applied to seismicity of different areas in California (San Jacinto, Coso, Salton Sea, Parkfield, Ventura, Mojave, etc.), as well as to the global seismicity, to demonstrate its stability and efficiency in treating various clustering types. The results are compared with those of alternative declustering methods.
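
    For readers unfamiliar with the nearest-neighbor metric, the sketch below computes a Zaliapin-style space-time-magnitude proximity for each event. The constants (b-value, fractal dimension) and the small-distance floor are illustrative assumptions, not values from the study, and the study's key contribution (handling the breakdown of bimodality near the largest events) is not reproduced here.

    ```python
    import numpy as np

    def nn_proximity(t, x, y, m, b=1.0, df=1.6):
        """Nearest-neighbor proximity in the space-time-magnitude domain
        (Zaliapin-type metric; b and df are assumed illustrative values).

        eta_ij = dt_ij * r_ij**df * 10**(-b * m_i) for parent i, child j.
        Returns, for each event, the index of its nearest neighbor
        (parent) and the proximity value; small eta marks clustered events.
        """
        n = len(t)
        parent = np.full(n, -1)
        eta = np.full(n, np.inf)
        for j in range(n):
            for i in range(n):
                dt = t[j] - t[i]
                if dt <= 0:
                    continue  # a parent must precede its child
                r = np.hypot(x[j] - x[i], y[j] - y[i])
                e = dt * max(r, 1e-3) ** df * 10.0 ** (-b * m[i])
                if e < eta[j]:
                    eta[j], parent[j] = e, i
        return parent, eta
    ```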

  5. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and the stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to

  6. Earthquake early warning for Romania - most recent improvements

    NASA Astrophysics Data System (ADS)

    Marmureanu, Alexandru; Elia, Luca; Martino, Claudio; Colombelli, Simona; Zollo, Aldo; Cioflan, Carmen; Toader, Victorin; Marmureanu, Gheorghe; Marius Craiu, George; Ionescu, Constantin

    2014-05-01

    The EWS for Vrancea earthquakes uses the time interval (28-32 s) between the moment when an earthquake is detected by the local seismic network installed in the epicentral area (Vrancea) and the arrival time of the seismic waves in the protected area (Bucharest) to send earthquake warnings to users. In recent years, the National Institute for Earth Physics (NIEP) upgraded its seismic network in order to better cover the seismic zones of Romania. Currently NIEP operates a real-time seismic network designed to monitor the seismic activity on the Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Ranger, GS21, Mark L22) and acceleration sensors (Episensor). Recent improvements of the seismic network and real-time communication technologies allow the implementation of a nation-wide EEWS for Vrancea and other seismic sources in Romania. We present a regional approach to earthquake early warning for Romanian earthquakes. The regional approach is based on the PRESTo (Probabilistic and Evolutionary early warning SysTem) software platform: PRESTo processes three-channel acceleration data streams in real time; once the P-wave arrivals have been detected, it provides earthquake location and magnitude estimates, and peak ground motion predictions at target sites. PRESTo has been running in real time at the National Institute for Earth Physics, Bucharest, for several months in parallel with a secondary EEWS. The alert notification is issued only when both systems validate each other. Here we present the results obtained using offline earthquakes originating from the Vrancea area together with several real-time
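
    The warning window exploited by such a front-detection EEWS can be sketched as a simple lead-time calculation: the S-wave travel time from the source region to the protected city, minus the detection and processing delay. The numbers below are illustrative assumptions only, of the right order for the Vrancea-Bucharest geometry but not taken from the system described above.

    ```python
    def warning_time(distance_km, detect_delay_s=4.0, vs_km_s=3.5):
        """Available warning time at a target site (illustrative sketch):
        S-wave travel time from the source region minus the detection
        and processing delay. All parameter values are assumptions."""
        return distance_km / vs_km_s - detect_delay_s

    # Example: a site ~130 km from the epicentral area.
    # warning_time(130.0) -> about 33 seconds of lead time.
    ```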

  7. Near-real-time Earthquake Notification and Response in the Classroom: Exploiting the Teachable Moment

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Whitlock, J. S.; Benz, H. M.

    2002-12-01

    Earthquakes occur globally, on a regular but (as yet) non-predictable basis, and their effects are both dramatic and often devastating. Additionally, they serve as a primary tool to image the earth and define the active processes that drive tectonics. As a result, earthquakes can be an extremely effective tool for helping students learn about active earth processes, natural hazards, and the myriad of issues that arise with non-predictable but potentially devastating natural events. We have developed and implemented a real-time earthquake alert system (EAS) built on the USGS Earthworm system to bring earthquakes into the classroom. Through our EAS, students in our General Education class on natural hazards (Earth101 - Natural Disasters: Hollywood vs. Reality) participate in earthquake response activities in ways similar to earthquake hazard professionals - they become part of the response to the event. Our implementation of the Earthworm system allows our students to be paged via cell-phone text messaging (yes, we provide cell phones to the 'duty seismologists'), and they respond to those pages as appropriate for their role. A parallel web server is maintained that provides the earthquake details (location maps, waveforms, etc.), and students produce time-critical output such as news releases, analyses of earthquake trends in the region, and reports detailing implications of the events. Since this is a course targeted at non-science majors, we encourage them to bring their own expertise into the analyses. For example, business or economics majors may investigate the economic impacts of an earthquake, secondary education majors may work on teaching modules based on the information they gather, etc. Since the students know that they are responding to real events, they develop ownership of the information they gather and they recognize the value of real-time response. Our educational goals in developing this system include: (1) helping students develop a sense of the

  8. Predictability of Landslide Timing From Quasi-Periodic Precursory Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.

    2018-02-01

    Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq in Greenland in June 2017. The evolution in earthquake rate is best explained by an inverse power law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in future.
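
    A minimal sketch of the rate model at the heart of this analysis: an inverse power-law acceleration toward the failure time, together with the point-process log-likelihood that a fit of the kind described above would evaluate. Parameter names are illustrative, and the gamma point-process structure and Bayesian machinery of the actual study are not reproduced.

    ```python
    import numpy as np

    def failure_rate(t, t_f, k, p):
        """Inverse power-law event rate before failure at time t_f:
        lambda(t) = k * (t_f - t)**(-p).

        p = 1.0 recovers the commonly assumed inverse-Omori form; a
        posterior mean near 0.71, as reported above, implies a more
        rapid final acceleration and shorter usable warning times.
        """
        t = np.asarray(t, dtype=float)
        return k * np.clip(t_f - t, 1e-9, None) ** (-p)

    def log_likelihood(event_times, t0, t1, t_f, k, p):
        """Inhomogeneous-Poisson log-likelihood on [t0, t1] (p != 1):
        sum of log rates at event times minus the integrated rate."""
        lam = failure_rate(event_times, t_f, k, p)
        integral = k / (1 - p) * ((t_f - t0) ** (1 - p) - (t_f - t1) ** (1 - p))
        return np.sum(np.log(lam)) - integral
    ```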

  9. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2006

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl

    2008-01-01

    Between January 1 and December 31, 2006, AVO located 8,666 earthquakes of which 7,783 occurred on or near the 33 volcanoes monitored within Alaska. Monitoring highlights in 2006 include: an eruption of Augustine Volcano, a volcanic-tectonic earthquake swarm at Mount Martin, elevated seismicity and volcanic unrest at Fourpeaked Mountain, and elevated seismicity and low-level tremor at Mount Veniaminof and Korovin Volcano. A new seismic subnetwork was installed on Fourpeaked Mountain. This catalog includes: (1) descriptions and locations of seismic instrumentation deployed in the field during 2006, (2) a description of earthquake detection, recording, analysis, and data archival systems, (3) a description of seismic velocity models used for earthquake locations, (4) a summary of earthquakes located in 2006, and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, location quality statistics, daily station usage statistics, and all files used to determine the earthquake locations in 2006.

  10. An Updated Catalog of Taiwan Earthquakes (1900-2011) with Homogenized Mw Magnitudes

    NASA Astrophysics Data System (ADS)

    Chen, K.; Tsai, Y.; Chang, W.

    2012-12-01

    A complete and consistent catalog of earthquakes provides good data for studying the distribution of earthquakes in a region as a function of space, time, and magnitude. It is therefore a basic tool for studying and mitigating seismic hazard, since the seismicity with magnitude equal to or greater than a given Mw can be extracted from the data set. For completeness and consistency, we use a catalog of earthquakes from 1900 to 2006 with homogenized magnitudes (Mw) (Chen and Tsai, 2008) as a base, and we also refer to Hsu (1989) to incorporate available supplementary data (188 events in total) for the period 1900-1935. The supplementary data lower the cutoff threshold magnitude from Mw 5.5 to 5.0; that is, the additional data enrich the catalog content for magnitudes > 5.0. For this study, the catalog has been updated to include earthquakes up to 2011 and is complete for Mw > 5.0, which increases its reliability for studying seismic hazard. We find that the original catalog of Taiwan earthquakes is saturated for magnitudes > 6.5 when compared with the Harvard Mw or USGS M; although we converted the original catalog to seismic moment magnitude Mw, this does not overcome that drawback. For Mw < 6.5, however, our unified Mw values are mostly greater than the Harvard Mw or USGS M, which indicates that the unified Mw fills the gap above magnitude 6.0 (and in places above magnitude 5.5) during the period 1973-1991 in the original catalog. It is therefore better to report earthquake magnitudes in Mw.
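
    The homogenization step relies on empirical conversion relationships between magnitude scales. A minimal sketch, assuming a simple linear form Mw = a*M + b fitted to events that carry both magnitude types; the actual regressions in the cited studies may take a different form.

    ```python
    import numpy as np

    def fit_conversion(m_local, m_w):
        """Least-squares fit of a linear conversion Mw = a * M_local + b,
        the kind of empirical relationship used to homogenize a catalog
        onto the moment-magnitude scale. Coefficients are data-dependent;
        nothing here reproduces the cited studies' actual regressions."""
        A = np.column_stack([m_local, np.ones_like(m_local)])
        (a, b), *_ = np.linalg.lstsq(A, m_w, rcond=None)
        return a, b

    # Usage: estimate (a, b) from the subset with both magnitude types,
    # then convert the remaining events: mw_est = a * m_local_all + b
    ```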

  11. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest event of M6.6 in the Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform provides an integrated system complete from data acquisition to catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit particular needs of the AEIC. The real-time earthquake locations and magnitudes are determined within 2-5 minutes of the event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of the occurrence, with information releases issued for significant events. This information is disseminated immediately via the AEIC website, the ANSS website via QDDS submissions, through e-mail, cell phone and pager notifications, via fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0. This information is posted on the public website. ShakeMaps are calculated in real time, with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted at critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  12. Statistical Evaluation of Turkey Earthquake Cataloque: A Case study (1900-2015)

    NASA Astrophysics Data System (ADS)

    Kalafat, Dogan

    2016-04-01

    In this study, the Turkey earthquake catalog for the period 1900-2015, prepared by Boǧaziçi University Kandilli Observatory and Earthquake Research Institute, is analyzed. The catalog consists of earthquakes that occurred in Turkey and the surrounding area (32°-45°N / 23°-48°E). The catalog data have been checked in two respects: time-dependent variation and consistency across different regions. In particular, the data set prior to 1976 was found deficient. In total, 7 regions were evaluated according to their tectonic characteristics and data sets. For every region the original data were used without any change, and b-values, a-values, and magnitudes of completeness (Mc) were calculated. For the calculation of b-values, the focal depth range was selected as h = 0-50 km. One of the important complications in seismic catalogs is discriminating real (natural) seismic events from artificial (unnatural) seismic events. Therefore, within the original catalog, artificial events, especially quarry and mine blasts, were separated by declustering and dequarrying methods. The declustering process eliminates induced earthquakes, especially those occurring in geothermal regions, large water basins and mining regions, from the original catalog. Using the current moment tensor catalog (Kalafat, 2015), a faulting-type map of the region was prepared. As a result, for each region it is examined whether there is a relation between fault type and b-values. In this study, the hypothesized relation between the extensional, compressional and strike-slip faulting regimes in Turkey and the b-values is tested one more time. This study was supported by the Science Fellowship and Grant programs (2014-2219) of TUBITAK (The Scientific and Technological Research Council of Turkey). I also offer my eternal gratitude to Prof. Dr. Nafi Toksöz for his encouragement and constructive contributions to this study.
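
    The b-value computation mentioned above is commonly done with the Aki/Utsu maximum-likelihood estimator. A minimal sketch, assuming binned magnitudes of width dm; this is the standard textbook formula, not necessarily the exact procedure used in the study.

    ```python
    import numpy as np

    def gr_parameters_mle(mags, mc, dm=0.1):
        """Aki/Utsu maximum-likelihood b-value for events with M >= mc,
        with the standard binning correction dm (magnitude bin width):

            b = log10(e) / (mean(M) - (mc - dm/2))

        The a-value then follows from the Gutenberg-Richter relation
        log10 N(>= mc) = a - b * mc.
        """
        m = np.asarray(mags)
        m = m[m >= mc]
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
        a = np.log10(m.size) + b * mc
        return a, b
    ```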

  13. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and has a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
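
    A minimal sketch of the regularization strategy described above: Tikhonov-damped least squares with the damping chosen by the discrepancy principle over a grid. It simplifies by assuming a zeroth-order (norm) penalty and a known noise level, and does not reproduce the multiple-time-window parametrization or the iterative covariance estimation.

    ```python
    import numpy as np

    def tikhonov_grid(G, d, noise_level, lambdas):
        """Solve m = argmin ||G m - d||^2 + lam^2 ||m||^2, choosing lam by
        the discrepancy principle: the largest lam (smoothest model) whose
        residual norm does not exceed the estimated noise level.
        Illustrative sketch only."""
        n = G.shape[1]
        GtG, Gtd = G.T @ G, G.T @ d
        for lam in sorted(lambdas, reverse=True):  # prefer smoother models
            m = np.linalg.solve(GtG + lam ** 2 * np.eye(n), Gtd)
            if np.linalg.norm(G @ m - d) <= noise_level:
                return lam, m
        return None  # no lam on the grid satisfies the discrepancy criterion
    ```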

  14. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    NASA Astrophysics Data System (ADS)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    Automatic procedures for locating earthquakes in quasi-real time must provide a good estimate of the earthquake location within a few seconds after the event is first detected, and are strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV, and the quasi-real-time earthquake locations are performed using an automatic picking algorithm based on short-term-average to long-term-average ratios (STA/LTA) calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real-time earthquake locations. Because the automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of the 3D velocity models of Mt. Etna in recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and that were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. Subsequently, by using a probabilistic
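
    The EDT likelihood evaluated at a trial hypocenter can be sketched as follows. Because only arrival-time differences between station pairs enter, the unknown origin time cancels, which is the source of the robustness to outlier picks noted above. The Gaussian kernel width and the exhaustive pair loop are illustrative simplifications, not the cited implementations.

    ```python
    import numpy as np

    def edt_likelihood(obs_times, pred_tt, sigma=0.5):
        """Equal Differential Time (EDT) pseudo-likelihood for one trial
        hypocenter. obs_times: observed P arrivals; pred_tt: predicted
        travel times from the trial location (e.g., through a 3D model).
        A search over trial locations keeps the maximum. Sketch only."""
        n = len(obs_times)
        like = 0.0
        for a in range(n):
            for b in range(a + 1, n):
                # Observed minus predicted differential time; origin
                # time cancels in the difference.
                resid = (obs_times[a] - obs_times[b]) - (pred_tt[a] - pred_tt[b])
                like += np.exp(-resid ** 2 / (2 * sigma ** 2))
        return like
    ```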

  15. An Advanced Real-Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Takahashi, I.; Nakamura, H.; Suzuki, W.; Kunugi, T.; Aoi, S.; Fujiwara, H.

    2015-12-01

    J-RISQ (Japan Real-time Information System for earthquake) has been under development at NIED to support appropriate first actions after big earthquakes. When an earthquake occurs, seismic intensities (SI) are calculated first at each observation station and sent to the Data Management Center at different times. The system begins its first estimation when the number of stations observing an SI of 2.5 or larger exceeds a threshold. It estimates the SI distribution, exposed population and earthquake damage to buildings by using basic data for estimation, such as subsurface amplification factors, population, and building information, accumulated in J-SHIS (Japan Seismic Hazard Information Station), a public portal for seismic hazard information across Japan developed by NIED. The series of estimations is performed for each 250 m square mesh, and the estimated data are finally converted into information for each municipality. Since October 2013, we have opened the estimated SI, exposed population, etc. to the public through the website, making full use of maps and tables. In the previous system, we sometimes could not inspect information for the areas surrounding the region of strong motion, or the details of particular areas of interest, and could not confirm whether the presented information was the latest without accessing the website. J-RISQ has been advanced by introducing the following functions to settle those problems and promote its use at local or personal levels: it is now possible to focus on specific areas and inspect enlarged information; the estimated information can be downloaded in KML form; the estimated information is updated automatically and provided as the latest version; the newest information can be inspected using RSS readers or RSS-capable browsers; and dedicated pages for smartphones have been prepared. In addition, an English-language version of the website has been released. The information estimated

  16. Fractal dynamics of earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bak, P.; Chen, K.

    1995-05-01

    Many objects in nature, from mountain landscapes to electrical breakdown and turbulence, have a self-similar fractal spatial structure. It seems obvious that to understand the origin of self-similar structures, one must understand the nature of the dynamical processes that created them: temporal and spatial properties must necessarily be completely interwoven. This is particularly true for earthquakes, which have a variety of fractal aspects. The distribution of energy released during earthquakes is given by the Gutenberg-Richter power law. The distribution of epicenters appears to be fractal with dimension D ≈ 1-1.3. The number of aftershocks decays as a function of time according to the Omori power law. There have been several attempts to explain the Gutenberg-Richter law by starting from a fractal distribution of faults or stresses. But this is a hen-and-egg approach: to explain the Gutenberg-Richter law, one assumes the existence of another power law, the fractal distribution. The authors present results of a simple stick-slip model of earthquakes, which evolves to a self-organized critical state. Emphasis is on demonstrating that empirical power laws for earthquakes indicate that the Earth's crust is at the critical state, with no typical time, space, or energy scale. Of course the model is tremendously oversimplified; however, in analogy with equilibrium phenomena they do not expect criticality to depend on details of the model (universality).
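
    For reference, the two empirical power laws invoked above can be written compactly as below; the parameter values in any application are fit to data, and the names here are illustrative placeholders.

    ```python
    import numpy as np

    def omori_rate(t, K, c, p):
        """Modified Omori law: aftershock rate n(t) = K / (t + c)**p
        after a mainshock at t = 0."""
        return K / (np.asarray(t, dtype=float) + c) ** p

    def gutenberg_richter_counts(mags, a, b):
        """Gutenberg-Richter law: N(>= M) = 10**(a - b * M), the expected
        number of events at or above magnitude M."""
        return 10.0 ** (a - b * np.asarray(mags, dtype=float))
    ```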

  17. Discrimination of the earthquake-origin microwave emission from the data of the spaceborne radiometer

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Takano, T.

    2007-12-01

    We previously found, for the first time, microwave emission during rock fracture in a laboratory, and calibrated the emitted power. The detected signal is a sequence of pulses which include microwaves at the selected frequency bands of 300 MHz, 2 GHz and 22 GHz. This fact suggested another means to detect an earthquake which is associated with rock fracture or plate slip. For this purpose, we have analyzed the data obtained by the microwave radiometer AMSR-E aboard the satellite Aqua. Generally, since a microwave emission observed by AMSR-E is affected by various factors (e.g., emission of the earth's surface and emission, absorption and scattering by the atmosphere), we first developed some analysis techniques. Using these techniques, we have successfully extracted features observed only at the time of an earthquake occurrence. This earthquake occurred in Morocco in 2004. Since the depth of the seismic center was shallow and the magnitude large, we have specifically focused on the analysis of this earthquake. This presentation first presents the estimation of the power received by a receiver aboard a satellite. Then, the data obtained by AMSR-E are described, including the disturbances and ambiguity of the data. The techniques to extract microwave signatures out of disturbances are given. Finally, an example of the data analysis is explained for the case of the Morocco earthquake to show distinct emission of microwaves in relation to geological features.

  18. National Earthquake Hazards Reduction Program; time to expand

    USGS Publications Warehouse

    Steinbrugge, K.V.

    1990-01-01

    All of us in earthquake engineering, seismology, and many related disciplines have been directly or indirectly affected by the National Earthquake Hazards Reduction Program (NEHRP). This program was the result of the Earthquake Hazards Reduction Act of 1977 (Public Law 95-124). With well over a decade of experience, should this expression of public policy now take a different or expanded role? 

  19. Cluster-search based monitoring of local earthquakes in SeisComP3

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Becker, J.; Ellguth, E.; Herrnkind, S.; Weber, B.; Henneberger, R.; Blanck, H.

    2016-12-01

    We present a new cluster-search based SeisComP3 module for locating local and regional earthquakes in real time. Real-time earthquake monitoring systems such as SeisComP3 provide the backbone for earthquake early warning (EEW), tsunami early warning (TEW) and the rapid assessment of natural and induced seismicity. For any earthquake monitoring system, fast and accurate event locations are fundamental in determining the reliability and the impact of further analysis. SeisComP3 in the open-source version includes a two-stage detector for picking P waves and a phase associator for locating earthquakes based on P-wave detections. scanloc is a more advanced earthquake location program developed by gempa GmbH with seamless integration into SeisComP3. scanloc performs an advanced cluster search to discriminate earthquakes occurring closely in space and time and makes additional use of S-wave detections. It has proven to provide fast and accurate earthquake locations at local and regional distances, where it outperforms the base SeisComP3 tools. We demonstrate the performance of scanloc for monitoring induced seismicity as well as local and regional earthquakes in different tectonic regimes including subduction, spreading and intra-plate regions. In particular, we present examples and catalogs from real-time monitoring of earthquakes in Northern Chile in recent years, based on data from the IPOC network of the GFZ German Research Centre for Geosciences. Depending on epicentral distance and data transmission, earthquake locations are available within a few seconds after origin time when using scanloc. The association of automatic S-wave detections provides a better constraint on focal depth.

  20. Fast rise times and the physical mechanism of deep earthquakes

    NASA Technical Reports Server (NTRS)

    Houston, H.; Williams, Q.

    1991-01-01

    A systematic global survey of the rise times and stress drops of deep and intermediate earthquakes is reported. When the rise times are scaled to the seismic moment release of the events, their average is nearly twice as fast for events deeper than about 450 km as for shallower events.

  1. Near Real-Time Earthquake Exposure and Damage Assessment: An Example from Turkey

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Çomoǧlu, Mustafa; Erdik, Mustafa

    2014-05-01

    Confined by the infamous strike-slip North Anatolian Fault in the north and by the Hellenic subduction trench in the south, Turkey is one of the most seismically active countries in Europe. Due to this exposure and the fragility of its building stock, Turkey is among the top countries exposed to earthquake hazard in terms of mortality and economic losses. In this study we focus on recent and ongoing efforts to mitigate earthquake risk in near real time. We present actual results for recent earthquakes, such as the M6 event offshore Antalya which occurred on 28 December 2013. Starting at the moment of detection, we obtain a preliminary ground motion intensity distribution based on epicenter and magnitude. Our real-time application is further enhanced by the integration of the SeisComp3 ground motion parameter estimation tool with the Earthquake Loss Estimation Routine (ELER). SeisComp3 provides the online station parameters, which are then automatically incorporated into the ShakeMaps produced by ELER. The resulting ground motion distributions are used together with the building inventory to calculate the expected number of buildings in various damage states. All these analyses are conducted in an automated fashion and are communicated within a few minutes of a triggering event. In our efforts to disseminate earthquake information to the general public, we make extensive use of social networks such as Twitter and collaborate with mobile phone operators.

  2. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulates a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast the seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average; but the absolute value remains small, at 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but do in certain districts afterwards. The ability to forecast the short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk
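
    The cost-benefit step can be sketched as a simple threshold rule: evacuation is objectively justified when the expected loss avoided exceeds its cost. The monetary values below are placeholders for illustration only; the study's actual analysis is spatially resolved and settlement-specific.

    ```python
    def evacuation_warranted(p_death_24h, cost_per_person, loss_per_life):
        """Cost-benefit rule of the van Stiphout et al. (2010) type:
        evacuate when p * L > C, i.e. when the expected avoided loss
        exceeds the evacuation cost. All values are placeholders."""
        return p_death_24h * loss_per_life > cost_per_person

    # Illustration with the probability quoted above: p = 4e-4 one minute
    # after the mainshock. With assumed L = 5e6 and C = 100 per person-day,
    # p * L = 2000 > 100, so evacuation would be justified in that district.
    ```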

  3. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  4. Cascadia Slow Earthquakes: Strategies for Time Independent Inversion of Displacement Fields

    NASA Astrophysics Data System (ADS)

    Szeliga, W. M.; Melbourne, T. I.; Miller, M. M.; Santillan, V. M.

    2004-12-01

    Continuous observations using Global Positioning System geodesy (CGPS) have revealed periodic slow or silent earthquakes along the Cascadia subduction zone with a spectrum of timing and periodicity. These creep events perturb time series of GPS observations and yield coherent displacement fields that relate to the extent and magnitude of fault displacement. In this study, time-independent inversions of the surface displacement fields that accompany eight slow earthquakes characterize slip distributions along the plate interface for each event. The inversions employed in this study utilize Okada's elastic dislocation model and a non-negative least squares approach. Methodologies for optimizing the slip distribution smoothing parameter for a particular station distribution have also been investigated, significantly reducing the number of possible slip distributions and the range of estimates for total moment release for each event. The discretized slip distribution calculated for multiple creep events identifies areas of the Cascadia plate interface where slip persistently recurs. The current hypothesis, that slow earthquakes are modulated by forced fluid flow, leads to the possibility that some regions of the Cascadia plate interface may display fault patches preferentially exploited by fluid flow. Thus, the identification of regions of the plate interface that repeatedly slip during slow events may yield important information regarding the identification of these fluid pathways.
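
    A minimal sketch of the inversion described above, assuming precomputed elastic-dislocation Green's functions G, observed surface displacements d, and a Laplacian smoothing operator L; the smoothing weight is the parameter whose optimization the study investigates. Names and the simple norm setup are illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def invert_slip(G, d, L, smoothing):
        """Non-negative least-squares slip inversion with smoothing:

            min || [G; k*L] s - [d; 0] ||   subject to s >= 0,

        where rows of G map unit slip on each fault patch to surface
        displacements (Okada model) and k = smoothing controls the
        roughness penalty. Returns the slip vector and residual norm."""
        A = np.vstack([G, smoothing * L])
        b = np.concatenate([d, np.zeros(L.shape[0])])
        slip, residual = nnls(A, b)
        return slip, residual
    ```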

  5. Real time drilling mud gas response to small-moderate earthquakes in Wenchuan earthquake Scientific Drilling Hole-1 in SW China

    NASA Astrophysics Data System (ADS)

    Gong, Zheng; Li, Haibing; Tang, Lijun; Lao, Changling; Zhang, Lei; Li, Li

    2017-05-01

    We investigated the real-time drilling mud gas from the Wenchuan earthquake Fault Scientific Drilling Hole-1 and its response to 3,918 small-to-moderate aftershocks that occurred in the Longmenshan fault zone. Gas profiles for Ar, CH4, He, 222Rn, CO2, H2, N2 and O2 were obtained. Seismic wave amplitude, energy density and static strain were calculated to evaluate their influence at the drilling site. Mud gases two hours before and after each earthquake were carefully analyzed. In total, 25 aftershocks produced a major mud gas response, with mud gas concentrations varying dramatically immediately or minutes after the earthquakes. Different gas species respond to earthquakes in different manners according to the local lithology encountered during drilling. The gas variations are likely controlled by dynamic stress changes, rather than static stress changes: the responding events have seismic energy densities between 10^-5 and 1.0 J/m^3, whereas the static strains are mostly less than 10^-8. We suggest that the limitation of the gas sources and the high hydraulic diffusivity of the newly ruptured fault zone could have inhibited the drilling mud gas response, so that the mud gas was only able to respond to a small portion of the aftershocks. This work is important for the understanding of earthquake-related hydrological changes.

  6. Gravitational potential as a source of earthquake energy

    USGS Publications Warehouse

    Barrows, L.; Langer, C.J.

    1981-01-01

    Some degree of tectonic stress within the earth originates from gravity acting upon density structures. The work performed by this "gravitational tectonics stress" must have formerly existed as gravitational potential energy contained in the stress-causing density structure. According to the elastic rebound theory (Reid, 1910), the energy of earthquakes comes from an elastic strain field built up by fairly continuous elastic deformation in the period between events. For earthquakes resulting from gravitational tectonic stress, the elastic rebound theory requires the transfer of energy from the gravitational potential of the density structures into an elastic strain field prior to the event. An alternate theory involves partial gravitational collapse of the stress-causing density structures. The earthquake energy comes directly from a net decrease in gravitational potential energy. The gravitational potential energy released at the time of the earthquake is split between the energy released by the earthquake, including work done in the fault zone and an increase in stored elastic strain energy. The stress associated with this elastic strain field should oppose further fault slip. ?? 1981.

  7. Earthquake-origin expansion of the Earth inferred from a spherical-Earth elastic dislocation theory

    NASA Astrophysics Data System (ADS)

    Xu, Changyi; Sun, Wenke

    2014-12-01

    In this paper, we propose an approach to compute the coseismic change in the Earth's volume based on a spherical-Earth elastic dislocation theory. We present a general expression of the Earth's volume change for three typical dislocations: shear, tensile and explosion sources. We conduct a case study for the 2004 Sumatra earthquake (Mw9.3), the 2010 Chile earthquake (Mw8.8), the 2011 Tohoku-Oki earthquake (Mw9.0) and the 2013 Okhotsk Sea earthquake (Mw8.3). The results show that mega-thrust earthquakes make the Earth expand and earthquakes along a normal fault make the Earth contract. We compare the volume changes computed for finite fault models and for a point source of the 2011 Tohoku-Oki earthquake (Mw9.0). The large difference between the results indicates that the coseismic changes in the Earth's volume (or mean radius) depend strongly on the earthquake's focal mechanism, especially the depth and the dip angle. We then estimate the cumulative volume change caused by historical earthquakes (Mw ≥ 7.0) since 1960, and obtain an expansion rate of the Earth's mean radius of about 0.011 mm yr^-1.

  8. Insights into the origins of drumbeat earthquakes, periodic low frequency seismicity, and plug degradation from multi-instrument monitoring at Tungurahua volcano, Ecuador, April 2015

    NASA Astrophysics Data System (ADS)

    Bell, Andrew; Hernandez, Stephen; Gaunt, Elizabeth; Mothes, Patricia; Hidalgo, Silvana; Ruiz, Mario

    2016-04-01

    Highly-periodic repeating 'drumbeat' earthquakes have been reported from several andesitic and dacitic volcanoes. Physical models for the origin of drumbeat earthquakes incorporate, to different extents, the incremental upward movement of viscous magma. However, the roles played by stick-slip friction, brittle failure, and fluid flow, and the relations between drumbeat earthquakes and other low-frequency seismic signals, remain controversial. Here we report the results of analysis of three weeks of geophysical data recorded during an unrest episode at Tungurahua, an andesitic stratovolcano in Ecuador, during April 2015, by the monitoring network of the Instituto Geofisico of Ecuador. Combined seismic, geodetic, infrasound, and gas monitoring has provided new insights into the origins of periodic low-frequency seismic signals, conduit processes, and the nature of current unrest. Over the three-week period, the relative seismic amplitude (RSAM) correlated closely with short-term deformation rates and gas fluxes. However, the characteristics of the seismic signals, as recorded at a short-period station closest to the summit crater, changed considerably with time. Initially high RSAM and gas fluxes, with modest ash emissions, were associated with continuous and 'pulsed' tremor signals (amplitude modulated, with 30-100 second periods). As activity levels decreased over several days, tremor episodes became increasingly intermittent, and short-lived bursts of low-frequency earthquakes with quasiperiodic inter-event times were observed. Following one day of quiescence, the onset of pronounced low frequency drumbeat earthquakes signalled the resumption of elevated unrest, initially with mean inter-event times of 32 seconds, and later increasing to 74 seconds and longer, with periodicity progressively breaking down over several days. A reduction in RSAM was then followed by one week of persistent, quasiperiodic, longer-duration emergent low-frequency pulses, including

  9. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods as scaling laws, universality, fractal dimension, renormalization group, to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different

  10. a Collaborative Cyberinfrastructure for Earthquake Seismology

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second global earthquake information website, within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from the real-time analysis of the traffic on the EMSC website, a method named flashsourcing: in case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground motion records performed by volunteers, and are also involved in a project to detect earthquakes from ground motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...) not only to distribute earthquake information, but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this

  11. Compiling an earthquake catalogue for the Arabian Plate, Western Asia

    NASA Astrophysics Data System (ADS)

    Deif, Ahmed; Al-Shijbi, Yousuf; El-Hussain, Issa; Ezzelarab, Mohamed; Mohamed, Adel M. E.

    2017-10-01

    The Arabian Plate is surrounded by regions of relatively high seismicity. Accounting for this seismicity is of great importance for seismic hazard and risk assessments, seismic zoning, and land use. In this study, a homogeneous moment-magnitude (Mw) earthquake catalogue for the Arabian Plate is provided. The comprehensive and homogeneous earthquake catalogue spatially covers the entire Arabian Peninsula and neighboring areas, including all earthquake sources that can generate substantial hazard for the Arabian Plate mainland. The catalogue extends in time from AD 19 to 2015, with a total of 13,156 events, of which 497 are historical events. Four polygons covering the entire Arabian Plate were delineated, and different data sources, including special studies and local, regional and international catalogues, were used to prepare the earthquake catalogue. Moment magnitudes (Mw) provided by the original sources were given the highest magnitude-type priority and introduced into the catalogue with their references. Earthquakes with magnitudes reported on scales other than Mw were converted to this scale by applying empirical relationships derived in the current or in previous studies. The four polygon catalogues were combined into two comprehensive earthquake catalogues covering the historical and instrumental periods. Duplicate events were identified and discarded from the catalogue. The final earthquake catalogue was declustered in order to contain only independent events and was investigated for completeness with time over different magnitude ranges.
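
    The duplicate-removal step mentioned above can be sketched as a time-distance window search over the merged catalogues: an event is flagged if an earlier entry lies within a short time window and a small epicentral distance. The window sizes and the small-angle distance approximation are illustrative assumptions, not the thresholds used in the study.

    ```python
    import numpy as np

    def flag_duplicates(t_s, lat, lon, dt_s=16.0, dr_km=50.0):
        """Flag likely duplicate entries from merged catalogues (sketch).
        t_s: event times in seconds; lat, lon: epicenters in degrees."""
        n = len(t_s)
        dup = np.zeros(n, dtype=bool)
        order = np.argsort(t_s)
        for idx, j in enumerate(order):
            for i in order[:idx][::-1]:  # scan backward in time
                if t_s[j] - t_s[i] > dt_s:
                    break  # earlier events fall outside the time window
                # Rough small-angle epicentral distance in km.
                dy = (lat[j] - lat[i]) * 111.0
                dx = (lon[j] - lon[i]) * 111.0 * np.cos(np.radians(lat[j]))
                if np.hypot(dx, dy) < dr_km:
                    dup[j] = True
        return dup
    ```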

  12. Fault healing promotes high-frequency earthquakes in laboratory experiments and on natural faults

    USGS Publications Warehouse

    McLaskey, Gregory C.; Thomas, Amanda M.; Glaser, Steven D.; Nadeau, Robert M.

    2012-01-01

    Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip

  13. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
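
    The classroom comparison of interevent times with a Poisson model can be reproduced in a few lines. The sketch below generates a synthetic catalog, builds the interevent-time histogram, and overlays the exponential PDF f(t) = (1/tau) exp(-t/tau) implied by a Poisson process; all numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(1)
      event_times = np.cumsum(rng.exponential(10.0, size=500))  # synthetic "catalog"

      dt = np.diff(event_times)             # interevent times
      tau = dt.mean()                       # maximum-likelihood exponential scale
      hist, edges = np.histogram(dt, bins=25, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      model = np.exp(-centers / tau) / tau  # exponential PDF at the bin centers
      print("mean interevent time:", tau)   # compare hist vs model bin by bin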

  14. Feasibility study of short-term earthquake prediction using ionospheric anomalies immediately before large earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2017-12-01

    We showed that positive and negative electron density anomalies emerge above a fault immediately before it ruptures, 40/20/10 minutes before Mw 9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki, 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for producing the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in Southwest Japan, and discuss whether we can recognize its preseismic signatures in TEC by real-time observations with GNSS. During periods of high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from the auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in producing the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in conjugate areas in the two hemispheres, we can recognize anomalies with simultaneous onset as those caused by within-ionosphere electric fields (e.g. preseismic anomalies, night-time MSTID) and anomalies without simultaneous onset as disturbances of gravity-wave origin (e.g. LSTID, daytime MSTID).
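
    The conjugate-point discrimination rule sketched above (simultaneous onset in both hemispheres implies a within-ionosphere electric-field origin) can be illustrated as follows; the onset detector, the z-score threshold, and the 120 s simultaneity window are assumptions for the sketch, not values from the study:

      import numpy as np

      def simultaneous_onset(tec_a, tec_b, dt, max_lag_s=120.0, z=3.0):
          """tec_a, tec_b: detrended TEC series at conjugate points, sampled
          every dt seconds. Returns True when the anomaly onsets coincide
          within max_lag_s, suggesting an electric-field (not LSTID) origin."""
          def onset(x):
              base = x[: len(x) // 4]          # quiet leading segment as baseline
              th = base.mean() + z * base.std()
              idx = np.argmax(x > th)          # first exceedance (0 if none found)
              return idx * dt
          return abs(onset(tec_a) - onset(tec_b)) <= max_lag_s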

  15. Increasing seismicity in the U. S. midcontinent: Implications for earthquake hazard

    USGS Publications Warehouse

    Ellsworth, William L.; Llenos, Andrea L.; McGarr, Arthur F.; Michael, Andrew J.; Rubinstein, Justin L.; Mueller, Charles S.; Petersen, Mark D.; Calais, Eric

    2015-01-01

    Earthquake activity in parts of the central United States has increased dramatically in recent years. The space-time distribution of the increased seismicity, as well as numerous published case studies, indicates that the increase is of anthropogenic origin, principally driven by injection of wastewater coproduced with oil and gas from tight formations. Enhanced oil recovery and long-term production also contribute to seismicity at a few locations. Preliminary hazard models indicate that areas experiencing the highest rate of earthquakes in 2014 have a short-term (one-year) hazard comparable to or higher than the hazard in the source region of tectonic earthquakes in the New Madrid and Charleston seismic zones.

  16. The mechanism of earthquake

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    The physical mechanism of earthquakes remains a challenging issue to be clarified. Seismologists have traditionally attributed shallow earthquakes to the elastic rebound of crustal rocks. The seismic energy calculated following the elastic rebound theory and with data from experiments on rocks, however, shows a large discrepancy with measurements, a fact that has been dubbed “the heat flow paradox”. For intermediate-focus and deep-focus earthquakes, both occurring in the region of the mantle, there is no reasonable explanation either. This paper discusses the physical mechanism of earthquakes from a new perspective, starting from the fact that both the crust and the mantle are discrete collective systems of matter with slow dynamics, as well as from the basic principles of physics, especially some new concepts of condensed matter physics that have emerged in recent years. (1) Stress distribution in the earth’s crust: Without taking the tectonic force into account, according to the rheological principle that “everything flows”, the normal stress and transverse stress must be balanced due to the effect of gravitational pressure over a long period of time; thus no differential stress in the original crustal rocks is to be expected. The tectonic force is successively transferred and accumulated via stick-slip motions of rock blocks to squeeze the fault gouge, and is then exerted upon other rock blocks. The superposition of such additional lateral tectonic force and the original stress gives rise to the real-time stress in crustal rocks. The mechanical characteristics of fault gouge differ from those of rocks, as it consists of granular matter. The elastic moduli of fault gouges are much smaller than those of rocks, and they become larger with increasing pressure. This peculiarity of the fault gouge leads to a tectonic force that increases with depth in a nonlinear fashion. The distribution and variation of the tectonic stress in the crust are specified. (2) The

  17. Photocopy of photograph (original located at Mare Island Archives). Original ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of photograph (original located at Mare Island Archives). Original photographer unknown. View of sawmill after earthquake of 1898. - Mare Island Naval Shipyard, East of Nave Drive, Vallejo, Solano County, CA

  18. New ideas about the physics of earthquakes

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Klein, William

    1995-07-01

    It may be no exaggeration to claim that this most recent quadrennium has seen more controversy, and thus more progress, in understanding the physics of earthquakes than any in recent memory. The most interesting development has clearly been the emergence of a large community of condensed matter physicists around the world who have begun working on the problem of earthquake physics. These scientists bring to the study of earthquakes an entirely new viewpoint, grounded in the physics of nucleation and critical phenomena in thermal, magnetic, and other systems. Moreover, a surprising technology transfer from geophysics to other fields has been made possible by the realization that models originally proposed to explain self-organization in earthquakes can also be used to explain similar processes in problems as disparate as brain dynamics in neurobiology (Hopfield, 1994) and charge density waves in solids (Brown and Gruner, 1994). An entirely new sub-discipline is emerging that is focused around the development and analysis of large-scale numerical simulations of the dynamics of faults. At the same time, intriguing new laboratory and field data, together with insightful physical reasoning, have led to significant advances in our understanding of earthquake source physics. As a consequence, we can anticipate substantial improvement in our ability to understand the nature of earthquake occurrence. Moreover, while much research in the area of earthquake physics is fundamental in character, the results have many potential applications (Cornell et al., 1993) in the areas of earthquake risk and hazard analysis, and seismic zonation.

  19. Earthquake-enhanced permeability - evidence from carbon dioxide release following the ML 3.5 earthquake in West Bohemia

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Matyska, C.; Heinicke, J.

    2017-02-01

    The West Bohemia/Vogtland region is characterized by earthquake swarm activity and degassing of CO2 of mantle origin. A rapid increase of CO2 flow rate was observed 4 days after a ML 3.5 earthquake in May 2014 in the Hartoušov mofette, 9 km from the epicentres. During the subsequent 150 days the flow reached six times the original level, and it has been slowly decaying until the present. Similar behavior was observed during and after the swarm in 2008, pointing to a fault-valve mechanism in the long term. Here, we present the results of a simulation of gas flow in a two-dimensional model of the Earth's crust, composed of a sealing layer at the hypocentre depth which is penetrated by the earthquake fault and releases fluid from a relatively low-permeability lower crust. This simple model is capable of explaining the observations, including the short travel time of the flow pulse from 8 km depth to the surface, the long-term flow increase, and its subsequent slow decay. Our model is consistent with another analysis of the 2014 aftershocks, which attributes their anomalous character to an exponentially decreasing external fluid forcing. Our observations and model hence track the fluid pressure pulse from depth, where it was responsible for aftershock triggering, to the surface, where a significant long-term increase of CO2 flow started 4 days later.
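
    The plausibility of a days-long travel time from 8 km depth can be checked with a one-dimensional simplification of the two-dimensional model above: a pressure step diffusing toward the surface in a half-space, p(z, t)/p0 = erfc(z / (2 sqrt(D t))). The hydraulic diffusivity and the 10% arrival threshold below are assumptions, not values from the paper:

      import numpy as np
      from scipy.special import erfc
      from scipy.optimize import brentq

      L = 8000.0   # travel distance from hypocentre depth to surface, m
      D = 100.0    # hydraulic diffusivity, m^2/s (assumed damage-zone value)

      def arrival_time(threshold=0.1):
          """Time (s) until the normalized overpressure at the surface
          first reaches `threshold` of the step applied at depth."""
          f = lambda t: erfc(L / (2.0 * np.sqrt(D * t))) - threshold
          return brentq(f, 1.0, 1e9)

      print("pulse arrival after %.1f days" % (arrival_time() / 86400.0))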

  20. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction: Research and Risk Mitigation by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and more references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust to play a significant role in modifying crustal conditions in the long and short term. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach in earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  1. Time functions of deep earthquakes from broadband and short-period stacks

    USGS Publications Warehouse

    Houston, H.; Benz, H.M.; Vidale, J.E.

    1998-01-01

    To constrain dynamic source properties of deep earthquakes, we have systematically constructed broadband time functions of deep earthquakes by stacking and scaling teleseismic P waves from U.S. National Seismic Network, TERRAscope, and Berkeley Digital Seismic Network broadband stations. We examined 42 earthquakes with depths from 100 to 660 km that occurred between July 1, 1992 and July 31, 1995. To directly compare time functions, or to group them by size, depth, or region, it is essential to scale them to remove the effect of moment, which varies by more than 3 orders of magnitude for these events. For each event we also computed short-period stacks of P waves recorded by west coast regional arrays. The comparison of broadband with short-period stacks yields a considerable advantage, enabling more reliable measurement of event duration. A more accurate estimate of the duration better constrains the scaling procedure to remove the effect of moment, producing scaled time functions with both correct timing and amplitude. We find only subtle differences in the broadband time-function shape with moment, indicating successful scaling and minimal effects of attenuation at the periods considered here. The average shape of the envelopes of the short-period stacks is very similar to the average broadband time function. The main variations seen with depth are (1) a mild decrease in duration with increasing depth, (2) greater asymmetry in the time functions of intermediate events compared to deep ones, and (3) unexpected complexity and late moment release for events between 350 and 550 km, with seven of the eight events in that depth interval displaying markedly more complicated time functions with more moment release late in the rupture than most events above or below. The first two results are broadly consistent with our previous studies, while the third is reported here for the first time. The greater complexity between 350 and 550 km suggests greater heterogeneity in
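
    The moment scaling described above is commonly done by assuming durations grow as the cube root of seismic moment. A hedged sketch of such a scaling (the cube-root convention and the 10^19 N m reference moment are illustrative choices, not the exact procedure of the study):

      import numpy as np

      def scale_time_function(stf, dt, moment, ref_moment=1e19):
          """Rescale a source time function to a common reference moment:
          time is compressed by (M0/M0_ref)**(1/3) and amplitude is rescaled
          so the scaled function integrates to the reference moment."""
          s = (moment / ref_moment) ** (1.0 / 3.0)
          t_scaled = np.arange(len(stf)) * dt / s
          amp_scaled = stf * (ref_moment / moment) * s
          return t_scaled, amp_scaled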

  2. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.

  3. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  4. Near real-time finite fault source inversion for moderate-large earthquakes in Taiwan using teleseismic P waveform

    NASA Astrophysics Data System (ADS)

    Wong, T. P.; Lee, S. J.; Gung, Y.

    2017-12-01

    Taiwan is located in one of the most active tectonic regions in the world. Rapid estimation of the spatial slip distribution of moderate-large earthquakes (Mw ≥ 6.0) is important for emergency response, so it is necessary to have a real-time system that provides such reports immediately after an earthquake happens. Earthquake activity in the vicinity of Taiwan is monitored by the Real-Time Moment Tensor Monitoring System (RMT), which provides rapid focal mechanisms and source parameters. In this study, we build on the RMT system to develop a near real-time finite-fault source inversion system for moderate-large earthquakes occurring in Taiwan. The system is triggered by the RMT system when an event of Mw ≥ 6.0 is detected. According to the RMT report, our system automatically determines the fault dimensions, record length, and rise time. We adopted a one-segment fault plane with variable rake angle. Generalized ray theory was applied to calculate the Green's function for each subfault. The primary objective of the system is to provide a first-order image of the coseismic slip pattern and to identify the centroid location on the fault plane. The performance of this system has been successfully demonstrated on 23 large earthquakes that occurred in Taiwan. The results show excellent data fits and are consistent with the solutions from other studies. The preliminary spatial slip distribution is provided within 25 minutes after an earthquake occurs.

  5. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  6. Real-time GPS integration for prototype earthquake early warning and near-field imaging of the earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Given, D.; King, N. E.; Lisowski, M.; Langbein, J. O.; Murray-Moraleda, J. R.; Gomberg, J. S.

    2011-12-01

    Over the past several years, the USGS has developed the infrastructure for integrating real-time GPS with seismic data in order to improve our ability to respond to earthquakes and volcanic activity. As part of this effort, we have tested real-time GPS processing software components and identified the most robust and scalable options. Simultaneously, additional near-field monitoring stations have been built using a new station design that combines dual-frequency GPS with high-quality strong-motion sensors and dataloggers. Several existing stations have been upgraded in this way, using USGS Multi-Hazards Demonstration Project and American Recovery and Reinvestment Act funds in southern California. In particular, existing seismic stations have been augmented by the addition of GPS and vice versa. The focus of new instrumentation as well as datalogger and telemetry upgrades to date has been along the southern San Andreas fault, in hopes of 1) capturing a large and potentially damaging rupture in progress and augmenting inputs to earthquake early warning systems, and 2) recovering high-quality, on-scale recordings of large dynamic displacement waveforms, static displacements, and immediate and long-term post-seismic transient deformation. Obtaining definitive records of large ground motions close to a large San Andreas or Cascadia rupture (or volcanic activity) would be a fundamentally important contribution to understanding near-source large ground motions and the physics of earthquakes, including the rupture process and the friction associated with crack propagation and healing. Soon, telemetry upgrades will be completed in Cascadia and throughout the Plate Boundary Observatory as well. By collaborating with other groups on open-source automation system development, we will be ready to process the newly available real-time GPS data streams and to fold these data in with existing strong-motion and other seismic data. Data from these same stations will also serve the very

  7. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different; hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimension space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The cumulative distributions of earthquake parameters, in general unknown, are estimated from earthquake catalogues by means of the model-free, non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space, and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
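
    The ED transformation itself is compact: each parameter value is replaced by its estimated cumulative distribution value, after which Euclidean distances are meaningful. A minimal sketch using a Gaussian kernel density estimate (the kernel choice here is a convenience; the paper specifies its own model-free kernel estimator):

      import numpy as np
      from scipy.stats import gaussian_kde

      def to_equivalent_dimension(values):
          """Map one earthquake parameter (e.g. magnitude, depth, time) to its
          equivalent dimension: the kernel-estimated CDF value in [0, 1]."""
          kde = gaussian_kde(values)
          return np.array([kde.integrate_box_1d(-np.inf, v) for v in values])

      def ed_distance(ed_matrix, i, j):
          """Euclidean distance between events i and j in the ED space, where
          ed_matrix has one column per transformed parameter."""
          return np.linalg.norm(ed_matrix[i] - ed_matrix[j])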

  8. Real-Time Earthquake Intensity Estimation Using Streaming Data Analysis of Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.

    2017-06-01

    Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data is acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data is based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The second implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher
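
    A minimal stand-in for the tweet-rate-to-intensity step might look as follows; the logarithmic form and the coefficients a and b are placeholders, not the empirical relation calibrated on the South Napa data:

      import numpy as np

      def mmi_from_tweet_rate(rate, a=1.0, b=2.0):
          """Hypothetical proxy MMI from a geolocated tweet rate (tweets/min),
          clipped to the valid intensity range."""
          return np.clip(a * np.log10(rate + 1.0) + b, 1.0, 10.0)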

  9. Role of Equatorial Anomaly in Earthquake time precursive features: A few strong events over West Pacific zone

    NASA Astrophysics Data System (ADS)

    Devi, Minakshi; Patgiri, S.; Barbara, A. K.; Oyama, Koh-Ichiro; Ryu, K.; Depuev, V.; Depueva, A.

    2018-03-01

    The earthquake (EQ) time coupling processes between the equatorial, low- and mid-latitude ionosphere are complex, owing to the inherent dynamical state of each latitudinal zone and the geomagnetic processes at work in the system. In an attempt to identify such processes, the paper presents temporal and latitudinal variations of ionization density (foF2) covering 45°N to 35°S during a number of earthquake events (M > 5.5). The approaches adopted for extracting features of the earthquake-induced preparatory processes are discussed in the paper through the identification of parameters like the 'EQ time modification in density gradient', defined by δ = (foF2 max - foF2 min)/τmm, where τmm is the time span (in days) between the EQ-modified density maximum and minimum, and the Earthquake time Equatorial Anomaly (EEA), one of the most significant phenomena, which develops even during night time irrespective of epicenter position. Based on the observations, the paper presents the seismic-time coupling dynamics through anomaly-like manifestations between the equatorial, low- and mid-latitude ionosphere, bringing in global Total Electron Content (TEC) features as supporting indices.
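
    The density-gradient parameter is straightforward to compute from a daily foF2 series; a direct transcription of the formula above:

      import numpy as np

      def density_gradient(days, fof2):
          """delta = (foF2_max - foF2_min) / tau_mm, where tau_mm is the span
          in days between the EQ-modified density maximum and minimum."""
          i_max, i_min = np.argmax(fof2), np.argmin(fof2)
          tau_mm = abs(days[i_max] - days[i_min])   # assumes max and min on different days
          return (fof2[i_max] - fof2[i_min]) / tau_mm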

  10. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  11. Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing

    2017-12-01

    Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very-long-period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.

  12. Theory of earthquakes interevent times applied to financial markets

    NASA Astrophysics Data System (ADS)

    Jagielski, Maciej; Kutner, Ryszard; Sornette, Didier

    2017-10-01

    We analyze the probability density function (PDF) of waiting times between financial loss exceedances. The empirical PDFs are fitted with the self-excited Hawkes conditional Poisson process with a long power law memory kernel. The Hawkes process is the simplest extension of the Poisson process that takes into account how past events influence the occurrence of future events. By analyzing the empirical data for 15 different financial assets, we show that the formalism of the Hawkes process used for earthquakes can successfully model the PDF of interevent times between successive market losses.
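
    A self-excited Hawkes process with a power-law memory kernel can be simulated with Ogata's thinning algorithm, sketched below with illustrative (subcritical) parameters; the paper fits the process to loss exceedances rather than simulating it:

      import numpy as np

      def simulate_hawkes(mu, K, c, p, T, rng):
          """Ogata thinning for a Hawkes process with power-law kernel
          phi(t) = K / (t + c)**p. Because the kernel decays, the intensity
          evaluated at the current time bounds it until the next event."""
          events, t = [], 0.0
          while True:
              lam_bar = mu + sum(K / (t - ti + c) ** p for ti in events)
              t += rng.exponential(1.0 / lam_bar)      # candidate event time
              if t >= T:
                  return np.array(events)
              lam_t = mu + sum(K / (t - ti + c) ** p for ti in events)
              if rng.uniform() <= lam_t / lam_bar:     # accept w.p. lam_t/lam_bar
                  events.append(t)

      rng = np.random.default_rng(0)
      times = simulate_hawkes(mu=0.5, K=0.2, c=1.0, p=1.5, T=1000.0, rng=rng)
      waiting = np.diff(times)   # interevent (waiting) times, as analyzed in the paper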

  13. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    USGS Publications Warehouse

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
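
    Measuring Tdp requires a reference "similar growth" curve (in the paper, built from K-NET records binned by magnitude and distance). Given such a reference envelope, a hedged sketch of the departure-time measurement, with an illustrative filter band and departure factor:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def departure_time(disp, ref_env, fs=100.0, band=(0.5, 10.0), factor=2.0):
          """Band-pass the absolute P-wave displacement and return the first
          time (s) it leaves the reference growth curve by more than `factor`."""
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          env = np.abs(filtfilt(b, a, disp))
          ratio = env / np.maximum(ref_env, 1e-12)   # ref_env: averaged growth curve
          away = (ratio > factor) | (ratio < 1.0 / factor)
          if not away.any():
              return None                            # no departure in this window
          return np.argmax(away) / fs                # Tdp in seconds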

  14. Application of Geostatistical Methods and Machine Learning for spatio-temporal Earthquake Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Schaefer, A. M.; Daniell, J. E.; Wenzel, F.

    2014-12-01

    Earthquake clustering is an increasingly important part of general earthquake research, especially in terms of seismic hazard assessment and earthquake forecasting and prediction approaches. The distinct identification and definition of foreshocks, aftershocks, mainshocks and secondary mainshocks is addressed using a point-based spatio-temporal clustering algorithm originating from the field of classic machine learning. This can further be applied for declustering purposes, to separate background seismicity from triggered seismicity. The results are interpreted and processed to assemble 3D (x, y, t) earthquake clustering maps, which are based on smoothed seismicity records in space and time. In addition, multi-dimensional Gaussian functions are used to capture clustering parameters for spatial distribution and dominant orientations. Clusters are further processed using methodologies originating from geostatistics, which have mostly been applied and developed in mining projects during recent decades. A 2.5-D variogram analysis is applied to identify spatio-temporal homogeneity in terms of earthquake density and energy output. The results are interpolated using Kriging to provide an accurate mapping solution for clustering features. As a case study, seismic data from New Zealand and the United States are used, covering events since the 1950s, from which an earthquake cluster catalogue is assembled for most of the major events, including a detailed analysis of the Landers and Christchurch sequences.
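
    The point-based spatio-temporal clustering step can be illustrated with DBSCAN, a classic machine-learning clustering algorithm, by mapping time onto an equivalent spatial distance. Whether DBSCAN is the authors' exact algorithm is not stated in the abstract, and the km-per-day conversion factor is a tuning assumption:

      import numpy as np
      from sklearn.cluster import DBSCAN

      def spacetime_clusters(x_km, y_km, t_days, v_kmpd=1.0, eps_km=15.0, min_pts=5):
          """Cluster events in (x, y, t): time is scaled by v_kmpd (km per day)
          so one Euclidean metric covers space and time. Label -1 marks
          background seismicity; other labels are cluster ids."""
          X = np.column_stack([x_km, y_km, v_kmpd * np.asarray(t_days)])
          return DBSCAN(eps=eps_km, min_samples=min_pts).fit_predict(X)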

  15. The Effects of the Passage of Time from the 2011 Tohoku Earthquake on the Public's Anxiety about a Variety of Hazards.

    PubMed

    Nakayachi, Kazuya; Nagaya, Kazuhisa

    2016-08-31

    This research investigated whether the Japanese people's anxiety about a variety of hazards, including earthquakes and nuclear accidents, has changed over time since the Tohoku Earthquake in 2011. Data from three nationwide surveys conducted in 2008, 2012, and 2015 were compared to see the change in societal levels of anxiety toward 51 types of hazards. The same two-phase stratified random sampling method was used to create the list of participants in each survey. The results showed that anxiety about earthquakes and nuclear accidents had increased for a time after the Tohoku Earthquake, and then decreased after a four-year time frame with no severe earthquakes and nuclear accidents. It was also revealed that the anxiety level for some hazards other than earthquakes and nuclear accidents had decreased at ten months after the Earthquake, and then remained unchanged after the four years. Therefore, ironically, a major disaster might decrease the public anxiety in general at least for several years.

  16. Evaluation of Earthquake Detection Performance in Terms of Quality and Speed in SEISCOMP3 Using New Modules Qceval, Npeval and Sceval

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.

    2017-12-01

    The geometry of seismic monitoring networks, site conditions and data availability, as well as monitoring targets and strategies, typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance, including qceval, npeval and sceval. qceval is a module which analyzes waveform quality parameters in real-time and deactivates and reactivates data streams for automatic processing based on waveform quality thresholds. For example, thresholds can be defined for latency, delay, timing quality, spike and gap counts, and RMS. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module which classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments, or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues allows network detection thresholds to be lowered. In real-time monitoring situations operators can limit the processing to
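
    The gating behavior attributed to qceval reduces to threshold checks over a handful of stream-quality metrics. The sketch below is a generic illustration; the metric names, limits, and interface are assumptions, not the actual SeisComP3/gempa configuration syntax:

      THRESHOLDS = {               # illustrative limits per quality metric
          "latency_s": 30.0,
          "delay_s": 10.0,
          "gaps_per_hr": 5,
          "spikes_per_hr": 3,
          "rms_counts": 1e6,
      }

      def stream_enabled(metrics):
          """Keep a data stream active for automatic processing only while
          every quality metric is within its threshold."""
          return all(metrics[k] <= v for k, v in THRESHOLDS.items())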

  17. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
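
    FAST builds binary fingerprints from wavelet-transformed spectral images and groups them with a min-hash scheme; the sketch below substitutes simpler stand-ins (a top-k spectral bitmap and bit-sampling locality-sensitive hashing) to convey the pipeline of fingerprint, hash buckets, and candidate pairs. Window lengths and parameter values are illustrative:

      import numpy as np
      from scipy.signal import spectrogram

      def fingerprints(x, fs, win_s=10.0, lag_s=2.0, k=200):
          """Binary fingerprints of overlapping windows of a continuous record."""
          f, t, S = spectrogram(x, fs=fs, nperseg=int(2 * fs), noverlap=int(fs))
          S = np.log10(S + 1e-12)                  # log spectrogram, ~1 s columns
          cols = int(win_s / (t[1] - t[0]))        # columns per fingerprint window
          step = int(lag_s / (t[1] - t[0]))
          fps, starts = [], []
          for i in range(0, S.shape[1] - cols, step):
              patch = S[:, i:i + cols].ravel()
              top_k = np.partition(patch, -k)[-k]  # k-th largest spectral value
              fps.append(patch >= top_k)           # keep only the strongest features
              starts.append(t[i])
          return np.array(fps), np.array(starts)

      def candidate_pairs(fps, n_tables=20, bits=64, seed=0):
          """Bit-sampling LSH: windows sharing a sampled bit pattern in any
          hash table become candidate similar (possibly repeating) events."""
          rng = np.random.default_rng(seed)
          n, d = fps.shape
          pairs = set()
          for _ in range(n_tables):
              idx = rng.choice(d, size=bits, replace=False)
              buckets = {}
              for i in range(n):
                  buckets.setdefault(fps[i, idx].tobytes(), []).append(i)
              for members in buckets.values():
                  pairs.update((a, b) for j, a in enumerate(members)
                               for b in members[j + 1:])
          return pairs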

  18. Volcanotectonic earthquakes induced by propagating dikes

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2016-04-01

    Volcanotectonic earthquakes are of high frequency and mostly generated by slip on faults. During chamber expansion/contraction, earthquakes are distributed in the chamber roof. Following magma-chamber rupture and dike injection, however, earthquakes tend to concentrate around the dike and follow its propagation path, resulting in an earthquake swarm characterised by a number of earthquakes of similar magnitudes. I distinguish between two basic processes by which propagating dikes induce earthquakes. One is due to stress concentration in the process zone at the tip of the dike; the other relates to stresses induced in the walls and surrounding rocks on either side of the dike. As to the first process, some earthquakes generated at the dike tip are related to pure extension fracturing as the tip advances and the dike path forms. Formation of pure extension fractures normally induces non-double-couple earthquakes. There is also shear fracturing in the process zone, however, particularly normal faulting, which produces double-couple earthquakes. The second process relates primarily to slip on existing fractures in the host rock induced by the driving pressure of the propagating dike. Such pressures easily reach 5-20 MPa and induce compressive and shear stresses in the adjacent host rock, which already contains numerous fractures (mainly joints) of different attitudes. In piles of lava flows or sedimentary beds the original joints are primarily vertical and horizontal. Similarly, the contacts between the layers/beds are originally horizontal. As the layers/beds become buried, the joints and contacts become gradually tilted, so that they become oblique to the horizontal compressive stress induced by the driving pressure of the (vertical) dike. Also, most of the hexagonal (or pentagonal) columnar joints in the lava flows are, from the beginning, oblique to an intrusive sheet of any attitude. Consequently, the joints and contacts function as potential shear

  19. The Evolution of the Seismic-Aseismic Transition During the Earthquake Cycle: Constraints from the Time-Dependent Depth Distribution of Aftershocks

    NASA Astrophysics Data System (ADS)

    Rolandone, F.; Bürgmann, R.; Nadeau, R.; Freed, A.

    2003-12-01

    We have demonstrated that in the aftermath of large earthquakes, the depth extent of aftershocks shows an immediate deepening from pre-earthquake levels, followed by a time-dependent postseismic shallowing. We use these seismic data to constrain the variation of the depth of the seismic-aseismic transition with time throughout the earthquake cycle. Most studies of the seismic-aseismic transition have focussed on the effect of temperature and/or lithology on the transition either from brittle faulting to viscous flow or from unstable to stable sliding. They have shown that the maximum depth of seismic activity is well correlated with the spatial variations of these two parameters. However, little has been done to examine how the maximum depth of seismogenic faulting varies locally, at the scale of a fault segment, during the course of the earthquake cycle. Geologic and laboratory observations indicate that the depth of the seismic-aseismic transition should vary with strain rate and thus change with time throughout the earthquake cycle. We quantify the time-dependent variations in the depth of seismicity on various strike-slip faults in California before and after large earthquakes. We specifically investigate (1) the deepening of the aftershocks relative to the background seismicity, (2) the time constant of the postseismic shallowing of the deepest earthquakes, and (3) the correlation of the time-dependent pattern with the coseismic slip distribution and the expected stress increase. Together with geodetic measurements, these seismological observations form the basis for developing more sophisticated models for the mechanical evolution of strike-slip shear zones during the earthquake cycle. We develop non-linear viscoelastic models, for which the brittle-ductile transition is not fixed, but varies with assumed temperature and calculated stress gradients. We use them to place constraints on strain rate at depth, on time-dependent rheology, and on the partitioning

  20. Determination of Focal Depths of Earthquakes in the Mid-Oceanic Ridges from Amplitude Spectra of Surface Waves

    DTIC Science & Technology

    1969-06-01

    Foreshock, mainshock and aftershock of the Parkfield, California earthquake of June 28, 1966. b. The Denver earthquake of August 9, 1967. Let us look ... into the results of these tests in more detail. (1) Test on the main shock, foreshock and aftershock of the Parkfield earthquake of June 28, 1966 ... According to McEvilly et al. (1967), the origin times and locations of these events were the following: Foreshock: June 28, 1966, 04:08:56.2 GMT; 35° 57.6

  1. Earthquake source parameters from GPS-measured static displacements with potential for real-time application

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2013-01-01

    We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
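
    For a fixed trial location, the static displacements are linear in the six moment-tensor components, so each step of the inversion reduces to least squares; the outer loop over candidate centroid locations is summarized in a comment. G below stands for the Green's-function excitation coefficients that a layered-medium code would supply, and this sketch is an illustration of the structure, not the authors' implementation:

      import numpy as np

      def invert_moment_tensor(G, d):
          """Solve d = G m for the six moment-tensor components m, given
          G (3*nsta x 6) static excitation kernels and d the observed
          three-component GPS static offsets stacked into one vector."""
          m, *_ = np.linalg.lstsq(G, d, rcond=None)
          residual = np.linalg.norm(d - G @ m)
          return m, residual

      # The full scheme iterates this linear step over trial centroid
      # locations (each with its own G) and keeps the pair (location, m)
      # that minimizes the least-squares residual.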

  2. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say Mλ ≥ 4, and one small, say Mσ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes. The cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake; the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results are applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of Mσ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of Mλ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injections. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
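
    The EPS computation as defined above translates almost directly into code; the magnitude thresholds are the paper's illustrative values, and the catalog is assumed to be a chronological list of magnitudes:

      import numpy as np

      def eps(mags, m_small=2.75, m_large=4.0):
          """Earthquake potential score in natural time: the empirical CDF of
          small-event counts between large events, evaluated at the count
          accumulated since the last large event."""
          counts, current = [], 0
          for m in mags:                     # chronological order assumed
              if m >= m_large:
                  counts.append(current)     # close one large-to-large interval
                  current = 0
              elif m >= m_small:
                  current += 1
          if not counts:
              raise ValueError("catalog contains no large events")
          counts = np.sort(counts)
          return np.searchsorted(counts, current, side="right") / len(counts)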

  3. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  4. Origin of short-period signals following P-diffracted waves: A case study of the 1994 Bolivian deep earthquake

    NASA Astrophysics Data System (ADS)

    Tono, Yoko; Yomogida, Kiyoshi

    1997-10-01

    Seismograms of the June 9, 1994, Bolivian deep earthquake recorded at epicentral distances from 100° to 122° show a train of signals with predominant frequencies between 1 and 2 Hz after the arrivals of short-period diffracted P waves (Pdiff). We investigate the origin of these signals following Pdiff by analyzing a total of 20 records from the IRIS broad-band network and the short-period network of New Zealand. The arrivals of late signals continue for over 100 s, that is, two times longer than the estimated source duration of this event. Aftershocks that could cause the following signals are not expected from the long-period records. These results indicate that the long continuation of short-period signals is not due to source complexities. The signals following Pdiff have small incident angles, and their spectra show peaks at about the same frequencies. These characteristics of the following signals exclude the possibility that their origin is in shallow structure, such as heterogeneities beneath the stations or in the upper mantle. Pdiff propagates a long distance within the heterogeneous region near the core-mantle boundary. We conclude that the short-period signals following the main Pdiff are scattered waves caused by small-scale heterogeneities near the core-mantle boundary.

  5. Comparison of aftershock sequences between 1975 Haicheng earthquake and 1976 Tangshan earthquake

    NASA Astrophysics Data System (ADS)

    Liu, B.

    2017-12-01

    The 1975 ML 7.3 Haicheng earthquake and the 1976 ML 7.8 Tangshan earthquake occurred in the same tectonic unit. There are significant differences in the spatial-temporal distribution, number of aftershocks, and time duration of the aftershock sequences that followed these two main shocks. Aftershocks can be triggered by the change in regional seismicity driven by the main shock, which is caused by the Coulomb stress perturbation. Based on the rate- and state-dependent friction law, we quantitatively estimated the possible aftershock durations in combination with seismicity data, and compared the results from different approaches. The results indicate that the aftershock duration of the Tangshan main shock is several times that of the Haicheng main shock. This can be explained by the significant dependence of aftershock duration on the earthquake nucleation history, the normal stress, and the shear stress loading rate on the fault. The most obvious difference in nucleation history between these two main shocks is the foreshocks: the 1975 Haicheng earthquake had clear and long foreshock activity, while the 1976 Tangshan earthquake did not. Abundant foreshocks may indicate a long and active nucleation process that changed (weakened) the rocks in the source region, which should produce a shorter aftershock sequence, because stress in weak rocks decays faster.
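
    The rate-and-state estimate of aftershock duration invoked above is commonly expressed, following Dieterich's (1994) seismicity-rate theory (whether the authors used exactly this form is an assumption here), as

        t_a \approx \frac{A\sigma}{\dot{\tau}},

    where A is the rate-state constitutive parameter, σ the effective normal stress, and τ̇ the shear stressing rate on the fault. Weakened source rocks (smaller Aσ) or faster loading then imply a shorter aftershock sequence, consistent with the Haicheng/Tangshan contrast drawn above.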

  6. Prioritizing earthquake and tsunami alerting efforts

    NASA Astrophysics Data System (ADS)

    Allen, R. M.; Allen, S.; Aranha, M. A.; Chung, A. I.; Hellweg, M.; Henson, I. H.; Melgar, D.; Neuhauser, D. S.; Nof, R. N.; Strauss, J. A.

    2015-12-01

    The timeline of hazards associated with earthquakes ranges from seconds for the strong shaking at the epicenter, to minutes for strong shaking at more distant locations in big quakes, to tens of minutes for a local tsunami. Earthquake and tsunami warning systems must therefore include very fast initial alerts, while also taking advantage of the available time in bigger and tsunami-generating quakes. At the UC Berkeley Seismological Laboratory we are developing a suite of algorithms to provide the fullest possible information about earthquake shaking and tsunami inundation from seconds to minutes after a quake. The E-larmS algorithm uses the P-wave to rapidly detect an earthquake and issue a warning. It is currently issuing alerts to test users in as little as 3 sec after the origin time. Development of a new waveform detector may lead to even faster alerts. G-larmS uses permanent deformation estimates from GNSS stations to estimate the geometry and extent of the rupture underway, providing more accurate ground shaking estimates in big (M>~7) earthquakes. It performed well in the M6.0 2014 Napa earthquake. T-larmS is a new algorithm designed to extend alert capabilities to tsunami inundation. Rapid estimates of source characteristics for subduction zone events can be used to warn not only of the shaking hazard, but also of the local tsunami inundation hazard. These algorithms are being developed, implemented and tested with a focus on the western US, but are also now being tested in other parts of the world, including Israel, Turkey, Korea and Chile. Beta users in the Bay Area are receiving the alerts and beginning to implement automated actions. They also provide feedback on user needs, which has led to the development of the MyEEW smartphone app. This app allows beta users to receive the alerts on their cell phones. All these efforts feed into our ongoing assessment of directions and priorities for future development and implementation efforts.

  7. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependence of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase, carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). The OEF system uses the two most popular short-term models: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results of OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence in the Central Apennines (Italy).
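
    The ETAS model named above rests on a conditional intensity in which every past event adds an Omori-type aftershock term. A minimal sketch of that rate calculation; the parameter values are generic textbook assumptions, not the CPS calibration:

        import numpy as np

        def etas_rate(t, times, mags, mu=0.1, K=0.02, alpha=1.2,
                      c=0.01, p=1.1, m0=3.0):
            """Conditional intensity lambda(t), events/day, of a temporal
            ETAS model: background mu plus Omori kicks from past events."""
            past = times < t
            kick = (K * np.exp(alpha * (mags[past] - m0))
                    / (t - times[past] + c) ** p)
            return mu + kick.sum()

        times = np.array([0.0, 0.5, 0.6])   # days; an illustrative catalog
        mags = np.array([5.9, 4.2, 4.5])
        print(etas_rate(1.0, times, mags))  # forecast rate one day in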

  8. Remote Imaging of Earthquake Characteristics Along Oceanic Transforms

    NASA Astrophysics Data System (ADS)

    Cleveland, M.; Ammon, C. J.

    2014-12-01

    Compared with subduction and continental transform systems, many characteristics of oceanic transform faults (OTFs) are better defined (first-order structure and composition, thermal properties, etc.). Still, many aspects of earthquake behavior along OTFs remain poorly understood as a result of their relative remoteness. Yet the substantial aseismic deformation (averaging roughly 85%) that occurs along OTFs, and the implied interaction of aseismic with seismic deformation, offers an opportunity to explore fundamental earthquake nucleation and rupture processes. However, studying OTF earthquake properties is not easy because these faults are often located in remote regions lacking nearby seismic networks. Thus, many standard network-based seismic approaches are infeasible, but some can be adapted to the effort. For example, double-difference methods applied to cross-correlation-measured Rayleigh wave time shifts are an effective tool for providing greatly improved relative epicentroid locations, origin-time shifts, and relative event magnitudes for earthquakes in remote regions. The same comparative waveform measurements can provide insight into the rupture directivity of the larger OTF events. In this study, we calculate improved relative earthquake locations and magnitudes of earthquakes along the Blanco Fracture Zone in the northeast Pacific Ocean and compare and contrast that work with a study of the more remote Menard Transform Fault (MTF), located in the southeast Pacific Ocean. For the Blanco, we work exclusively with Rayleigh (R1) observations, exploiting the dense networks in the northern hemisphere. For the MTF, we combine R1 with Love (G1) observations to map and to analyze the distribution of strong asperities along this remote, 200-km-long fault. Specifically, we attempt to better define the relationship between observed near-transform normal and vertical strike-slip earthquakes in the vicinity of the MTF. We test our ability to use distant observations (the
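
    The basic measurement behind this approach is the cross-correlation lag between the same surface-wave arrival from two nearby events recorded at a common station; differencing such lags across stations feeds the double-difference inversion. A minimal sketch, with illustrative names:

        import numpy as np

        def relative_shift(x, y, dt):
            """Time shift (s) of trace x relative to trace y at the peak of
            the full cross-correlation; positive means x arrives later."""
            cc = np.correlate(x, y, mode="full")
            lag = int(np.argmax(cc)) - (len(y) - 1)
            return lag * dt

        # Amplitude ratios at the best-fit lag give relative event magnitudes;
        # azimuthal variation of the shifts constrains rupture directivity.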

  9. Earthquake Clustering in Noisy Viscoelastic Systems

    NASA Astrophysics Data System (ADS)

    Dicaprio, C. J.; Simons, M.; Williams, C. A.; Kenner, S. J.

    2006-12-01

    Geologic studies show evidence for temporal clustering of earthquakes on certain fault systems. Since post-seismic deformation may result in a variable loading rate on a fault throughout the inter-seismic period, it is reasonable to expect that the rheology of the non-seismogenic lower crust and mantle lithosphere may play a role in controlling earthquake recurrence times. Previously, the role of lithospheric rheology in the seismic cycle had been studied with a one-dimensional spring-dashpot-slider model (Kenner and Simons [2005]). In this study we use the finite element code PyLith to construct a two-dimensional continuum model of a strike-slip fault in an elastic medium overlying one or more linear Maxwell viscoelastic layers, loaded in the far field by a constant velocity boundary condition. Taking advantage of the linear properties of the model, we use the finite element solution to one earthquake as a spatio-temporal Green's function. Multiple Green's function solutions, scaled by the size of each earthquake, are then summed to form an earthquake sequence. When the shear stress on the fault reaches a predefined yield stress it is allowed to slip, relieving all accumulated shear stress. Random variation in the fault yield stress from one earthquake to the next results in a temporally clustered earthquake sequence. The amount of clustering depends on a non-dimensional number, W, called the Wallace number. For models with one viscoelastic layer, W is equal to the standard deviation of the earthquake stress drop divided by the viscosity times the tectonic loading rate. This definition of W is modified from the original one used in Kenner and Simons [2005] by using the standard deviation of the stress drop instead of the mean stress drop. We also use a new, more appropriate, metric to measure the amount of temporal clustering of the system. W is the ratio of the viscoelastic relaxation rate of the system to the tectonic loading rate of the system. For values of
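
    A toy, non-finite-element version of the slider model described above shows how the Wallace number is assembled; all values are illustrative assumptions, with the viscosity term folded into a relaxation-time proxy:

        import numpy as np

        rng = np.random.default_rng(0)

        load_rate = 1.0                    # far-field loading, stress/yr
        mean_drop, std_drop = 10.0, 2.0    # stress-drop statistics
        relax_time = 30.0                  # viscoelastic relaxation proxy, yr

        W = std_drop / (relax_time * load_rate)   # Wallace number, as above
        print("W =", W)

        # Elastic end-member: inter-event times inherit yield-stress scatter.
        drops = rng.normal(mean_drop, std_drop, 5000).clip(min=0.5)
        intervals = drops / load_rate      # time to reload to the next yield
        print("interval CV:", intervals.std() / intervals.mean())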

  10. The Effects of the Passage of Time from the 2011 Tohoku Earthquake on the Public’s Anxiety about a Variety of Hazards

    PubMed Central

    Nakayachi, Kazuya; Nagaya, Kazuhisa

    2016-01-01

    This research investigated whether the Japanese people’s anxiety about a variety of hazards, including earthquakes and nuclear accidents, has changed over time since the Tohoku Earthquake in 2011. Data from three nationwide surveys conducted in 2008, 2012, and 2015 were compared to see the change in societal levels of anxiety toward 51 types of hazards. The same two-phase stratified random sampling method was used to create the list of participants in each survey. The results showed that anxiety about earthquakes and nuclear accidents increased for a time after the Tohoku Earthquake, and then decreased over a four-year period with no severe earthquakes or nuclear accidents. It was also revealed that the anxiety level for some hazards other than earthquakes and nuclear accidents had decreased ten months after the Earthquake, and remained unchanged after the four years. Therefore, ironically, a major disaster might decrease public anxiety in general, at least for several years. PMID:27589780

  11. Rapid repair of severely earthquake-damaged bridge piers with flexural-shear failure mode

    NASA Astrophysics Data System (ADS)

    Sun, Zhiguo; Wang, Dongsheng; Du, Xiuli; Si, Bingjun

    2011-12-01

    An experimental study was conducted to investigate the feasibility of a proposed rapid repair technique for severely earthquake-damaged bridge piers with a flexural-shear failure mode. Six circular pier specimens were first tested to severe damage in flexural-shear mode and then repaired using high-fluidity early-strength concrete and carbon fiber reinforced polymers (CFRP). After about four days, the repaired specimens were tested to failure again. The seismic behavior of the repaired specimens was evaluated and compared to that of the original specimens. Test results indicate that the proposed repair technique is highly effective: both the shear strength and the lateral displacement capacity of the repaired piers increased compared to the original specimens, and the failure mechanism of the piers shifted from flexural-shear failure to ductile flexural failure. Finally, a simple design model based on the Seible formulation for post-earthquake repair design was compared to the experimental results. It is concluded that the design equation for bridge pier strengthening before an earthquake can be applied to seismic repairs after an earthquake if the shear strength contribution of the spiral bars in the repaired piers is disregarded and 1.5 times more FRP sheets are provided.

  12. Real-Time Detection of Rupture Development: Earthquake Early Warning Using P Waves From Growing Ruptures

    NASA Astrophysics Data System (ADS)

    Kodera, Yuki

    2018-01-01

    Large earthquakes with long rupture durations emit P wave energy throughout the rupture period. Incorporating late-onset P waves into earthquake early warning (EEW) algorithms could contribute to robust predictions of strong ground motion. Here I describe a technique to detect in real time P waves from growing ruptures to improve the timeliness of an EEW algorithm based on seismic wavefield estimation. The proposed P wave detector, which employs a simple polarization analysis, successfully detected P waves from strong motion generation areas of the 2011 Mw 9.0 Tohoku-oki earthquake rupture. An analysis using 23 large (M ≥ 7) events from Japan confirmed that seismic intensity predictions based on the P wave detector significantly increased lead times without appreciably decreasing the prediction accuracy. P waves from growing ruptures, being one of the fastest carriers of information on ongoing rupture development, have the potential to improve the performance of EEW systems.
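
    A common way to implement such a polarization check is an eigen-analysis of the three-component covariance matrix: P-wave particle motion is rectilinear and, at steep incidence, nearly vertical. A minimal sketch of that idea; the thresholds and names are illustrative assumptions, not the paper's detector:

        import numpy as np

        def p_polarization(z, n, e):
            """Rectilinearity and dip (deg) of the dominant particle-motion
            axis for a 3-component window (rows: Z, N, E)."""
            X = np.vstack([z, n, e])
            X = X - X.mean(axis=1, keepdims=True)
            w, v = np.linalg.eigh(np.cov(X))           # ascending eigenvalues
            rect = 1.0 - (w[0] + w[1]) / (2.0 * w[2])  # ~1 if rectilinear
            dip = np.degrees(np.arcsin(abs(v[0, 2])))  # dip of dominant axis
            return rect, dip

        # A crude trigger: flag a P arrival when rect > 0.7 and dip > 60.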

  13. Real-time Seismicity Evaluation as a Tool for the Earthquake and Tsunami Short-Term Hazard Assessment (Invited)

    NASA Astrophysics Data System (ADS)

    Papadopoulos, G. A.

    2010-12-01

    Seismic activity is a 3-D process varying in the space-time-magnitude domains. When the short-term activity in a target area deviates significantly from the usual (background) seismicity, the modes of activity may include swarms, temporary quiescence, foreshock-mainshock-aftershock sequences, doublets and multiplets. This implies that making decisions for civil protection purposes requires short-term seismic hazard assessment and evaluation. When a sizable earthquake takes place, the critical question concerns the nature of the event: is it a mainshock, or a foreshock that foreshadows the occurrence of a bigger one? Also, a seismicity increase or decrease in a target area may signify either precursory changes or just transient seismicity variations (e.g. swarms) which do not conclude with a strong earthquake. Therefore, real-time seismicity evaluation is the backbone of short-term hazard assessment. The algorithm FORMA (Foreshock-Mainshock-Aftershock) is presented, which detects and updates automatically, in near real-time, significant variations of seismicity according to the earthquake data flow from the monitoring center. The detection of seismicity variations is based on an expert system which, for a given target area, indicates the mode of seismicity from the variation of two parameters: the seismicity rate, r, and the b-value of the magnitude-frequency relation. Alert levels are produced according to the significance levels of the changes of r and b. The good performance of FORMA was verified retrospectively in several earthquake cases, e.g. for the L'Aquila, Italy, 2009 earthquake sequence (Mmax 6.3) (Papadopoulos et al., 2010). Real-time testing was executed during January 2010 with the strong earthquake activity (Mmax 5.6) in the Corinth Rift, Central Greece. Evaluation outputs were publicly documented on a nearly daily basis with successful results. Evaluation of coastal and submarine earthquake activity is also of crucial importance for the
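
    The two parameters FORMA tracks are simple to compute from a catalog stream. A minimal sketch, assuming the standard Aki (1965) maximum-likelihood b-value with a binning correction; the window length and decision rule are illustrative, not FORMA's internals:

        import numpy as np

        def b_value(mags, mc, dm=0.1):
            """Maximum-likelihood b-value (Aki, 1965), corrected for binning dm."""
            m = np.asarray(mags, dtype=float)
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

        def seismicity_rate(times_days, window_days=30.0):
            """Events per day over the trailing window (event times in days)."""
            t = np.asarray(times_days, dtype=float)
            return float((t > t.max() - window_days).sum()) / window_days

        # An expert rule might raise the alert level when r rises while b
        # drops significantly relative to the target area's background.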

  14. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus a revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
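
    One common way to pose the underlying calculation: a renewal model gives the conditional probability of rupture in the next interval, and a stress step Δτ is mapped to a clock advance Δτ/τ̇. A minimal sketch using a lognormal renewal model; the distribution choice and all values are assumptions for illustration, not the paper's inputs:

        import numpy as np
        from scipy.stats import lognorm

        def conditional_prob(mean_rec, aperiodicity, t_elapsed, dt):
            """P(event in [t, t+dt] | quiet through t) for a lognormal
            renewal model with given mean recurrence and aperiodicity (COV)."""
            s2 = np.log(1.0 + aperiodicity ** 2)
            dist = lognorm(s=np.sqrt(s2), scale=mean_rec * np.exp(-s2 / 2.0))
            F = dist.cdf
            return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

        p0 = conditional_prob(200.0, 0.5, 150.0, 30.0)   # unperturbed
        # A stress step advances the renewal clock by d_tau / tau_dot;
        # assume that advance is 10 years here:
        p1 = conditional_prob(200.0, 0.5, 160.0, 30.0)
        print(p0, p1)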

  15. Real-time determination of the worst tsunami scenario based on Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya

    2016-04-01

    In recent years, real-time tsunami inundation forecasting has been developed with the advances of dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimating the tsunami, but acquiring those observations takes time, which makes it difficult to complete real-time tsunami inundation forecasting soon enough. Rather than waiting for precise tsunami observations, we aim, from a disaster-management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of Earthquake Early Warning (EEW) that can be obtained immediately after an event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and a scaling law, we determine possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis in our method consists of the following 2 steps. (1) Searching the range of the worst scenario by calculating 90 scenarios with various strike and fault positions; from the maximum tsunami height of the 90 scenarios, we determine a narrower strike range which causes high tsunami height in the area of concern. (2) Calculating 900 scenarios that have different strike, dip, length
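
    The search step reduces to linear superposition: each candidate scenario is a weighted sum of pre-computed unit-source waveforms. A minimal sketch; the array shapes and selection criterion are illustrative assumptions, not the operational implementation:

        import numpy as np

        def scenario_waveforms(G, w):
            """Superpose unit-source tsunami Green's functions.
            G: (n_sources, n_gauges, n_times) unit-source time series;
            w: (n_sources,) amplitude of each Gaussian unit source."""
            return np.tensordot(w, G, axes=(0, 0))   # -> (n_gauges, n_times)

        def worst_scenario(G, candidates):
            """Index of the candidate maximizing peak offshore tsunami height."""
            peaks = [scenario_waveforms(G, w).max() for w in candidates]
            return int(np.argmax(peaks))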

  16. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game-theoretic approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  17. Implications of ground water chemistry and flow patterns for earthquake studies.

    PubMed

    Guangcai, Wang; Zuochen, Zhang; Min, Wang; Cravotta, Charles A; Chenglong, Liu

    2005-01-01

    Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 °C to 160 °C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas.

  18. Implications of ground water chemistry and flow patterns for earthquake studies

    USGS Publications Warehouse

    Guangcai, W.; Zuochen, Z.; Min, W.; Cravotta, C.A.; Chenglong, L.

    2005-01-01

    Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperature and circulation depths of the hot ground water are 57 °C to 160 °C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water also were evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas. Copyright © 2005 National Ground Water Association.

  19. Time-dependent neo-deterministic seismic hazard scenarios for the 2016 Central Italy earthquakes sequence

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Kossobokov, Vladimir; Romashkova, Leontina; Panza, Giuliano F.

    2017-04-01

    Predicting earthquakes and the related ground shaking is widely recognized as among the most challenging scientific problems, both for its societal relevance and for the intrinsic complexity of the problem. The development of reliable forecasting tools requires their rigorous formalization and testing, first in retrospect and then in an experimental real-time mode, which implies a careful application of statistics to data sets of limited size and differing accuracy. Accordingly, the operational issues of prospective validation and use of time-dependent neo-deterministic seismic hazard scenarios are discussed, reviewing the results of their application in Italy and surroundings. Long-term practice and the results obtained for the Italian territory in about two decades of rigorous prospective testing support the feasibility of earthquake forecasting based on the analysis of seismicity patterns at the intermediate-term middle-range scale. Italy is the only country worldwide where two independent, globally tested algorithms are simultaneously applied, namely CN and M8S, which permit dealing with multiple sets of seismic precursors and allow for a diagnosis of the time intervals when a strong event is likely to occur inside a given region. Based on the routinely updated space-time information provided by the CN and M8S forecasts, an integrated procedure has been developed that allows for the definition of time-dependent seismic hazard scenarios, through realistic modeling of ground motion by the neo-deterministic approach (NDSHA). This scenario-based methodology permits the construction, at both regional and local scale, of ground motion scenarios for the time interval when a strong event is likely to occur within the alerted areas. The CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, have been routinely updated since 2006. The issues and results from real-time testing of the integrated NDSHA scenarios are illustrated, with special

  20. Paleo-earthquake timing on the North Anatolian Fault: Where, when, and how sure are we?

    NASA Astrophysics Data System (ADS)

    Fraser, J.; Vanneste, K.; Hubert-Ferrari, A.

    2009-04-01

    The North Anatolian Fault (NAF) traces from the Karliova Triple Junction in the east 1400 km into the Aegean Sea in the west, forming a northwardly convex arc across northern Turkey. In the 20th century the NAF ruptured in an approximately east-to-west migrating sequence of large, destructive and deadly earthquakes. This migrating sequence suggests a simple relationship between crustal loading and fault rupture. A primary question remains: does the NAF always rupture in episodic bursts? To address this question we have reanalysed selected pre-existing paleoseismic investigations (PIs) from along the NAF, using Bayesian statistical modelling to determine a standardised record of the temporal probability distribution of earthquakes. A wealth of paleoseismic records concerning the NAF has accumulated over recent years, although much research remains unpublished. A significant output of this study is a tabulation of results from all the existing published paleoseismic studies on the NAF, with recalibration of the radiocarbon ages using a standardized methodology and standardized error reporting that determines the earthquake probability rather than using errors associated with individual bounding dates. We followed the approach outlined in Biasi & Weldon (1994) and in Biasi et al. (2002) to calculate the actual probability density distributions for the timing of paleoseismic events and for the recurrence intervals. Our implementation of these algorithms is reasonably fast and yields PDFs that are comparable to, but smoother than, those obtained by Markov Chain Monte Carlo type simulations (e.g., OxCal; Bronk-Ramsey, 2007). Additionally we introduce three new earthquake records from PIs we have conducted in spatial gaps in the existing data. By presenting all of this earthquake data we hope to focus further studies and help to define the distribution of earthquake risk. Because of the long historical record of earthquakes in Turkey, we can begin to address some
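
    The Biasi & Weldon-style calculation can be approximated by sampling: represent each event date by its calibrated-age PDF and difference the samples to get a recurrence-interval PDF. A minimal sketch with invented Gaussian dates, purely for illustration (real applications use calibrated radiocarbon PDFs):

        import numpy as np

        rng = np.random.default_rng(1)

        eq1 = rng.normal(1650.0, 30.0, 50_000)  # sampled age PDF, older event
        eq2 = rng.normal(1890.0, 20.0, 50_000)  # sampled age PDF, younger event
        intervals = eq2 - eq1
        intervals = intervals[intervals > 0]    # enforce stratigraphic order
        print(np.percentile(intervals, [5, 50, 95]))  # interval PDF summary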

  1. Sun, Moon and Earthquakes

    NASA Astrophysics Data System (ADS)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes for small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: the left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.98, Mb 9.0, EQ count 376); the right-hand figure plots (EMD+SEM) vs GMT timings for the same data. All 376 events, including the main event, faithfully follow the straight-line curve.

  2. The effects of the Yogyakarta earthquake at LUSI mud volcano, Indonesia

    NASA Astrophysics Data System (ADS)

    Lupi, M.; Saenger, E. H.; Fuchs, F.; Miller, S. A.

    2013-12-01

    The M6.3 Yogyakarta earthquake shook Central Java on May 27th, 2006. Forty-seven hours later, hot mud burst out at the surface near Sidoarjo, approximately 250 km from the earthquake epicentre. The mud eruption continued and gave rise to LUSI, the youngest mud volcanic system on Earth. Since the beginning of the eruption, approximately 30,000 people have lost their homes and 13 people have died due to the mud flooding. The causes that initiated the eruption are still debated and rest on different geological observations. The earthquake-triggering hypothesis is supported by the evidence that, at the time of the earthquake, ongoing drilling operations experienced a loss of drilling mud downhole. In addition, the eruption of the mud began only 47 hours after the Yogyakarta earthquake, and the mud reached the surface at different locations aligned along the Watukosek fault, a strike-slip fault upon which LUSI resides. Moreover, the Yogyakarta earthquake also affected the volcanic activity of Mt. Semeru, located as far from the epicentre as LUSI. However, the drilling-triggering hypothesis points out that the earthquake was too far from LUSI to induce relevant stress changes at depth, and highlights that the upwelling fluids that reached the surface first emerged only 200 m from the drilling rig operating at the time. Hence, was LUSI triggered by the earthquake or by drilling operations? We conducted a seismic wave propagation study on a geological model based on vp, vs, and density values for the different lithologies and on seismic profiles of the crust beneath LUSI. Our analysis shows compelling evidence for the effects produced by the passage of seismic waves through the geological formations and highlights the importance of the overall geological structure, which focused and reflected the incoming seismic energy.

  3. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which the probability of an event is small shortly after the past one and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. The first is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." In such a process a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallett Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so that the inter-event times have a uniform distribution when the memorylessness property holds. The second is a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. it depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so
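
    The inverse-CDF device mentioned above is easy to make concrete: under a memoryless (exponential) model, transformed inter-event times should be uniform, which a KS test can check. A minimal sketch; note the rate is fitted from the same data, which slightly biases the nominal p-value, and the input times are invented:

        import numpy as np
        from scipy.stats import kstest

        def memoryless_test(inter_event_times):
            """Probability-integral transform of inter-event times under an
            exponential fit, then a KS test against uniformity."""
            t = np.asarray(inter_event_times, dtype=float)
            lam = 1.0 / t.mean()            # MLE rate of the exponential model
            u = 1.0 - np.exp(-lam * t)      # uniform on [0,1] if memoryless
            return kstest(u, "uniform")     # small p-value rejects the model

        print(memoryless_test([130.0, 45.0, 210.0, 60.0, 330.0, 90.0]))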

  4. Multifractal Approach to Time Clustering of Earthquakes. Application to Mt. Vesuvio Seismicity

    NASA Astrophysics Data System (ADS)

    Codano, C.; Alonzo, M. L.; Vilardo, G.

    The clustering structure of Vesuvian earthquakes is investigated by means of statistical tools: the inter-event time distribution, the running mean, and multifractal analysis. The first cannot clearly distinguish between a Poissonian process and a clustered one, owing to the difficulty of discriminating between an exponential distribution and a power-law one. The running-mean test reveals the clustering of the earthquakes, but loses information about the structure of the distribution at global scales. The multifractal approach can illuminate the clustering at small scales, while the global behaviour remains Poissonian. The clustering of the events is then interpreted in terms of diffusive processes of stress in the Earth's crust.

  5. Earthquake statistics, spatiotemporal distribution of foci and source mechanisms - a key to understanding of the West Bohemia/Vogtland earthquake swarms

    NASA Astrophysics Data System (ADS)

    Horálek, Josef; Čermáková, Hana; Fischer, Tomáš

    2016-04-01

    Earthquake swarms are sequences of numerous events closely clustered in space and time that do not have a single dominant mainshock. A few of the largest events in a swarm reach similar magnitudes and usually occur throughout the course of the earthquake sequence. These attributes differentiate earthquake swarms from ordinary mainshock-aftershock sequences. Earthquake swarms occur worldwide, in diverse geological units; they typically accompany volcanic activity at tectonic-plate margins but also occur in intracontinental areas where strain from tectonic-plate movement is small. The origin of earthquake swarms is still unclear. West Bohemia-Vogtland represents one of the most active intraplate earthquake-swarm areas in Europe, characterised by the frequent recurrence of ML < 4.0 swarms and by high activity of crustal fluids. The Nový Kostel focal zone (NK) dominates the recent seismicity: there were swarms in 1997, 2000, 2008 and 2011, and a striking non-swarm activity (mainshock-aftershock sequences) up to magnitude ML = 4.5 in May to August 2014. The swarms and the 2014 mainshock-aftershock sequences are located close to each other at depths between 6 and 13 km. The frequency-magnitude distributions of all the swarms show a bimodal-like character: most events obey the b-value = 1.0 distribution, but a group of the largest events departs significantly from it. All the ML > 2.8 swarm events are located in a few dense clusters, which implies step-by-step rupturing of one or a few asperities during the individual swarms. The source mechanism patterns (moment-tensor description, MT) of the individual swarms indicate several families of mechanisms, which fit the geometry of the respective fault segments well. MTs of the most

  6. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the Landers earthquake

    USGS Publications Warehouse

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because non-foreshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
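
    The modified equation's behavior can be conveyed with a schematic rate ratio in which prior-mainshock aftershocks, decaying by the modified Omori law, inflate the non-foreshock population. All rates below are invented for illustration and are not the paper's values:

        def omori_rate(t_days, k=10.0, c=0.05, p=1.0):
            """Aftershock rate of the earlier mainshock (modified Omori law)."""
            return k / (t_days + c) ** p

        def p_foreshock(t_days, rate_fore=0.02, rate_bg=0.10):
            """Schematic P(candidate event is a foreshock) near the fault."""
            return rate_fore / (rate_fore + rate_bg + omori_rate(t_days))

        # The probability recovers as the aftershock sequence decays:
        for t in (1.0, 10.0, 100.0):
            print(t, round(p_foreshock(t), 4))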

  7. Near real-time aftershock hazard maps for earthquakes

    NASA Astrophysics Data System (ADS)

    McCloskey, J.; Nalbant, S. S.

    2009-04-01

    Stress interaction modelling is routinely used to explain the spatial relationships between earthquakes and their aftershocks. On 28 October 2008 an M6.4 earthquake occurred near the Pakistan-Afghanistan border, killing several hundred people and causing widespread devastation. A second M6.4 event occurred 12 hours later, 20 km to the south-east. By making some well-supported assumptions concerning the source event and the geometry of any likely triggered event, it was possible to map the areas most likely to experience further activity. Using Google Earth, it would further have been possible to identify particular settlements in the source area that were particularly at risk and to publish their locations globally within about 3 hours of the first earthquake. Such actions could have significantly focused the initial emergency response management. We argue for routine prospective testing of such forecasts and for dialogue between social and physical scientists and emergency response professionals around the practical application of these techniques.

  8. The Pacific Tsunami Warning Center's Response to the Tohoku Earthquake and Tsunami

    NASA Astrophysics Data System (ADS)

    Weinstein, S. A.; Becker, N. C.; Shiro, B.; Koyanagi, K. K.; Sardina, V.; Walsh, D.; Wang, D.; McCreery, C. S.; Fryer, G. J.; Cessaro, R. K.; Hirshorn, B. F.; Hsu, V.

    2011-12-01

    The largest Pacific basin earthquake in 47 years, and also the largest magnitude earthquake since the Sumatra 2004 earthquake, struck off of the east coast of the Tohoku region of Honshu, Japan at 5:46 UTC on 11 March 2011. The Tohoku earthquake (Mw 9.0) generated a massive tsunami with runups of up to 40m along the Tohoku coast. The tsunami waves crossed the Pacific Ocean causing significant damage as far away as Hawaii, California, and Chile, thereby becoming the largest, most destructive tsunami in the Pacific Basin since 1960. Triggers on the seismic stations at Erimo, Hokkaido (ERM) and Matsushiro, Honshu (MAJO), alerted Pacific Tsunami Warning Center (PTWC) scientists 90 seconds after the earthquake began. Four minutes after its origin, and about one minute after the earthquake's rupture ended, PTWC issued an observatory message reporting a preliminary magnitude of 7.5. Eight minutes after origin time, the Japan Meteorological Agency (JMA) issued its first international tsunami message in its capacity as the Northwest Pacific Tsunami Advisory Center. In accordance with international tsunami warning system protocols, PTWC then followed with its first international tsunami warning message using JMA's earthquake parameters, including an Mw of 7.8. Additional Mwp, mantle wave, and W-phase magnitude estimations based on the analysis of later-arriving seismic data at PTWC revealed that the earthquake magnitude reached at least 8.8, and that a destructive tsunami would likely be crossing the Pacific Ocean. The earthquake damaged the nearest coastal sea-level station located 90 km from the epicenter in Ofunato, Japan. The NOAA DART sensor situated 600 km off the coast of Sendai, Japan, at a depth of 5.6 km recorded a tsunami wave amplitude of nearly two meters, making it by far the largest tsunami wave ever recorded by a DART sensor. Thirty minutes later, a coastal sea-level station at Hanasaki, Japan, 600 km from the epicenter, recorded a tsunami wave amplitude of

  9. Real-Time Integration of Positioning and Accelerometer Data for Early Earthquake Warning on Canada's West Coast

    NASA Astrophysics Data System (ADS)

    Biffard, B.; Rosenberger, A.; Pirenne, B.; Valenzuela, M.; MacArthur, M.

    2017-12-01

    Ocean Networks Canada (ONC) operates ocean and coastal observatories on all three of Canada's coasts, and in particular across the Cascadia subduction zone. The data are acquired, parsed, calibrated and archived by ONC's data management system (Oceans 2.0), with real-time event detection, reaction and access capabilities. As such, ONC is in a unique position to develop early warning systems for earthquakes, near- and far-field tsunamis and other events. ONC is leading the development of a system to alert southwestern British Columbia of an impending Cascadia subduction zone earthquake on behalf of the provincial government and with the support of the Canadian federal government. As in other earthquake early warning systems, an array of accelerometers is used to detect the initial earthquake P-waves. This can provide 5-60 seconds of warning to subscribers, who can then take action such as stopping trains and surgeries, closing valves, taking cover, etc. To maximize the detection capability and the time available to react to a notification, instruments are placed both underwater and on land on Vancouver Island. A novel feature of ONC's system is, for land-based sites, the combination of real-time satellite positioning (GNSS) and accelerometer data in the calculations to improve earthquake intensity estimates. This results in higher accuracy, dynamic range and responsiveness than either type of sensor is capable of alone. P-wave detections and displacement data are sent from remote stations to a data centre that must calculate epicentre locations and magnitude. The latter are then delivered to subscribers with client software that, given their position, will calculate arrival time and intensity. All of this must occur with very high standards for latency, reliability and accuracy.

  10. Origin of Human Losses due to the Emilia Romagna, Italy, M5.9 Earthquake of 20 May 2012 and their Estimate in Real Time

    NASA Astrophysics Data System (ADS)

    Wyss, M.

    2012-12-01

    Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake of May 20th in the Emilia Romagna region, the real-time epicenter estimates of the GFZ and the USGS differed from the final location by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss-estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the final epicenter released by INGV had been used. These four numbers, being small, do not differ statistically; thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV has reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that, in the 17 instances with significant differences, the calculated intensities were on average too high by one unit. By assuming higher-than-average attenuation within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher-than-average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, the attenuation adjustment in this case decreased the discrepancy between calculated and reported deaths by approximately a factor of two. The source of the fatalities is

  11. CISN ShakeAlert Earthquake Early Warning System Monitoring Tools

    NASA Astrophysics Data System (ADS)

    Henson, I. H.; Allen, R. M.; Neuhauser, D. S.

    2015-12-01

    CISN ShakeAlert is a prototype earthquake early warning system being developed and tested by the California Integrated Seismic Network. The system has recently been expanded to support redundant data processing and communications. It now runs on six machines at three locations with ten Apache ActiveMQ message brokers linking together 18 waveform processors, 12 event association processes and 4 Decision Module alert processes. The system ingests waveform data from about 500 stations and generates many thousands of triggers per day, from which a small portion produce earthquake alerts. We have developed interactive web browser system-monitoring tools that display near real time state-of-health and performance information. This includes station availability, trigger statistics, communication and alert latencies. Connections to regional earthquake catalogs provide a rapid assessment of the Decision Module hypocenter accuracy. Historical performance can be evaluated, including statistics for hypocenter and origin time accuracy and alert time latencies for different time periods, magnitude ranges and geographic regions. For the ElarmS event associator, individual earthquake processing histories can be examined, including details of the transmission and processing latencies associated with individual P-wave triggers. Individual station trigger and latency statistics are available. Detailed information about the ElarmS trigger association process for both alerted events and rejected events is also available. The Google Web Toolkit and Map API have been used to develop interactive web pages that link tabular and geographic information. Statistical analysis is provided by the R-Statistics System linked to a PostgreSQL database.

  12. Seismicity around Parkfield correlates with static shear stress changes following the 2003 Mw6.5 San Simeon earthquake

    USGS Publications Warehouse

    Meng, Xiaoteng; Peng, Zhigang; Hardebeck, Jeanne L.

    2013-01-01

    Earthquakes trigger other earthquakes, but the physical mechanism of the triggering is currently debated. Most studies of earthquake triggering rely on earthquakes listed in catalogs, which are known to be incomplete around the origin times of large earthquakes and therefore missing potentially triggered events. Here we apply a waveform matched-filter technique to systematically detect earthquakes along the Parkfield section of the San Andreas Fault from 46 days before to 31 days after the nearby 2003 Mw6.5 San Simeon earthquake. After removing all possible false detections, we identify ~8 times more earthquakes than in the Northern California Seismic Network catalog. The newly identified events along the creeping section of the San Andreas Fault show a statistically significant decrease following the San Simeon main shock, which correlates well with the negative static stress changes (i.e., stress shadow) cast by the main shock. In comparison, the seismicity rate around Parkfield increased moderately where the static stress changes are positive. The seismicity rate changes correlate well with the static shear stress changes induced by the San Simeon main shock, suggesting a low friction in the seismogenic zone along the Parkfield section of the San Andreas Fault.
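
    The detection engine in such studies is a normalized cross-correlation of template events against continuous data, with detections declared at some multiple of the correlation trace's median absolute deviation. A minimal (slow, loop-based) sketch; the threshold is a typical assumed choice, not the paper's exact pipeline:

        import numpy as np

        def matched_filter(template, data, thresh=8.0):
            """Indices where normalized correlation exceeds thresh * MAD."""
            nt = len(template)
            tpl = (template - template.mean()) / template.std()
            cc = np.empty(len(data) - nt + 1)
            for i in range(cc.size):
                win = data[i:i + nt]
                cc[i] = tpl.dot(win - win.mean()) / (nt * win.std() + 1e-12)
            mad = np.median(np.abs(cc - np.median(cc)))
            return np.where(cc > thresh * mad)[0]   # candidate detections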

  13. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searches over a large database of observations can lead to reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases, reducing the processing time of the nearest-neighbor searches used for prediction. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall warning delivery time for EEW.
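
    The data-structure change is simple to demonstrate: build the tree once offline, then answer each real-time query with a k-nearest-neighbor lookup instead of an exhaustive scan. A minimal sketch with invented features and targets; the Gutenberg Algorithm's actual feature set is not reproduced here:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        features = rng.random((100_000, 9))  # e.g., filter-bank amplitudes
        targets = rng.random(100_000)        # e.g., log PGV per record

        tree = cKDTree(features)             # built once, offline

        def predict(query, k=30):
            """Mean target over the k nearest database records."""
            _, idx = tree.query(query, k=k)
            return targets[idx].mean()

        print(predict(rng.random(9)))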

  14. Sensing the earthquake

    NASA Astrophysics Data System (ADS)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to build on these results in a choreographic model, with the aim of converting earthquake sound into a visual dance system that could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for the simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  15. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8, and six earthquakes larger than Mw 8.5, since 2004 has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843 Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that, without incorporation of the best available catalogs of historical earthquakes, seismic hazard and/or the maximum possible magnitude will likely be significantly underestimated in many regions, including parts of the Caribbean.

  16. Shallow moonquakes - How they compare with earthquakes

    NASA Technical Reports Server (NTRS)

    Nakamura, Y.

    1980-01-01

    Of three types of moonquakes strong enough to be detectable at large distances - deep moonquakes, meteoroid impacts and shallow moonquakes - only shallow moonquakes are similar in nature to earthquakes. A comparison of various characteristics of moonquakes with those of earthquakes indeed shows a remarkable similarity between shallow moonquakes and intraplate earthquakes: (1) their occurrences are not controlled by tides; (2) they appear to occur in locations where there is evidence of structural weaknesses; (3) the relative abundances of small and large quakes (b-values) are similar, suggesting similar mechanisms; and (4) even the levels of activity may be close. The shallow moonquakes may be quite comparable in nature to intraplate earthquakes, and they may be of similar origin.

  17. Flashsourcing or Real-Time Mapping of Earthquake Effects from Instantaneous Analysis of the EMSC Website Traffic

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Gilles, S.; Roussel, F.

    2010-12-01

    Earthquake response efforts are often hampered by the lack of timely and reliable information on the earthquake impact. Rapid detection of damaging events and production of actionable information for emergency response personnel within minutes of their occurrence are essential to mitigate the human impacts of earthquakes. Economically developed countries deploy dense real-time accelerometric networks in regions of high seismic hazard to constrain scenarios from in-situ data. A cheaper alternative, named flashsourcing, is based on implicit data derived from analysing the visits of eyewitnesses, the first people informed, to websites offering real-time earthquake information. We demonstrated in 2004 that widely felt earthquakes generate a surge of traffic, known as a flashcrowd, caused by people rushing to websites such as the EMSC's to find information about the shaking they have just felt. With detailed traffic analysis and metrics, widely felt earthquakes can be detected within one minute of the earthquake's occurrence. In addition, the geographical area where the earthquake has been felt is automatically mapped within 5 minutes by statistically analysing the IP locations of the eyewitnesses, without using any seismological data. These results have been validated on more than 150 earthquakes by comparing the automatic felt maps with the felt area derived from macroseismic questionnaires. In practice, the felt maps are available before the first location is published by the EMSC. We have also demonstrated the capacity to rapidly detect and map areas of widespread damage by detecting when visitors suddenly end their sessions on the website en masse. This has been successfully applied to time and map the massive power failure which plunged a large part of Chile into darkness in March 2010. If damage to power and communication lines cannot be discriminated from damage to buildings, the absence of sudden session closures precludes the possibility of heavy
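
    The flashcrowd idea can be caricatured in a few lines: compare the current hit rate with a trailing baseline and flag large exceedances. A minimal sketch; the window and factor are illustrative assumptions, not the EMSC's operational settings:

        import numpy as np

        def flashcrowd_alerts(hits_per_sec, baseline_s=600, factor=5.0):
            """Seconds at which traffic exceeds factor x the trailing median."""
            x = np.asarray(hits_per_sec, dtype=float)
            alerts = []
            for i in range(baseline_s, len(x)):
                base = np.median(x[i - baseline_s:i]) + 1.0  # avoid zero base
                if x[i] > factor * base:
                    alerts.append(i)
            return alerts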

  18. Space-Time Earthquake Prediction: The Error Diagrams

    NASA Astrophysics Data System (ADS)

    Molchan, G.

    2010-08-01

    The quality of earthquake prediction is usually characterized by a two-dimensional diagram of n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of the space-time alarm. Unlike the time-prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets {n, τ_w}, in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone of D in the (n, τ_w) plane when the local target rates are known only roughly. These facts are important for the correct interpretation of (n, τ_w) diagrams when we discuss the prediction capability of the data or of prediction methods.
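
    A single strategy's position on the diagram is easy to compute, which also makes the diagonal claim concrete: for cell-wise random alarms, the expected point satisfies n + τ_w = 1 for every alarm-probability choice exactly when w is proportional to the local target rates. A minimal sketch under that reading (an interpretation for illustration, not the paper's notation):

        import numpy as np

        def error_diagram_point(alarm, events, w):
            """(n, tau_w) for boolean alarm/event arrays over space-time
            cells and a weight vector w over the same cells."""
            n = 1.0 - (alarm & events).sum() / events.sum()  # miss rate
            tau_w = (w * alarm).sum() / w.sum()              # weighted alarm share
            return n, tau_w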

  19. Combining Real-time Seismic and Geodetic Data to Improve Rapid Earthquake Information

    NASA Astrophysics Data System (ADS)

    Murray, M. H.; Neuhauser, D. S.; Gee, L. S.; Dreger, D. S.; Basset, A.; Romanowicz, B.

    2002-12-01

    The Berkeley Seismological Laboratory operates seismic and geodetic stations in the San Francisco Bay area and northern California for earthquake and deformation monitoring. The seismic systems, part of the Berkeley Digital Seismic Network (BDSN), include strong motion and broadband sensors and 24-bit dataloggers. The data from 20 GPS stations, part of the Bay Area Regional Deformation (BARD) network of more than 70 stations in northern California, are acquired in real time. We have developed methods to acquire GPS data at 12 stations that are collocated with the seismic systems using the seismic dataloggers, which have large on-site data buffer and storage capabilities, to merge them with the seismic data stream in MiniSeed format, and to continuously stream both data types using reliable frame relay and/or radio modem telemetry. Currently, the seismic data are incorporated into the Rapid Earthquake Data Integration (REDI) project to provide notification of earthquake magnitude, location, moment tensor, and strong motion information for hazard mitigation and emergency response activities. The geodetic measurements can provide complementary constraints on earthquake faulting, including the location and extent of the rupture plane, unambiguous resolution of the nodal plane, and the distribution of slip on the fault plane, which can be used, for example, to refine strong motion shake maps. We are developing methods to rapidly process the geodetic data to monitor transient deformation, such as coseismic station displacements, and to combine this information with the seismic observations to improve finite-fault characterization of large earthquakes. The GPS data are currently processed at hourly intervals with 2-cm precision in horizontal position, and we are beginning a pilot project in the Bay Area, in collaboration with the California Spatial Reference Center, to do epoch-by-epoch processing with greater precision.

  20. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Gulf of Tehuantepec and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show that the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress
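
    The two-rupture-lengths rule quoted above is easy to state in code. The sketch below uses rough, illustrative numbers loosely based on the distances given in the abstract; it is not the authors' analysis.

        # The "two rupture lengths" aftershock rule quoted above, with rough numbers
        # loosely based on the abstract (not the authors' analysis).

        def is_traditional_aftershock(distance_km, rupture_length_km):
            return distance_km <= 2.0 * rupture_length_km

        RUPTURE_LENGTH_KM = 150.0  # order of the M8.2 Tehuantepec rupture length
        for name, dist_km in [("M6.1 Gulf of Tehuantepec", 100.0),
                              ("M6.1 Ixtepec", 200.0),
                              ("M7.1 Puebla", 600.0)]:
            print(name, is_traditional_aftershock(dist_km, RUPTURE_LENGTH_KM))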

  1. The nature of earthquake prediction

    USGS Publications Warehouse

    Lindh, A.G.

    1991-01-01

    Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region of Japan; only time will tell how much progress will be possible.

  2. THE MAY 23RD 2007 GULF OF MEXICO EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Yamamoto, J.; Jimenez, Z.

    2009-12-01

    On the 23rd of May 2007 at 14:09 local time (19:09 UT), an isolated earthquake of local magnitude 5.2 occurred offshore northern Veracruz in the Gulf of Mexico. The seismic focus was located, using local and regional data, at 20.11° N, 97.38° W and 7.8 km depth, about 175 km from Tuxpan, a city of 134,394 inhabitants. The earthquake was widely felt along the coastal states of southern Tamaulipas and Veracruz, where several schools and public buildings were evacuated. Neither the Laguna Verde nuclear plant, located approximately 245 km from the epicenter, nor the PEMEX petroleum company reported damage. First-motion data indicate that the rupture occurred as strike-slip faulting along two possible planes, one oriented roughly north-south and the other east-west. In the present paper a global analysis of the earthquake is made to elucidate its origin and its possible correlation with known geotectonic features of the region.

  3. Analysis in the natural time domain of geoelectric time series monitored prior to two strong earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Ramírez-Rojas, A.; Flores-Marquez, L. E.

    2009-12-01

    The short-time prediction of seismic phenomena is currently an important problem for the scientific community. In particular, the electromagnetic processes associated with seismic events have attracted great interest since the VAN method was implemented. The most important features of this methodology are the seismic electric signals (SES) observed prior to strong earthquakes. SES have been observed in electromagnetic series linked to earthquakes in Greece, Japan and Mexico. By means of the so-called natural time domain, introduced by Varotsos et al. (2001), signals of dichotomic nature observed in different systems, such as SES and ionic current fluctuations in membrane channels, can be characterized. In this work we analyze SES observed in geoelectric time series monitored in Guerrero, México. Our analysis concerns two strong earthquakes that occurred on October 24, 1993 (M=6.6) and September 14, 1995 (M=7.3). The time series of the first displayed a seismic electric signal six days before the main shock, and in the second case the time series displayed dichotomous-like fluctuations some months before the earthquake. We present the first results of the analysis in the natural time domain for the two cases, which seem to agree with the results reported by Varotsos. P. Varotsos, N. Sarlis, and E. Skordas, Practica of the Athens Academy 76, 388 (2001).
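
    For readers unfamiliar with the transformation, the sketch below shows the natural time analysis as we understand it from Varotsos et al. (2001): the k-th of N events receives natural time χk = k/N and an energy-based weight pk, and the variance κ1 of χ under these weights serves as the order parameter. The synthetic energies are placeholders, not the Guerrero SES data.

        import numpy as np

        # Natural time transformation as we understand it from Varotsos et al.
        # (2001): the k-th of N events gets chi_k = k/N and weight p_k proportional
        # to its energy Q_k; kappa_1 is the variance of chi under these weights.
        # The synthetic energies below are placeholders, not the Guerrero SES data.

        def kappa1(energies):
            q = np.asarray(energies, dtype=float)
            n = q.size
            chi = np.arange(1, n + 1) / n      # natural time chi_k = k/N
            p = q / q.sum()                    # normalized energies p_k
            return np.sum(p * chi**2) - np.sum(p * chi)**2

        rng = np.random.default_rng(1)
        print(kappa1(rng.exponential(size=200)))   # dimensionless order parameter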

  4. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has since been extended in time: only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated from those parameters. The results shown, including an animated data video, are a first approximation towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses originating at the subduction zone.

  5. GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone

    USGS Publications Warehouse

    Stuart, W.D.

    2001-01-01

    Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
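
    The trade-off at the heart of this argument is simple arithmetic: recurrence time is roughly the coseismic slip divided by the interseismic slip rate, so models with higher deep slip rates imply shorter recurrence. Both numbers in the sketch below are illustrative assumptions, not values from Newman et al. (1999).

        # The trade-off in one line: recurrence time ~ coseismic slip divided by
        # interseismic slip rate. Both values are illustrative assumptions, not
        # numbers from Newman et al. (1999).

        coseismic_slip_m = 8.0    # assumed slip in an 1811-1812-type event
        for slip_rate_mm_yr in (2.0, 5.0, 10.0):
            t_yr = coseismic_slip_m / (slip_rate_mm_yr / 1000.0)
            print(f"slip rate {slip_rate_mm_yr:4.1f} mm/yr -> recurrence ~{t_yr:.0f} yr")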

  6. Correlation between elastic energy density and deep earthquakes distribution

    NASA Astrophysics Data System (ADS)

    Gunawardana, P. M.; Morra, G.

    2017-05-01

    The mechanism at the origin of earthquakes below 30 km remains elusive, as these events cannot be explained by brittle frictional processes. In this work we focus on the global distribution of earthquake frequency versus depth, from ~50 km to 670 km depth. We develop a numerical model of self-driven subduction by solving the non-homogeneous Stokes equation using the particle-in-cell method in combination with a conservative finite difference scheme, here solved for the first time using Python and NumPy only. We show that most of the elastic energy is stored in the slab core and that it is strongly correlated with the earthquake frequency-depth distribution for a wide range of lithosphere and lithosphere-core viscosities. According to our results, we suggest that (1) slab bending at the bottom of the upper mantle causes the peak of the earthquake frequency-depth distribution observed at mantle transition depths; and (2) the presence of a highly viscous, stiff core inside the lithosphere generates an elastic energy distribution that better fits the exponential decay observed at intermediate depths.

  7. Earthquakes, September-October 1993

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    The fatalities in the United States were caused by two earthquakes in southern Oregon on September 21. These earthquakes, both with magnitude 6.0 and separated in time by about 2 hrs, led to the deaths of two people. One of these deaths was apparently due to a heart attack induced by the earthquake.

  8. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  9. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from prehistoric times to the present. The associations of presumed precursors with large earthquakes have often been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or is in fact absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  10. The origin of high frequency radiation in earthquakes and the geometry of faulting

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2004-12-01

    In a seminal paper of 1967, Keiiti Aki discovered the scaling law of earthquake spectra and showed that, among other things, the high-frequency decay was of omega-squared type. This implies that high-frequency displacement amplitudes are proportional to a characteristic length of the fault, and that radiated energy scales with the cube of the fault dimension, just like seismic moment. Later, in the seventies, it was found that a simple explanation for this frequency dependence of spectra was that high frequencies were generated by stopping phases, waves emitted by changes in speed of the rupture front as it propagates along the fault; but this did not explain the scaling of high-frequency waves with fault length. The earthquake energy balance is such that, ignoring attenuation, radiated energy is the change in strain energy minus the energy spent overcoming friction. Until recently the latter was considered to be a material property that did not scale with fault size. Yet, in another classical paper, Aki and Das estimated in the late 70s that the energy release rate also scaled with earthquake size, because earthquakes were often stopped by barriers or changed rupture speed at them. This observation was independently confirmed in the late 90s by Ide and Takeo and by Olsen et al., who found that energy release rates for Kobe and Landers were on the order of 1 MJ/m2, implying that Gc necessarily scales with earthquake size; if it were a material property, small earthquakes would never occur. Using both simple analytical and numerical models developed by Adda-Bedia and by Aochi and Madariaga, we examine the consequences of these observations for the scaling of high-frequency waves with fault size. We demonstrate, using classical results by Kostrov, Husseini and Freund, that high-frequency energy flow measures the energy release rate and is generated when ruptures change velocity (both direction and speed) at fault kinks or jogs. Our results explain why super-shear ruptures are
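
    The omega-squared spectral shape discussed above is often written, in its simplest Brune-type parameterization (our illustrative choice, not necessarily Aki's original form), as amplitude = Ω0 / (1 + (f/fc)^2), which decays as f^-2 above the corner frequency fc:

        import numpy as np

        # Brune-type omega-squared spectrum (our illustrative parameterization):
        # displacement amplitude ~ Omega_0 / (1 + (f/fc)^2), falling off as f^-2
        # above the corner frequency fc.

        def omega_squared_spectrum(f, omega0, fc):
            return omega0 / (1.0 + (f / fc) ** 2)

        f = np.logspace(-1, 2, 7)   # 0.1 to 100 Hz
        for fi, si in zip(f, omega_squared_spectrum(f, omega0=1.0, fc=1.0)):
            print(f"f = {fi:7.2f} Hz   amplitude = {si:.2e}")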

  11. Retrospective stress-forecasting of earthquakes

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Crampin, Stuart

    2015-04-01

    Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400 km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland is on an extension of the Mid-Atlantic Ridge, where two transform zones uniquely run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, and are the only place worldwide where SWS can be routinely monitored. Elsewhere, SWS must be monitored above temporally active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observations of changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios, allowing stress-accumulation and stress-relaxation to be identified. Monitoring SWS in SW Iceland in 1988, stress-accumulation before an impending earthquake was recognised, and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10th November 1988, EU emailed IMO that a M5 earthquake could occur soon on a seismically active fault plane where seismicity was still continuing following a M5.1 earthquake six months earlier. Three days later, IMO emailed EU that a M5 earthquake had just occurred on the specified fault plane. We suggest this is a successful earthquake stress-forecast, and we refer to the procedure as stress-forecasting earthquakes, as opposed to predicting or forecasting, to emphasise the different formalism. Lack of funds has prevented us from monitoring SWS in Iceland seismograms; however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes which have enabled us to

  12. Modelling the Time Dependence of Frequency Content of Long-period Volcanic Earthquakes

    NASA Astrophysics Data System (ADS)

    Jousset, P.; Neuberg, J. W.

    2001-12-01

    Broad-band seismic networks provide a powerful tool for the observation and analysis of volcanic earthquakes. The amplitude spectrogram allows us to follow the frequency content of these signals with time. Observed amplitude spectrograms of long-period volcanic earthquakes display distinct spectral lines, sometimes varying by several Hertz over time spans of minutes to hours. We first present several examples associated with various phases of volcanic activity at Soufrière Hills volcano, Montserrat. Then, we present and discuss two mechanisms to explain such frequency changes in the spectrograms: (i) a change of physical properties within the magma and (ii) a change in the triggering frequency of repeated sources within the conduit. We use 2D and 3D finite-difference modelling methods to compute the propagation of seismic waves in simplified volcanic structures: (i) we model the gliding spectral lines by introducing continuously changing magma properties during the wavefield computation; (ii) we explore the resulting pressure distribution within the conduit and its potential role in triggering further events. We obtain constraints on both the amplitude and the time-scales of changes in magma properties that are required to model gliding lines in amplitude spectrograms.
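
    An amplitude spectrogram of the kind described above can be produced with standard tools. The sketch below synthesizes a gliding spectral line, a harmonic whose frequency drifts from 2 to 4 Hz over ten minutes, and recovers the drift; all parameters are arbitrary.

        import numpy as np
        from scipy.signal import spectrogram

        # A synthetic "gliding line": a harmonic whose frequency drifts from 2 Hz
        # to 4 Hz over ten minutes, analysed with an amplitude spectrogram as
        # described above. All parameters are arbitrary.

        fs = 50.0                              # sampling rate (Hz)
        t = np.arange(0, 600.0, 1.0 / fs)
        f_inst = 2.0 + 2.0 * t / t[-1]         # instantaneous frequency, 2 -> 4 Hz
        x = np.sin(2.0 * np.pi * np.cumsum(f_inst) / fs)

        f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024)
        peak = f[Sxx.argmax(axis=0)]           # dominant frequency per time window
        print(peak[:3], "...", peak[-3:])      # glides upward from ~2 Hz to ~4 Hz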

  13. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with the tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. The relationship is also reasonable given the well-known relationship between stress and the b-value. This suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
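
    The b-value computation underlying such an analysis is standard. The sketch below uses Aki's maximum-likelihood estimate, b = log10(e) / (mean(M) - Mc), on two synthetic catalogues in which the high-tidal-stress subset is given a lower b-value by construction; these are not the study's catalogues.

        import numpy as np

        # Aki's maximum-likelihood b-value, b = log10(e) / (<M> - Mc), applied to
        # two synthetic catalogues in which the high-tidal-stress subset is given a
        # lower b-value by construction (b=0.8 vs b=1.0). Not the study's data.

        def b_value(mags, mc):
            mags = np.asarray(mags)
            mags = mags[mags >= mc]
            return np.log10(np.e) / (mags.mean() - mc)

        rng = np.random.default_rng(2)
        mc = 2.0
        low_stress = mc + rng.exponential(1.0 / (1.0 * np.log(10)), size=5000)
        high_stress = mc + rng.exponential(1.0 / (0.8 * np.log(10)), size=5000)
        print(b_value(low_stress, mc), b_value(high_stress, mc))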

  14. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results for earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes on a crustal scale. Constant stress and constant strain rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to experimental data allowed us to validate our results and provide for the first time a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to observe any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.
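
    For reference, the temporal ETAS conditional intensity in its standard Ogata (1988) form can be written out directly; the parameter values below are generic textbook-scale choices, not those fitted in the study.

        import numpy as np

        # Temporal ETAS conditional intensity in its standard Ogata (1988) form:
        # lambda(t) = mu + sum over past events of K*exp(alpha*(Mi-Mc))*(t-ti+c)^-p.
        # Parameter values are generic textbook-scale choices, not fitted ones.

        def etas_intensity(t, times, mags, mu=0.2, K=0.02, alpha=1.0,
                           c=0.01, p=1.1, mc=3.0):
            past = times < t
            dt = t - times[past]
            return mu + np.sum(K * np.exp(alpha * (mags[past] - mc)) * (dt + c) ** -p)

        times = np.array([0.0, 1.0, 1.1])   # event times (days)
        mags = np.array([5.5, 4.0, 3.5])
        for t in (0.5, 1.2, 5.0):
            print(t, etas_intensity(t, times, mags))  # events per day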

  15. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    , respectively, as a function of the length of the P-wave window. The entire rupture process of the Tohoku earthquake lasted more than 120 seconds, as shown by the source time functions obtained by several authors. When a 3-second window is used to measure Pd and τc, the result is an obvious underestimation of the event size and final PGV. However, as the time window increases up to 27-30 seconds, the measured values of Pd and τc become comparable with those expected for a magnitude M≥8.5 earthquake, according to the τc vs. M and the PGV vs. Pd relationships obtained in a previous work. Since we did not observe any saturation effect for the predominant period and peak displacement measured within a 30-second P-wave window, we infer that, at least from a theoretical point of view, the estimation of earthquake damage potential through the early warning parameters is still feasible for large events, provided that a longer time window is used for parameter measurement. The off-line analysis of the Tohoku event records shows that reliable estimations of the damage potential could have been obtained 40-50 seconds after the origin time, by updating the measurements of the early warning parameters in progressively enlarged P-wave time windows from 3 to 30 seconds.
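
    The two parameters can be computed from a displacement record in a few lines. The sketch below uses one common formulation, Pd as the peak displacement in the window and τc = 2π·sqrt(∫u²dt / ∫u̇²dt), applied to a synthetic trace; the window lengths mirror those discussed above.

        import numpy as np

        # Pd and tau_c from a displacement trace u(t), in one common formulation:
        # Pd is the peak displacement in the window and
        # tau_c = 2*pi*sqrt(integral(u^2) / integral(udot^2)). Synthetic trace.

        def pd_and_tau_c(u, dt, window_s):
            w = u[:int(window_s / dt)]
            wdot = np.gradient(w, dt)
            pd = np.abs(w).max()
            tau_c = 2.0 * np.pi * np.sqrt((w**2).sum() / (wdot**2).sum())
            return pd, tau_c

        dt = 0.01
        t = np.arange(0.0, 30.0, dt)
        u = 0.02 * np.sin(2 * np.pi * 0.5 * t) * (1 - np.exp(-t))  # displacement (m)
        for w_s in (3, 10, 30):    # progressively enlarged windows, as in the study
            print(w_s, pd_and_tau_c(u, dt, w_s))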

  16. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
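
    The BPT distribution is the inverse Gaussian, so its hazard function can be evaluated with standard libraries. In the sketch below we map the mean μ and aperiodicity α onto scipy's invgauss as shape=α² and scale=μ/α²; this mapping is our own derivation and worth re-checking before serious use.

        from scipy.stats import invgauss

        # The BPT distribution is the inverse Gaussian; mean mu_t and aperiodicity
        # alpha map onto scipy's invgauss as shape=alpha**2, scale=mu_t/alpha**2.
        # This mapping is our own derivation and worth re-checking before use.

        mu_t, alpha = 25.0, 0.5               # mean recurrence (yr), generic alpha
        bpt = invgauss(alpha**2, scale=mu_t / alpha**2)

        for t in (10.0, 12.5, 25.0, 50.0, 100.0):
            hazard = bpt.pdf(t) / bpt.sf(t)   # instantaneous failure rate of survivors
            print(f"t={t:5.1f} yr  hazard={hazard:.4f}/yr  mean rate={1 / mu_t:.4f}/yr")
        # For alpha = 0.5 the hazard levels off near 2/mu_t, as stated above.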

  17. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.

  18. Analysis of pre-earthquake ionospheric anomalies before the global M = 7.0+ earthquakes in 2010

    NASA Astrophysics Data System (ADS)

    Yao, Y. B.; Chen, P.; Zhang, S.; Chen, J. J.; Yan, F.; Peng, W. F.

    2012-03-01

    The pre-earthquake ionospheric anomalies that occurred before the global M = 7.0+ earthquakes in 2010 are investigated using the total electron content (TEC) from the global ionosphere map (GIM). We analyze the possible causes of the ionospheric anomalies based on the space environment and magnetic field status. Results show that some anomalies are related to the earthquakes. By analyzing the time of occurrence, duration, and spatial distribution of these ionospheric anomalies, a number of new conclusions are drawn, as follows: earthquake-related ionospheric anomalies are not bound to appear; both positive and negative anomalies are likely to occur; and the earthquake-related ionospheric anomalies discussed in the current study occurred 0-2 days before the associated earthquakes and in the afternoon-to-sunset period (i.e. between 12:00 and 20:00 local time). Pre-earthquake ionospheric anomalies occur mainly in areas near the epicenter. However, the maximum affected area in the ionosphere does not coincide with the vertical projection of the epicenter of the subsequent earthquake, and the directions of deviation from the epicenters do not follow a fixed rule. The corresponding ionospheric effects can also be observed in the magnetically conjugate region. However, the probability of appearance and the extent of the anomalies in the magnetically conjugate region are smaller than for the anomalies near the epicenter. Deep-focus earthquakes may also exhibit very significant pre-earthquake ionospheric anomalies.
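
    The abstract does not give its exact anomaly-detection recipe, so the sketch below uses a construction common in the pre-earthquake TEC literature: flag hours where TEC leaves a sliding median ± 1.5·IQR envelope built, hour by hour, from the preceding 15 days. All data below are synthetic.

        import numpy as np

        # The abstract does not give its detection recipe, so this sketch uses a
        # construction common in the pre-earthquake TEC literature: flag hours
        # where TEC leaves a sliding median +/- 1.5*IQR envelope built, hour by
        # hour, from the preceding 15 days.

        def tec_anomalies(tec, n_days=15, k=1.5):
            """tec: array of shape (days, 24). Returns a boolean flag per day/hour."""
            flags = np.zeros_like(tec, dtype=bool)
            for d in range(n_days, tec.shape[0]):
                q1, med, q3 = np.percentile(tec[d - n_days:d], [25, 50, 75], axis=0)
                iqr = q3 - q1
                flags[d] = (tec[d] > med + k * iqr) | (tec[d] < med - k * iqr)
            return flags

        rng = np.random.default_rng(3)
        hours = np.arange(24)
        tec = 20 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, (20, 24))
        tec[18, 15] += 6.0                     # injected afternoon anomaly, day 18
        print(np.argwhere(tec_anomalies(tec))) # includes [18 15], plus noise hits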

  19. Corrigendum: Earthquakes triggered by silent slip events on Kīlauea volcano, Hawaii

    USGS Publications Warehouse

    Segall, Paul; Desmarais, Emily K.; Shelly, David; Miklius, Asta; Cervelli, Peter

    2006-01-01

    There was a plotting error in Fig. 1, which inadvertently displays earthquakes for the incorrect time interval. The locations of earthquakes during the two-day-long slow-slip event of January 2005 are shown here in the corrected Fig. 1. Because the incorrect locations were also used in the Coulomb stress-change (CSC) calculation, the error could potentially have biased our interpretation of the depth of the slow-slip event, although in fact it did not. Because nearly all of the earthquakes, both background and triggered, are landward of the slow-slip event and at similar depths (6.5–8.5 km), the impact on the CSC calculations is negligible (Fig. 2; compare with Fig. 4 in the original paper). The error does not alter our conclusion that the triggered events during the January 2005 slow-slip event were located on a subhorizontal plane at a depth of 7.5 ± 1 km. This is therefore the most likely depth of the slow-slip events. We thank Cecily J. Wolfe for pointing out the error in the original Fig. 1.

  20. The 25 October 2010 Mentawai tsunami earthquake, from real-time discriminants, finite-fault rupture, and tsunami excitation

    USGS Publications Warehouse

    Newman, Andrew V.; Hayes, Gavin P.; Wei, Yong; Convers, Jaime

    2011-01-01

    The moment magnitude 7.8 earthquake that struck offshore the Mentawai islands in western Indonesia on 25 October 2010 created a locally large tsunami that caused more than 400 human casualties. We identify this earthquake as a rare slow-source tsunami earthquake based on: 1) disproportionately large tsunami waves; 2) an excessive rupture duration near 125 s; 3) predominantly shallow, near-trench slip determined through finite-fault modeling; and 4) deficiencies in the energy-to-moment and energy-to-duration-cubed ratios, the latter determined in near-real time. We detail the real-time solutions that identified the slow nature of this event, and evaluate how regional reductions in crustal rigidity along the shallow trench, as indicated by reduced rupture velocity, contributed to increased slip, causing the 5–9 m local tsunami runup and the transoceanic wave heights observed 1600 km to the southeast.
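
    The energy-to-moment discriminant mentioned above can be illustrated with a one-line computation, following Newman and Okal's definition Θ = log10(E/M0); the magnitudes below are illustrative only, not this event's measured values.

        import math

        # Energy-to-moment discriminant Theta = log10(E/M0) (Newman and Okal's
        # definition): typical events sit near -4.9 while slow tsunami earthquakes
        # fall below roughly -5.7. The numbers below are illustrative only.

        def theta(energy_j, moment_nm):
            return math.log10(energy_j / moment_nm)

        m0 = 5.0e20   # ~Mw 7.8 seismic moment (N m), illustrative
        for label, e_j in [("ordinary", m0 * 10**-4.9), ("slow/tsunami", m0 * 10**-6.0)]:
            print(label, round(theta(e_j, m0), 2))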

  1. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, waves are assumed in these methods to propagate across the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, 2-D space modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  2. SalanderMaps: A rapid overview about felt earthquakes through data mining of web-accesses

    NASA Astrophysics Data System (ADS)

    Kradolfer, Urs

    2013-04-01

    While seismological observatories detect and locate earthquakes based on measurements of the ground motion, they neither know a priori whether an earthquake has been felt by the public, nor where it has been felt. Such information is usually gathered by evaluating feedback reported by the public through on-line forms on the web. However, after a felt earthquake in Switzerland, many people visit the webpages of the Swiss Seismological Service (SED) at ETH Zurich, and each such visit leaves traces in the logfiles on our web-servers. Data mining techniques applied to these logfiles, together with mining of publicly available databases on the internet, open possibilities to obtain previously unknown information about our virtual visitors. In order to provide precise information to authorities and the media, it would be desirable to rapidly know from which locations these web-accesses originate. The method 'Salander' (Seismic Activity Linked to Area codes - Nimble Detection of Earthquake Rumbles) is introduced, and it is explained how the IP-addresses (each computer or router directly connected to the internet has a unique IP-address; an example would be 129.132.53.5) of a sufficient number of our virtual visitors were linked to their geographical areas. This allows us to know unprecedentedly quickly whether and where an earthquake was felt in Switzerland. It is also explained why the Salander method is superior to commercial so-called geolocation products. The corresponding products of the Salander method, animated SalanderMaps, which are routinely generated after each earthquake with a magnitude of M>2 in Switzerland (http://www.seismo.ethz.ch/prod/salandermaps/, available after March 2013), demonstrate how the wavefield of earthquakes propagates through Switzerland and where it was felt. Often, such information is available within less than 60 seconds after origin time, and we always get a clear picture within five minutes after origin time.

  3. The ``exceptional'' earthquake of 3 January 1117 in the Verona area (northern Italy): A critical time review and detection of two lost earthquakes (lower Germany and Tuscany)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Comastri, Alberto; Boschi, Enzo

    2005-12-01

    In the seismological literature the 3 January 1117 earthquake represents an interesting case study, both for the sheer size of the area in which that event is recorded by the monastic sources of the 12th century, and for the amount of damage mentioned. The 1117 event has been added to the earthquake catalogues of up to five European countries (Italy, France, Belgium, Switzerland, the Iberian peninsula), and it is the largest historical earthquake for northern Italy. We have analyzed the monastic time system in the 12th century and, by means of a comparative analysis of the sources, have correlated the two shocks mentioned (in the night and in the afternoon of 3 January) to territorial effects, seeking to make the overall picture reported for Europe more consistent. The connection between the linguistic indications and the localization of the effects has allowed us to shed light, with a reasonable degree of approximation, upon two previously little known earthquakes, probably generated by a sequence of events. A first earthquake in lower Germany (I0 (epicentral intensity) VII-VIII MCS (Mercalli, Cancani, Sieberg), M 6.4) preceded the far more violent one in northern Italy (Verona area) by about 12-13 hours. The second event is the one reported in the literature. We have put forward new parameters for this Veronese earthquake (I0 IX MCS, M 7.0). A third earthquake is independently recorded in the northwestern area of Tuscany (Imax VII-VIII MCS), but for the latter event the epicenter and magnitude cannot be evaluated.

  4. Tsunami hazard assessments with consideration of uncertain earthquakes characteristics

    NASA Astrophysics Data System (ADS)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, an uncertainty propagation method must be adopted to determine tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics, the slip distribution and the location. First, it considers the generation of consistent earthquake slip samples by means of a Karhunen-Loève (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike that reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from the SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
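
    The Gaussian core of such a K-L slip sampler is compact. The sketch below eigendecomposes an assumed exponential correlation kernel and draws one slip sample; the translation step that the authors use to impose non-Gaussian marginals (and, e.g., positivity) is deliberately omitted, and all lengths are illustrative.

        import numpy as np

        # Gaussian core of a K-L slip sampler: eigendecompose an assumed exponential
        # correlation kernel and draw slip = mean + sum sqrt(lam_i)*z_i*phi_i.
        # The translation step the authors use for non-Gaussian marginals (and
        # positivity) is deliberately omitted; all lengths are illustrative.

        n = 50                                   # subfaults along strike
        x = np.linspace(0.0, 200.0, n)           # km
        corr_len, sigma, mean_slip = 40.0, 2.0, 5.0
        C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

        lam, phi = np.linalg.eigh(C)             # eigenpairs of the covariance
        lam = np.clip(lam, 0.0, None)

        rng = np.random.default_rng(4)
        slip = mean_slip + phi @ (np.sqrt(lam) * rng.standard_normal(n))
        print(slip.round(2))                     # one correlated slip sample (m)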

  5. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    USGS Publications Warehouse

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to the observed rate. We interpret the observed b-value stability in terms of the evolution of the stress field in the area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yielded a substantial change in b-values.

  6. Real time numerical shake prediction incorporating attenuation structure: a case for the 2016 Kumamoto Earthquake

    NASA Astrophysics Data System (ADS)

    Ogiso, M.; Hoshiba, M.; Shito, A.; Matsumoto, S.

    2016-12-01

    Needless to say, heterogeneous attenuation structure is important for ground motion prediction, including earthquake early warning, that is, real-time ground motion prediction. Hoshiba and Ogiso (2015, AGU Fall Meeting) showed that heterogeneous attenuation and scattering structure leads to earlier and more accurate ground motion prediction in the numerical shake prediction scheme proposed by Hoshiba and Aoki (2015, BSSA). Hoshiba and Ogiso (2015) used an assumed heterogeneous structure; here we discuss its effect in the case of the 2016 Kumamoto earthquake, using a heterogeneous structure estimated from actual observation data. We conducted Multiple Lapse Time Window Analysis (Hoshiba, 1993, JGR) for seismic stations located in western Japan to estimate the heterogeneous attenuation and scattering structure. The characteristics are similar to those in the previous work of Carcole and Sato (2010, GJI), e.g. strong intrinsic and scattering attenuation around the volcanoes in central Kyushu and relatively weak heterogeneities elsewhere. A real-time ground motion prediction simulation for the 2016 Kumamoto earthquake was conducted using the numerical shake prediction scheme with 474 strong ground motion stations. Comparing snapshots of the predicted and observed wavefields shows a tendency for underprediction around the volcanic area in spite of the heterogeneous structure. These facts indicate the necessity of improving the heterogeneous structure for the numerical shake prediction scheme. In this study, we used the waveforms of Hi-net, K-NET, and KiK-net stations operated by NIED for estimating structure and conducting the ground motion prediction simulation. Part of this study was supported by the Earthquake Research Institute, the University of Tokyo cooperative research program and JSPS KAKENHI Grant Number 25282114.

  7. Limiting the Effects of Earthquake Shaking on Gravitational-Wave Interferometers

    NASA Astrophysics Data System (ADS)

    Perry, M. R.; Earle, P. S.; Guy, M. R.; Harms, J.; Coughlin, M.; Biscans, S.; Buchanan, C.; Coughlin, E.; Fee, J.; Mukund, N.

    2016-12-01

    Second-generation ground-based gravitational wave interferometers such as the Laser Interferometer Gravitational-wave Observatory (LIGO) are susceptible to high-amplitude waves from teleseismic events, which can cause the detectors to fall out of mechanical lock (lockloss). This renders the data useless for gravitational wave detection around the time of the seismic arrivals and for several hours thereafter, while the detector stabilizes enough to return to the locked state. The downtime can be reduced if advance warning of impending shaking is received and its impact is suppressed in the isolation system, with the goal of maintaining lock even at the expense of increased instrumental noise. Here we describe an early warning system for modern gravitational-wave observatories. The system relies on near real-time earthquake alerts provided by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA). Hypocenter and magnitude information is typically available within 5 to 20 minutes of the origin time of significant earthquakes, generally before the arrival of high-amplitude waves from these teleseisms at LIGO. These alerts are used to estimate arrival times and ground velocities at the gravitational wave detectors. In general, 94% of the predictions for ground-motion amplitude are within a factor of 5 of measured values. The error in both arrival time and ground-motion prediction introduced by using preliminary, rather than final, hypocenter and magnitude information is minimal, with about 90% of the events falling within a factor of 2 of the final predicted value. Using a machine learning algorithm, we develop a lockloss prediction model that calculates the probability that a given earthquake will prevent a detector from taking data. Our initial results indicate that, by using detector control configuration changes, we could prevent lockloss for 40-100 earthquake events in a 6-month time period.
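
    The arrival time estimate such a system needs can be obtained from a standard 1-D earth model. The sketch below uses ObsPy's TauP interface; the hypocenter depth and distance are made up for illustration.

        from obspy.taup import TauPyModel

        # P arrival estimate from a standard 1-D earth model via ObsPy's TauP
        # interface; depth and distance are made up for illustration.

        model = TauPyModel(model="iasp91")
        arrivals = model.get_travel_times(source_depth_in_km=25.0,
                                          distance_in_degree=75.0,
                                          phase_list=["P"])
        for a in arrivals:
            print(a.name, round(a.time, 1), "s after origin time")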

  8. Tweeting Earthquakes using TensorFlow

    NASA Astrophysics Data System (ADS)

    Casarotti, E.; Comunello, F.; Magnoni, F.

    2016-12-01

    The use of social media is emerging as a powerful tool for disseminating trusted information about earthquakes. Since 2009, the Twitter account @INGVterremoti has provided constant and timely details about M2+ seismic events detected by the Italian National Seismic Network, directly connected with the seismologists on duty at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). It currently updates more than 150,000 followers. Nevertheless, since it provides only manually revised seismic parameters, its timing (approximately between 10 and 20 minutes after an event) has come under evaluation. Undeniably, mobile internet, social network sites, and Twitter in particular require a more rapid, "real-time" reaction. During the last 36 months, INGV has tested the tweeting of automatic detections of M3+ earthquakes, studying the reliability of the information both in terms of seismological accuracy and from the point of view of communication and social research. A set of quality parameters (i.e. number of seismic stations, gap, relative error of the location) has been identified to reduce false alarms and the uncertainty of the automatic detections. We present an experiment to further improve the reliability of this process using TensorFlow™ (an open source software library originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization).
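
    A toy version of the quality-parameter filtering idea, using TensorFlow since that is the library named above: learn from station count, gap, and relative location error whether an automatic detection is trustworthy enough to tweet. Data, labels, and architecture are invented for illustration; this is not INGV's model.

        import numpy as np
        import tensorflow as tf

        # Toy quality filter: learn from station count, gap and relative location
        # error whether an automatic detection is trustworthy enough to tweet.
        # Data, labels and architecture are invented; this is not INGV's model.

        rng = np.random.default_rng(5)
        n = 2000
        X = np.column_stack([rng.integers(4, 60, n),       # number of stations
                             rng.uniform(30, 350, n),      # gap (degrees)
                             rng.uniform(0.1, 10.0, n)])   # relative error (km)
        y = ((X[:, 0] > 15) & (X[:, 1] < 200) & (X[:, 2] < 3)).astype(np.float32)

        norm = tf.keras.layers.Normalization()
        norm.adapt(X)
        model = tf.keras.Sequential([
            norm,
            tf.keras.layers.Dense(8, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.fit(X, y, epochs=5, verbose=0)
        print(model.predict(np.array([[40.0, 90.0, 0.5]]), verbose=0))  # ~reliable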

  9. Incorporating Real-time Earthquake Information into Large Enrollment Natural Disaster Course Learning

    NASA Astrophysics Data System (ADS)

    Furlong, K. P.; Benz, H.; Hayes, G. P.; Villasenor, A.

    2010-12-01

    Although most would agree that the occurrence of natural disaster events such as earthquakes, volcanic eruptions, and floods can provide effective learning opportunities for natural hazards-based courses, implementing compelling materials in the large-enrollment classroom environment can be difficult. These natural hazard events derive much of their learning potential from their real-time nature, and in the modern 24/7 news cycle, where all but the most devastating events quickly drop out of the public eye, the shelf life of an event is quite limited. Maximizing the learning potential of these events requires both that authoritative information be available and that course materials be generated as the event unfolds. Many events, such as hurricanes, flooding, and volcanic eruptions, provide some precursory warning, so background materials that place the main event into context can be prepared in advance; earthquakes, by contrast, present the particularly confounding situation of providing no warning at all, even though context is critical to student learning. Attempting to implement real-time materials in large-enrollment classes faces the additional hindrance of limited internet access (for students) in most lecture classrooms. In Earth 101 Natural Disasters: Hollywood vs Reality, taught as a large-enrollment (150+ students) general education course at Penn State, we are collaborating with the USGS's National Earthquake Information Center (NEIC) to develop efficient means of incorporating their real-time products into learning activities in the lecture hall environment. Over time (and numerous events) we have developed a template for presenting USGS-produced real-time information in lecture mode. The event-specific materials can be quickly incorporated and updated, along with key contextual materials, to provide students with up-to-the-minute information. In addition, we have also developed in-class activities, such as student determination of population exposure to severe ground

  10. What Can Sounds Tell Us About Earthquake Interactions?

    NASA Astrophysics Data System (ADS)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influence earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are central to understanding the underlying physics of earthquakes and other seismic phenomena such as tremors, in addition to their source characteristics (e.g. frequency content, amplitudes). Earthquake interactions include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks), or even a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's-eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw 5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to an understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these
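
    Audification by time compression reduces to a resampling trick: writing a waveform out at a higher sample rate than it was recorded at plays it back proportionally faster, shifting seismic frequencies into the audible band. A minimal sketch with a synthetic trace (all parameters are our own choices):

        import numpy as np
        from scipy.io import wavfile

        # Audification by time compression: data sampled at 100 Hz written out at
        # 8820 Hz plays back 88.2x faster, moving seismic frequencies into the
        # audible band. The trace here is synthetic; substitute real data as needed.

        data_rate = 100                            # seismogram sampling rate (Hz)
        speedup = 88.2
        t = np.arange(0, 3600, 1.0 / data_rate)    # one hour of data
        trace = np.sin(2 * np.pi * 0.8 * t) * np.exp(-((t - 1800.0) / 300.0) ** 2)

        audio = np.int16(trace / np.abs(trace).max() * 32767)
        wavfile.write("quake.wav", int(data_rate * speedup), audio)  # ~41 s of audio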

  11. It's "Your" Fault!: An Investigation into Earthquakes, Plate Tectonics, and Geologic Time

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2011-01-01

    Earthquakes "have" been in the news of late--from the disastrous 2010 Haitian temblor that killed more than 300,000 people to the March 2011 earthquake and devastating tsunami in Honshu, Japan, to the unexpected August 2011 earthquake in Mineral, Virginia, felt from Alabama to Maine and as far west as Illinois. As expected, these events…

  12. Linking giant earthquakes with the subduction of oceanic fracture zones

    NASA Astrophysics Data System (ADS)

    Landgrebe, T. C.; Müller, R. D.; EarthByte Group

    2011-12-01

    Giant subduction earthquakes are known to occur in areas not previously identified as prone to high seismic risk. This highlights the need to better identify subduction zone segments potentially dominated by relatively long (up to 1000 years or more) recurrence times of giant earthquakes. Global digital data sets represent a promising source of information for a multi-dimensional earthquake hazard analysis. We combine the NGDC global Significant Earthquakes database with a global strain rate map, gridded ages of the ocean floor, and a recently produced digital data set for oceanic fracture zones, major aseismic ridges and volcanic chains to investigate the association of earthquakes, as a function of magnitude, with the age of the downgoing slab and convergence rates. We use a so-called Top-N recommendation method, a technology originally developed to search, sort, classify, and filter very large and often statistically skewed data sets on the internet, to analyse the association of subduction earthquakes, sorted by magnitude, with key parameters. The Top-N analysis is used to progressively assess how strongly particular "tectonic niche" locations (e.g. locations along subduction zones intersected by aseismic ridges or volcanic chains) are associated with sets of earthquakes in sorted order in a given magnitude range. As the total number N of sorted earthquakes is increased, by progressively including smaller-magnitude events, the so-called recall is computed, defined as the number of Top-N earthquakes associated with particular target areas divided by N. The resulting statistical measure represents an intuitive description of the effectiveness of a given set of parameters in accounting for the locations of significant earthquakes on record. We use this method to show that the occurrence of great (magnitude ≥ 8) earthquakes on overriding plate segments is strongly biased towards intersections of oceanic fracture zones with subduction zones. These intersection regions are
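
    Our reading of the Top-N recall measure, in code: sort events by magnitude, take the largest N, and report the fraction falling in the target "tectonic niche". The catalogue below is synthetic and biased by construction so that the largest events favour fracture-zone intersections; it is not the NGDC database.

        import numpy as np

        # Our reading of the Top-N recall measure: sort events by magnitude, take
        # the largest N, report the fraction in the target "tectonic niche". The
        # catalogue is synthetic and biased so the largest events favour the niche.

        def top_n_recall(mags, in_niche, n):
            order = np.argsort(mags)[::-1]       # largest magnitudes first
            return in_niche[order[:n]].mean()

        rng = np.random.default_rng(6)
        mags = rng.uniform(6.0, 9.5, 500)
        in_niche = rng.random(500) < np.where(mags >= 8.0, 0.7, 0.2)

        for n in (10, 50, 200, 500):
            print(f"N={n:3d}  recall={top_n_recall(mags, in_niche, n):.2f}")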

  13. The 25 October 2010 Mentawai tsunami earthquake, from real-time discriminants, finite-fault rupture, and tsunami excitation

    USGS Publications Warehouse

    Newman, A.V.; Hayes, G.; Wei, Y.; Convers, J.

    2011-01-01

    The moment magnitude 7.8 earthquake that struck offshore the Mentawai islands in western Indonesia on 25 October 2010 created a locally large tsunami that caused more than 400 human casualties. We identify this earthquake as a rare slow-source tsunami earthquake based on: 1) disproportionately large tsunami waves; 2) an excessive rupture duration near 125 s; 3) predominantly shallow, near-trench slip determined through finite-fault modeling; and 4) deficiencies in the energy-to-moment and energy-to-duration-cubed ratios, the latter determined in near-real time. We detail the real-time solutions that identified the slow nature of this event, and evaluate how regional reductions in crustal rigidity along the shallow trench, as indicated by reduced rupture velocity, contributed to increased slip, causing the 5-9 m local tsunami runup and the transoceanic wave heights observed 1600 km to the southeast.

  14. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes.

  15. An open repository of earthquake-triggered ground-failure inventories

    USGS Publications Warehouse

    Schmitt, Robert G.; Tanyas, Hakan; Nowicki Jessee, M. Anna; Zhu, Jing; Biegel, Katherine M.; Allstadt, Kate E.; Jibson, Randall W.; Thompson, Eric M.; van Westen, Cees J.; Sato, Hiroshi P.; Wald, David J.; Godt, Jonathan W.; Gorum, Tolga; Xu, Chong; Rathje, Ellen M.; Knudsen, Keith L.

    2017-12-20

    Earthquake-triggered ground failure, such as landsliding and liquefaction, can contribute significantly to losses, but our current ability to accurately include it in earthquake-hazard analyses is limited. The development of robust and widely applicable models requires access to numerous inventories of ground failures triggered by earthquakes, spanning a broad range of terrains, shaking characteristics, and climates. We present an openly accessible, centralized earthquake-triggered ground-failure inventory repository in the form of a ScienceBase Community, providing open access to these data with the goal of accelerating research progress. The ScienceBase Community hosts digital inventories created by both U.S. Geological Survey (USGS) and non-USGS authors. We present the original digital inventory files (when available) as well as an integrated database with uniform attributes. We also summarize the mapping methodology and level of completeness as reported by the original author(s) for each inventory. This document describes the steps taken to collect, process, and compile the inventories, and the process for adding further ground-failure inventories to the ScienceBase Community in the future.

  16. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2002

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Moran, Seth C.; Sánchez, John; Estes, Steve; McNutt, Stephen R.; Paskievitch, John

    2003-01-01

    The AVO seismic network was used to monitor twenty-four volcanoes in real time in 2002: Mount Wrangell, Mount Spurr, Redoubt Volcano, Iliamna Volcano, Augustine Volcano, the Katmai Volcanic Group (Snowy Mountain, Mount Griggs, Mount Katmai, Novarupta, Trident Volcano, Mount Mageik, Mount Martin), Aniakchak Crater, Mount Veniaminof, Pavlof Volcano, Mount Dutton, Isanotski Peaks, Shishaldin Volcano, Fisher Caldera, Westdahl Peak, Akutan Peak, Makushin Volcano, Great Sitkin Volcano, and Kanaga Volcano (Figure 1). Monitoring highlights in 2002 include an earthquake swarm at Great Sitkin Volcano in May-June; an earthquake swarm near Snowy Mountain in July-September; low frequency (1-3 Hz) tremor and long-period events at Mount Veniaminof in September-October and in December; and continuing volcanogenic seismic swarms at Shishaldin Volcano throughout the year. Instrumentation and data acquisition highlights in 2002 were the installation of a subnetwork on Okmok Volcano, the establishment of telemetry for the Mount Veniaminof subnetwork, and the change in the data acquisition system to an EARTHWORM detection system. AVO located 7430 earthquakes during 2002 in the vicinity of the monitored volcanoes. This catalog includes: (1) a description of instruments deployed in the field and their locations; (2) a description of earthquake detection, recording, analysis, and data archival systems; (3) a description of velocity models used for earthquake locations; (4) a summary of earthquakes located in 2002; and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, and location quality statistics; daily station usage statistics; and all HYPOELLIPSE files used to determine the earthquake locations in 2002.

  17. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
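
    The contingency-table construction behind such ROC diagrams is compact enough to show. The sketch below is not the authors' PI/RI code; it simply sweeps an alarm threshold over a gridded forecast score map, tallies hits, misses, false alarms, and correct negatives, and runs on synthetic data.

      import numpy as np

      def roc_curve(score_map, observed, n_thresholds=100):
          """Sweep a threshold over a forecast score map (one score per cell)
          and return false-alarm rates and hit rates for a binary-event ROC."""
          hits, falses = [], []
          for thr in np.linspace(score_map.min(), score_map.max(), n_thresholds):
              alarm = score_map >= thr
              a = np.sum(alarm & observed)      # hits
              b = np.sum(alarm & ~observed)     # false alarms
              c = np.sum(~alarm & observed)     # misses
              d = np.sum(~alarm & ~observed)    # correct negatives
              hits.append(a / (a + c) if a + c else 0.0)
              falses.append(b / (b + d) if b + d else 0.0)
          return np.array(falses), np.array(hits)

      # Toy example: random scores on a 50x50 grid, 10 "future event" cells.
      rng = np.random.default_rng(0)
      scores = rng.random((50, 50))
      events = np.zeros((50, 50), dtype=bool)
      events[rng.integers(0, 50, 10), rng.integers(0, 50, 10)] = True
      far, hr = roc_curve(scores, events)
      print("area under ROC ~", np.trapz(hr[np.argsort(far)], np.sort(far)))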

  18. New streams and springs after the 2014 Mw6.0 South Napa earthquake

    PubMed Central

    Wang, Chi-Yuen; Manga, Michael

    2015-01-01

    Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ∼10⁶ m³, about 1/40 of the annual water use in the Napa–Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region. PMID:26158898

  19. New streams and springs after the 2014 Mw6.0 South Napa earthquake.

    PubMed

    Wang, Chi-Yuen; Manga, Michael

    2015-07-09

    Many streams and springs, which were dry or nearly dry before the 2014 Mw6.0 South Napa earthquake, started to flow after the earthquake. A United States Geological Survey stream gauge also registered a coseismic increase in discharge. Public interest was heightened by a state of extreme drought in California. Since the new flows were not contaminated by pre-existing surface water, their composition allowed unambiguous identification of their origin. Following the earthquake we repeatedly surveyed the new flows, collecting data to test hypotheses about their origin. We show that the new flows originated from groundwater in nearby mountains released by the earthquake. The estimated total amount of new water is ∼10⁶ m³, about 1/40 of the annual water use in the Napa-Sonoma area. Our model also makes a testable prediction of a post-seismic decrease of seismic velocity in the shallow crust of the affected region.

  20. Characterization of the Virginia earthquake effects and source parameters from website traffic analysis

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Lefebvre, S.; Mazet-Roux, G.; Roussel, F.

    2012-12-01

    This paper presents an after-the-fact study of the Virginia earthquake of 2011 August 23 using only the traffic observed on the EMSC website within minutes of its occurrence. Although the EMSC real-time information services remain poorly identified in the US, a traffic surge was observed immediately after the earthquake's occurrence. Such surges, known as flash crowds and commonly observed on our website after felt events within the Euro-Med region, are caused by eyewitnesses looking for information about the shaking they have just felt. EMSC developed an approach named flashsourcing to map the felt area and, in some circumstances, the regions affected by severe damage or network disruption. The felt area is mapped simply by locating the Internet Protocol (IP) addresses of the visitors to the website during these surges, while the existence of network disruption is detected by the instantaneous loss, at the time of the earthquake's occurrence, of existing Internet sessions originating from the impacted area. For the Virginia earthquake, which was felt at large distances, the effects of wave propagation are clearly observed. We show that the visits to our website are triggered by the P-wave arrival: the first visitors from a given locality reach our website 90 s after their location was shaken by the P waves. From a processing point of view, eyewitnesses can then be considered as ground motion detectors. By doing so, the epicentral location is determined through a simple dedicated location algorithm within 2 min of the earthquake's occurrence and with 30 km accuracy. The magnitude can be estimated in a similar time frame by using existing empirical relationships between the surface area of the felt region and the magnitude. Concerning the effects of the earthquake, we check whether one can discriminate localities affected by strong shaking from web traffic analysis. This is actually the case. Localities affected by strong levels of shaking exhibit a higher ratio of visitors to the number
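
    As a toy illustration of the "eyewitnesses as ground motion detectors" idea, the sketch below grid-searches for the epicenter that best explains the first website-visit time from each locality, assuming each first visit lags the local P arrival by a fixed delay. The P velocity, the 90 s lag, and the synthetic geometry are illustrative assumptions, not the EMSC algorithm.

      import numpy as np

      VP, LAG = 6.2, 90.0                  # assumed P speed (km/s), visit lag (s)

      def first_visit_time(epi, t0, xy):
          return t0 + np.hypot(*(xy - epi).T) / VP + LAG

      def locate(visit_xy, visit_t, grid):
          best, best_cost = None, np.inf
          for epi in grid:                 # candidate epicenters (km coordinates)
              tt = np.hypot(*(visit_xy - epi).T) / VP
              t0 = np.median(visit_t - tt - LAG)   # best-fitting origin time
              cost = np.sum((visit_t - (t0 + tt + LAG)) ** 2)
              if cost < best_cost:
                  best, best_cost = (epi, t0), cost
          return best

      # Synthetic test: true epicenter at (40, -25) km, origin time 0.
      rng = np.random.default_rng(1)
      xy = rng.uniform(-150, 150, (200, 2))
      t = first_visit_time(np.array([40.0, -25.0]), 0.0, xy) + rng.normal(0, 5, 200)
      grid = [np.array([x, y]) for x in range(-150, 151, 10)
              for y in range(-150, 151, 10)]
      print(locate(xy, t, grid)[0])        # ~ (40, -25)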

  1. MyShake - A smartphone app to detect earthquake

    NASA Astrophysics Data System (ADS)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

    We designed an Android app that harnesses the accelerometers in personal smartphones to record earthquake-shaking data for research, hazard information and warnings. The app can distinguish earthquake shaking from daily human activities based on the different patterns behind the movements. It can also be triggered by the traditional earthquake early warning (EEW) system to record for a certain amount of time to collect earthquake data. When the app is triggered by earthquake-like movements, it sends the trigger information, containing the time and location of the trigger, back to our server; at the same time, it stores the waveform data on the local phone first and uploads it to our server later. Trigger information from multiple phones is processed in real time on the server to find the coherent signal that confirms an earthquake. The app therefore provides the basis for a smartphone seismic network that can detect earthquakes and even provide warnings. A planned public roll-out of MyShake could collect millions of seismic recordings for large earthquakes in many regions around the world.
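
    MyShake's published approach uses a trained classifier to separate shaking from human activity; as a stand-in, the sketch below shows the simplest possible shaking trigger, a short-term/long-term average (STA/LTA) ratio on squared acceleration. The window lengths, sampling rate, and threshold are illustrative assumptions.

      import numpy as np

      def sta_lta_trigger(accel, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
          """Compare short-term to long-term average of squared acceleration;
          a crude stand-in for MyShake's earthquake/human-activity classifier."""
          e = accel ** 2
          ns, nl = int(sta_win * fs), int(lta_win * fs)
          sta = np.convolve(e, np.ones(ns) / ns, "same")
          lta = np.convolve(e, np.ones(nl) / nl, "same")
          return bool(np.any(sta / np.maximum(lta, 1e-12) > threshold))

      fs = 50.0                            # assumed phone sampling rate (Hz)
      t = np.arange(0, 60, 1 / fs)
      rec = 0.01 * np.random.randn(t.size)
      rec[1500:1750] += 0.2 * np.sin(2 * np.pi * 3 * t[1500:1750])  # injected "quake"
      print(sta_lta_trigger(rec, fs))      # True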

  2. Natural Time, Nowcasting and the Physics of Earthquakes: Estimation of Seismic Risk to Global Megacities

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Luginbuhl, Molly; Giguere, Alexis; Turcotte, Donald L.

    2018-02-01

    Natural Time ("NT") refers to the concept of using small earthquake counts, for example of M > 3 events, to mark the intervals between large earthquakes, for example M > 6 events. The term was first used by Varotsos et al. (2005) and later by Holliday et al. (2006) in their studies of earthquakes. In this paper, we discuss ideas and applications arising from the use of NT to understand earthquake dynamics, in particular by use of the idea of nowcasting. Nowcasting differs from forecasting, in that the goal of nowcasting is to estimate the current state of the system, rather than the probability of a future event. Rather than focusing on individual earthquake faults, we focus on a defined local geographic region surrounding a particular location. This local region is considered to be embedded in a larger regional setting from which we accumulate the relevant statistics. We apply the nowcasting idea to the practical development of methods to estimate the current state of risk for dozens of the world's seismically exposed megacities, defined as cities having populations of over 1 million persons. We compute a ranking of these cities based on their current nowcast value, and discuss the advantages and limitations of this approach. We note explicitly that the nowcast method is not a model, in that there are no free parameters to be fit to data. Rather, the method is simply a presentation of statistical data, which the user can interpret. Among other results, we find, for example, that the current nowcast ranking of the Los Angeles region is comparable to its ranking just prior to the January 17, 1994 Northridge earthquake.
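
    The nowcast calculation itself reduces to a percentile lookup. The sketch below, a loose reading of the method rather than the authors' code, scores the current count of small events against the historical distribution of small-event counts between successive large events; the history values are invented for illustration.

      import numpy as np

      def nowcast_eps(small_counts_between_large, current_count):
          """Earthquake potential score: percentile of the current small-event
          count within the historical distribution of counts between large
          events (the natural-time nowcasting idea in miniature)."""
          past = np.sort(np.asarray(small_counts_between_large))
          return float(np.searchsorted(past, current_count, side="right")) / past.size

      # Toy regional history: numbers of M>3 events observed between
      # successive M>6 events (illustrative values only).
      history = [120, 310, 95, 480, 260, 150, 390, 210, 330, 175]
      print(nowcast_eps(history, 300))   # 0.6: 60% of past cycles ended sooner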

  3. Measurement of neutron and charged particle fluxes toward earthquake prediction

    NASA Astrophysics Data System (ADS)

    Maksudov, Asatulla U.; Zufarov, Mars A.

    2017-12-01

    In this paper, we describe a possible method for predicting earthquakes, which is based on the simultaneous recording of the intensity of fluxes of neutrons and charged particles by detectors commonly used in nuclear physics. These low-energy particles originate from radioactive nuclear processes in the Earth's crust. Variations in the particle flux intensity can be a precursor of an earthquake. A description is given of an electronic installation that records the fluxes of charged particles in the radial direction, which are a possible response to the accumulated tectonic stresses in the Earth's crust. The results obtained showed an increase in the intensity of the fluxes beginning 10 or more hours before the occurrence of an earthquake. The previous version of the installation was able to indicate the possibility of an earthquake (Maksudov et al. in Instrum Exp Tech 58:130-131, 2015), but did not give information about the direction to the epicenter. For this reason, the installation was modified by adding eight directional detectors. With the upgraded setup, we have received both predictive signals and signals determining the direction to the location of the forthcoming earthquake, starting 2-3 days before its origin.

  4. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of the Green's function into tele-seismic waveform inversion, and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion arises from the modeling error originating from an uncertain fault-model setting. The Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating from the uncertainty of the fault model. We propose a new method accounting for complexity in the fault geometry by additionally solving for the focal mechanism on each space knot. Since a solution of finite source inversion becomes unstable with increasing flexibility of the model, we estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the
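
    The core of any such kinematic inversion is a regularized linear system d = Gm. The sketch below builds a toy version with a random Green's function matrix and first-difference smoothing; it deliberately omits the empirical Bayesian treatment of Green's function uncertainty and the per-knot focal mechanisms that are the actual contributions discussed above.

      import numpy as np

      # Toy finite-fault inversion d = G m with smoothing. G is random here;
      # in practice it comes from a wave-propagation code.
      rng = np.random.default_rng(2)
      n_data, n_knots = 120, 30
      G = rng.normal(size=(n_data, n_knots))
      m_true = np.maximum(0, np.sin(np.linspace(0, np.pi, n_knots)))  # slip profile
      d = G @ m_true + rng.normal(0, 0.05, n_data)

      # First-difference smoothing operator, weighted by a damping parameter.
      L = np.diff(np.eye(n_knots), axis=0)
      alpha = 1.0
      A = np.vstack([G, alpha * L])
      b = np.concatenate([d, np.zeros(L.shape[0])])
      m_est = np.linalg.lstsq(A, b, rcond=None)[0]
      print(np.abs(m_est - m_true).max())  # small misfit on this toy problem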

  5. Historical and recent large megathrust earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Madariaga, R.

    2018-05-01

    Recent earthquakes in Chile (the 2014 Mw 8.2 Iquique, 2015 Mw 8.3 Illapel, and 2016 Mw 7.6 Chiloé events) have highlighted problems with the straightforward application of ideas about seismic gaps, earthquake periodicity and the general forecast of large megathrust earthquakes. In northern Chile, before the 2014 Iquique earthquake, 4 large earthquakes were reported in written chronicles (1877, 1786, 1615 and 1543); in North-Central Chile, before the 2015 Illapel event, 3 large earthquakes were reported (1943, 1880, 1730); and the 2016 Chiloé earthquake occurred in the southern zone of the 1960 Valdivia megathrust rupture, where other large earthquakes occurred in 1575, 1737 and 1837. The periodicity of these events has been proposed as a good basis for long-term forecasting. However, the seismological aspects of historical Chilean earthquakes were inferred mainly from old chronicles written before subduction in Chile was discovered. Here we use the original descriptions of earthquakes to re-analyze the historical archives. Our interpretation shows that a priori ideas, like seismic gaps and characteristic earthquakes, influenced the estimation of magnitude, location and rupture area of the older Chilean events. On the other hand, advances in the characterization of the rheological aspects that control the contact between the Nazca and South American plates, and the study of tsunami effects, provide better estimates of the location of historical earthquakes along the seismogenic plate interface. Our re-interpretation of historical earthquakes shows a large diversity of earthquake types; there is a major difference between giant earthquakes that break the entire plate interface and those of Mw 8.0 that only break a portion of it.

  6. Combining Real-Time Seismic and GPS Data for Earthquake Early Warning (Invited)

    NASA Astrophysics Data System (ADS)

    Boese, M.; Heaton, T. H.; Hudnut, K. W.

    2013-12-01

    Scientists at Caltech, UC Berkeley, the Univ. of SoCal, the Univ. of Washington, the US Geological Survey, and ETH Zurich have developed an earthquake early warning (EEW) demonstration system for California and the Pacific Northwest. To quickly determine the earthquake magnitude and location, 'ShakeAlert' currently processes and interprets real-time data streams from ~400 seismic broadband and strong-motion stations within the California Integrated Seismic Network (CISN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently shared with around 160 individuals, companies, and emergency response organizations to educate potential users about EEW and to identify needs and applications of EEW in a future operational warning system. Recently, scientists at the contributing institutions have started to develop algorithms for ShakeAlert that make use of high-rate real-time GPS data to improve the magnitude estimates for large earthquakes (M>6.5) and to determine slip distributions. Knowing the fault slip in (near) real time is crucial for users relying on or operating distributed systems, such as for power, water or transportation, especially if these networks run close to or across large faults. As shown in an earlier study, slip information is also useful to predict (in a probabilistic sense) how far a fault rupture will propagate, thus enabling more robust probabilistic ground-motion predictions at distant locations. Finally, fault slip information is needed for tsunami warning, such as in the Cascadia subduction zone. To handle extended fault ruptures of large earthquakes in real time, Caltech and USGS Pasadena are currently developing and testing a two-step procedure that combines seismic and geodetic data; in the first step, high-frequency strong-motion amplitudes are used to rapidly classify near- and far-source stations. Then, the location and
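
    One way high-rate GPS enters such systems is through peak-ground-displacement (PGD) magnitude scaling. The sketch below inverts a relation of the form used in the literature (e.g., Crowell et al., 2013), log10(PGD) = A + B·Mw + C·Mw·log10(R), for magnitude; the coefficients are illustrative placeholders, not the operational ShakeAlert values.

      import math

      # Placeholder coefficients of the published functional form; the
      # operational values differ and are fit to real GPS data.
      A, B, C = -5.0, 1.22, -0.18

      def mw_from_pgd(pgd_cm: float, hypo_dist_km: float) -> float:
          """Invert log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw."""
          return (math.log10(pgd_cm) - A) / (B + C * math.log10(hypo_dist_km))

      # A station 100 km away seeing 30 cm of permanent displacement:
      print(round(mw_from_pgd(30.0, 100.0), 2))   # ~7.5 with these coefficients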

  7. Monitoring the ionosphere during the earthquake on GPS data

    NASA Astrophysics Data System (ADS)

    Smirnov, V. M.; Smirnova, E. V.

    The problem of estimating the stability of the physical state of the atmosphere attracts the rapt attention of the world community, but it is still far from being solved. A lot of global atmospheric processes which have a direct influence upon all forms of life on Earth have been detected. The comprehension of the cause-effect relations stipulating their origin and development is possible only on the basis of long-term sequences of observational data on the time-space variations of atmospheric characteristics, which should be received on a global scale and in an interval of altitudes as broad as possible. Such data can be obtained only with the application of satellite systems. The latest research has shown that satellite systems can be successfully used for global and continuous monitoring of the Earth's ionosphere. In turn, the ionosphere can serve as a reliable indicator of different kinds of effects on the environment, both of natural and anthropogenic origin. Nowadays the problem of the short-term forecast of earthquakes has achieved a new level of understanding. Indisputable factors have been revealed which show that the ionospheric anomalies observed during the preparation of seismic events contain information allowing us to detect and interpret them as earthquake precursors. A partial solution of the earthquake forecast problem based on ionospheric variations requires processing data received simultaneously from extensive territories. Such requirements can be met only on the basis of a ground-space system of ionosphere monitoring. The navigating systems

  8. Rupture complexity of the Mw 8.3 sea of okhotsk earthquake: Rapid triggering of complementary earthquakes?

    USGS Publications Warehouse

    Wei, Shengji; Helmberger, Don; Zhan, Zhongwen; Graves, Robert

    2013-01-01

    We derive a finite slip model for the 2013 Mw 8.3 Sea of Okhotsk Earthquake (Z = 610 km) by inverting calibrated teleseismic P waveforms. The inversion shows that the earthquake ruptured on a 10° dipping rectangular fault zone (140 km × 50 km) and evolved into a sequence of four large sub-events (E1–E4) with an average rupture speed of 4.0 km/s. The rupture process can be divided into two main stages. The first propagated south, rupturing sub-events E1, E2, and E4. The second stage (E3) originated near E2 with a delay of 12 s and ruptured northward, filling the slip gap between E1 and E2. This kinematic process produces an overall slip pattern similar to that observed in shallow swarms, except it occurs over a compressed time span of about 30 s and without many aftershocks, suggesting that sub-event triggering for deep events is significantly more efficient than for shallow events.

  9. Feasibility of Twitter Based Earthquake Characterization From Analysis of 32 Million Tweets: There's Got to be a Pony in Here Somewhere!

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M. R.; Smoczyk, G. M.; Horvath, S. R.; Jessica, T. S.; Bausch, D. B.

    2014-12-01

    The U.S. Geological Survey (USGS) operates a real-time system that detects earthquakes using only data from Twitter—a service for sending and reading public text-based messages of up to 140 characters. The detector algorithm scans for significant increases in tweets containing the word "earthquake" in several languages and sends internal alerts with the detection time, representative tweet texts, and the location of the population center where most of the tweets originated. It has been running in real-time for over two years and finds, on average, two or three felt events per day, with a false detection rate of 9%. The main benefit of the tweet-based detections is speed, with most detections occurring between 20 and 120 seconds after the earthquake origin time. This is considerably faster than seismic detections in poorly instrumented regions of the world. The detections have reasonable coverage of populated areas globally. The number of Twitter-based detections is small compared to the number of earthquakes detected seismically, and only a rough location and qualitative assessment of shaking can be determined based on Tweet data alone. However, the Twitter-based detections are generally caused by widely felt events in populated urban areas that are of more immediate interest than those with no human impact. We will present a technical overview of the system and investigate the potential for rapid characterization of earthquake damage and effects using the 32 million "earthquake" tweets that the system has so far amassed. Initial results show potential for a correlation between characteristic responses and shaking level. For example, tweets containing the word "terremoto" were common following the MMI VII shaking produced by the April 1, 2014 M8.2 Iquique, Chile earthquake whereas a widely-tweeted deep-focus M5.2 north of Santiago, Chile on April 4, 2014 produced MMI VI shaking and almost exclusively "temblor" tweets. We are also investigating the use of other
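
    The detection logic described above, flagging a surge of "earthquake" tweets against background chatter, can be caricatured in a few lines. The window lengths, the factor-of-10 threshold, and the minimum tweet count in the sketch below are assumptions, not the USGS detector's actual tuning.

      from collections import deque

      class TweetSpikeDetector:
          """Toy detector: flag when the short-term tweet rate greatly
          exceeds the long-term background rate (parameters are assumed)."""
          def __init__(self, short_s=60, long_s=3600, factor=10.0, min_count=10):
              self.short, self.long = deque(), deque()
              self.short_s, self.long_s = short_s, long_s
              self.factor, self.min_count = factor, min_count

          def add_tweet(self, t: float) -> bool:
              self.short.append(t); self.long.append(t)
              while self.short and t - self.short[0] > self.short_s:
                  self.short.popleft()
              while self.long and t - self.long[0] > self.long_s:
                  self.long.popleft()
              short_rate = len(self.short) / self.short_s
              long_rate = max(len(self.long) / self.long_s, 1 / self.long_s)
              return (len(self.short) >= self.min_count
                      and short_rate > self.factor * long_rate)

      det = TweetSpikeDetector()
      quiet = [det.add_tweet(t) for t in range(0, 3600, 120)]      # background
      burst = [det.add_tweet(3600 + t / 10) for t in range(100)]   # felt event
      print(any(quiet), any(burst))   # False True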

  10. Earthquake watch

    USGS Publications Warehouse

    Hill, M.

    1976-01-01

    When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response held in San Francisco, called by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of State and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

  11. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena started to be considered by the GPS community as a way to mitigate GPS signal degradation over territories of earthquake preparation. The question is still open whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has proved that ionospheric anomalies registered before earthquakes are initiated by processes in the boundary layer of the atmosphere over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if we consider the final stage of earthquake preparation in a multidimensional space where every dimension is one of many precursors in an ensemble, and they are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland) within the framework of the ISSI and ESA projects to identify the ionospheric precursors. They are also useful to determine all three parameters necessary for an earthquake forecast: the impending earthquake's epicenter position, expectation time and magnitude. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. Results obtained by the different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  12. Security Implications of Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Jha, B.; Rao, A.

    2016-12-01

    The increase in earthquakes induced or triggered by human activities motivates us to research how a malicious entity could weaponize earthquakes to cause damage. Specifically, we explore the feasibility of controlling the location, timing and magnitude of an earthquake by activating a fault via injection and production of fluids into the subsurface. Here, we investigate the relationship between the magnitude and trigger time of an induced earthquake and the well-to-fault distance. The relationship between magnitude and distance is important to determine the farthest striking distance from which one could intentionally activate a fault to cause a certain level of damage. We use our novel computational framework to model the coupled multi-physics processes of fluid flow and fault poromechanics. We use synthetic models representative of the New Madrid Seismic Zone and the San Andreas Fault Zone to assess the risk in the continental US. We fix injection and production flow rates of the wells and vary their locations. We simulate injection-induced Coulomb destabilization of faults and the evolution of fault slip under quasi-static deformation. We find that the effect of distance on the magnitude and trigger time is monotonic, nonlinear, and time-dependent. Evolution of the maximum Coulomb stress on the fault provides insights into the effect of distance on rupture nucleation and propagation. The damage potential of induced earthquakes can be maintained even at longer distances because of the balance between pressure diffusion and poroelastic stress transfer mechanisms. We conclude that computational modeling of induced earthquakes allows us to assess the feasibility of weaponizing earthquakes and to develop effective defense mechanisms against such attacks.
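
    The physics being coupled here can be illustrated with a far simpler model than the study's simulator: a constant-rate injector raising pore pressure via a line-source (Theis-type) solution, fed into the Coulomb failure criterion ΔCFF = Δτ + μ(Δσn + ΔP). All parameter values below are illustrative assumptions.

      import numpy as np
      from scipy.special import exp1

      def pressure_change(r_m, t_s, q=0.05, h=50.0, k=1e-14, mu_f=1e-3, S=1e-9):
          """Pore-pressure rise (Pa) at distance r and time t from a constant-
          rate injector (Theis solution): q in m^3/s, h reservoir thickness,
          k permeability, mu_f fluid viscosity, S specific storage."""
          D = k / (mu_f * S)               # hydraulic diffusivity (m^2/s)
          return q * mu_f / (4 * np.pi * k * h) * exp1(r_m ** 2 / (4 * D * t_s))

      mu_fric = 0.6
      # Ignoring shear and poroelastic normal-stress changes in this sketch,
      # dCFF reduces to mu * dP; the study models the full coupled problem.
      for r in (500.0, 2000.0, 5000.0):
          dP = pressure_change(r, t_s=365 * 86400)   # after one year
          print(f"r = {r/1000:4.1f} km  dCFF ~ {mu_fric * dP / 1e6:6.3f} MPa")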

  13. Review of variations in Mw < 7 earthquake motions on position and TEC (Mw = 6.5 Aegean Sea earthquake sample)

    NASA Astrophysics Data System (ADS)

    Yildirim, O.; Inyurt, S.; Mekik, C.

    2015-10-01

    Turkey is a country located in the mid-latitude zone and in which tectonic activity is intense. Recently, an earthquake of magnitude Mw 6.5 occurred offshore in the Aegean Sea on 24 May 2014 at 12:25 UTC and lasted approximately 40 s. This earthquake was also felt in Greece, Romania and Bulgaria in addition to Turkey. In recent years, seismic-origin ionospheric anomaly detection studies have been carried out with TEC (Total Electron Content) generated from GNSS (Global Navigation Satellite System) signals, and the findings have been reported. In this study, TEC and positional variations have been examined separately for the earthquake which occurred in the Aegean Sea, and the correlation of the ionospheric variation with the positional variation has been investigated. For this purpose, a total of fifteen stations have been used, comprising four CORS-TR stations in the seismic zone (AYVL, CANA, IPSA, YENC) together with IGS and EUREF stations. The ionospheric and positional variations of the AYVL, CANA, IPSA and YENC stations have been examined with Bernese v5.0 software. When the PPP-TEC values produced by the analysis are examined, it is seen that at the four stations located in Turkey, three days before the earthquake at 08:00 and 10:00 UTC, the TEC values were approximately 4 TECU above the upper-limit TEC value. At the same stations, one day before the earthquake at 06:00, 08:00 and 10:00 UTC, the TEC values were approximately 5 TECU below the lower-limit TEC value. On the other hand, the GIM-TEC values published by the CODE center have been examined. At all stations, three days before the earthquake, the TEC values at 08:00 and 10:00 UTC were approximately 2 TECU above the upper limit, and one day before the earthquake at 06:00, 08:00 and 10:00 UTC, the TEC values were approximately 4 TECU below the lower-limit TEC value. Again, by using the same
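
    The upper/lower-limit test referred to throughout this abstract is typically a running median with bounds scaled by the interquartile range. A minimal sketch follows; the window length and scale factor are arbitrary illustrative choices, not the study's settings.

      import numpy as np

      def tec_anomalies(tec, window=15, k=1.5):
          """Flag TEC samples outside running median +/- k*IQR bounds, the
          common upper/lower-limit test behind statements like
          '4 TECU above the upper limit'."""
          tec = np.asarray(tec)
          flags = np.zeros(tec.size, dtype=bool)
          half = window // 2
          for i in range(tec.size):
              seg = tec[max(0, i - half): i + half + 1]
              med = np.median(seg)
              iqr = np.subtract(*np.percentile(seg, [75, 25]))
              flags[i] = abs(tec[i] - med) > k * iqr
          return flags

      # Toy daily-TEC series with one 4 TECU excursion at index 60
      # (a few noise-induced flags may also appear).
      tec = 20 + np.sin(np.linspace(0, 6 * np.pi, 90)) + 0.3 * np.random.randn(90)
      tec[60] += 4.0
      print(np.where(tec_anomalies(tec))[0])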

  14. Earthquake Early Warning ShakeAlert System: Testing and certification platform

    USGS Publications Warehouse

    Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah

    2017-01-01

    Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has

  15. Testing an Earthquake Prediction Algorithm: The 2016 New Zealand and Chile Earthquakes

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2017-05-01

    The 13 November 2016, M7.8, 54 km NNE of Amberley, New Zealand and the 25 December 2016, M7.6, 42 km SW of Puerto Quellon, Chile earthquakes happened outside the area of the ongoing real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades, the level of registration of earthquakes worldwide has grown significantly and by now is sufficient for the diagnosis of times of increased probability (TIPs) by the M8 algorithm over the entire territory of New Zealand and of Southern Chile below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm the statistically proven high confidence of the M8-MSc predictions and (2) conclude that the territory of the Global Test of the M8 and MSc algorithms could be expanded in an apparently necessary revision of the 1992 settings.

  16. Predecessors of the giant 1960 Chile earthquake

    USGS Publications Warehouse

    Cisternas, M.; Atwater, B.F.; Torrejon, F.; Sawai, Y.; Machuca, G.; Lagos, M.; Eipert, A.; Youlton, C.; Salgado, I.; Kamataki, T.; Shishikura, M.; Rajendran, C.P.; Malik, J.K.; Rizal, Y.; Husni, M.

    2005-01-01

    It is commonly thought that the longer the time since last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended. © 2005 Nature Publishing Group.

  17. Predecessors of the giant 1960 Chile earthquake.

    PubMed

    Cisternas, Marco; Atwater, Brian F; Torrejón, Fernando; Sawai, Yuki; Machuca, Gonzalo; Lagos, Marcelo; Eipert, Annaliese; Youlton, Cristián; Salgado, Ignacio; Kamataki, Takanobu; Shishikura, Masanobu; Rajendran, C P; Malik, Javed K; Rizal, Yan; Husni, Muhammad

    2005-09-15

    It is commonly thought that the longer the time since last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended.

  18. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise, hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a

  19. The analysis results of EEWS(Earthquake Early Warning System) about Iksan(Ml4.3) and Ulsan(Ml5.0) earthquakes in Korea

    NASA Astrophysics Data System (ADS)

    Park, J. H.; Chi, H. C.; Lim, I. S.; Seong, Y. J.; Pak, J.

    2016-12-01

    EEW (Earthquake Early Warning) service to the public has been officially operated by KMA (Korea Meteorological Administration) since 2015 in Korea. For KMA's official EEW service, KIGAM has adopted ElarmS from UC Berkeley BSL and modified the local magnitude relation, 1-D travel time curves and association procedures for real-time waveforms from about 160 seismic stations of KMA and KIGAM. We have checked the performance of the EEWS (Earthquake Early Warning System) by reviewing two moderate-size earthquakes: one is the Iksan earthquake (Ml 4.3) inside the network, and the other is the Ulsan earthquake (Ml 5.0), which occurred in the sea southeast of Korea, outside the network. For the Iksan earthquake, the first trigger, at station NPR, took 2.3 s, and the BUY and JEO2 stations were associated to produce the first event version 10.07 s after the origin time. Because the epicentral distance of station JEO2 is about 30 km and the estimated travel time is 6.2 s, the delay including transmission and processing is estimated as 3.87 s, assuming a P-wave velocity of 5 km/s and a focal depth of 8 km. The first magnitude estimate was M4.9, a little larger than the Ml 4.3 determined by KIGAM. After adding triggers from 3 more stations (CHO, KMSA, PORA), the estimated magnitude became M4.6, and the final estimate settled at M4.3 with 10 stations. In the case of Ulsan, the first trigger took 11.04 s and the first alert, with 3 stations, came 14.8 s after the origin time (OT). The first magnitude estimate was M5.2; however, the difference between the first EEW epicenter and the final manual result was about 63 km due to the poor azimuthal coverage outside the seismic network. At 16.2 s after OT the fourth station, YSB, was used to update the location to within 6 km of the manual result, with magnitude 5.0, and the location and magnitude were stable with more stations. The Ulsan earthquake was the first case announced to the public by the EEWS, and the process and result were successful; however, we have to
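
    The delay-time arithmetic quoted for station JEO2 is easy to verify in a couple of lines (the abstract rounds the travel time to 6.2 s, giving 3.87 s):

      import math

      # Travel time = hypotenuse of (epicentral distance, focal depth) / vP.
      epi_km, depth_km, vp = 30.0, 8.0, 5.0
      first_version_s = 10.07                      # first event version after origin
      travel = math.hypot(epi_km, depth_km) / vp   # ~6.21 s
      print(round(travel, 2), round(first_version_s - travel, 2))  # 6.21 3.86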

  20. A Statistical Correlation Between Low L-shell Electrons Measured by NOAA Satellites and Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Fidani, C.

    2015-12-01

    More than 11 years of data from the Medium Energy Protons Electrons Detector on the NOAA polar orbiting satellites were analyzed. Significant electron counting-rate fluctuations were evidenced during geomagnetically quiet periods by using a set of adiabatic coordinates. Electron counting rates were compared to earthquakes by defining a seismic-event L-shell, obtained by radially projecting the epicenter's geographical position to a given altitude. Counting-rate fluctuations were grouped in every satellite semi-orbit together with strong seismic events, and these were chosen with L-shell coordinates close to each other. Electron data from July 1998 to December 2011 were compared for nearly 1,800 earthquakes with magnitudes larger than or equal to 6, occurring worldwide. When considering the 30-100 keV energy channels of the vertical NOAA telescopes and earthquake epicenter projections at altitudes greater than 1,300 km, a 4-sigma correlation appeared in which the time of particle precipitation (Tpp) occurred 2-3 hours prior to the time of large seismic events (Teq). This is in physical agreement with the different correlation times obtained from past studies that considered particles with greater energies. The correlation suggests a 4-8 hour advance in preparedness for strong earthquakes influencing the ionosphere. Considering this strong correlation between earthquakes and electron-rate fluctuations, and the hypothesis that such fluctuations originate from magnetic disturbances generated underground, a small-scale, low-cost experiment at ground level is advisable. Plans exist to perform one or more unconventional experiments around an earthquake-affected area by private investors in Italy.

  1. An original approach to fill the gap in the earthquake disaster experience - a proposal for 'the archive of the quake experience' -

    NASA Astrophysics Data System (ADS)

    Tanaka, Y.; Hirayama, Y.; Kuroda, S.; Yoshida, M.

    2015-12-01

    People without severe disaster experience inevitably forget even an extraordinary one like 3.11 as time advances. Therefore, to improve societal resilience, an ingenious attempt to keep people's memory of disaster from fading away is necessary. Since 2011, we have been carrying out earthquake disaster drills for residents of high-rise apartments, for schoolchildren, for citizens of coastal areas, etc. Using a portable earthquake simulator (1), the drill consists of three parts, the first: a short lecture explaining the characteristic quakes Japanese people should expect in the future; the second: a reliving experience of major earthquakes that have hit Japan since 1995; and the third: a short lecture on preparations that can be made at home and/or in an office. For the quake experience, although it is two-dimensional movement, real earthquake observation records are used to control the simulator so that people can relive the experience of different kinds of earthquakes, including the long-period motion of skyscrapers. Feedback on the drill is always positive because participants understand that reliving the quake experience with proper lectures is one of the best methods to communicate past disasters to their families and to pass them on to the next generation. There are several kinds of disaster archives serving as inheritance, such as pictures, movies, documents, interviews, and so on. In addition to them, here we propose to construct 'the archive of the quake experience', which compiles observed data ready to be relived with the simulator. We would like to show some movies of our quake drill in the presentation. Reference: (1) Kuroda, S. et al. (2012), "Development of portable earthquake simulator for enlightenment of disaster preparedness", 15th World Conference on Earthquake Engineering 2012, Vol. 12, 9412-9420.

  2. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
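
    The empirical model's functional form is documented as a lognormal cumulative distribution of shaking intensity. The sketch below shows that form with placeholder parameters; PAGER fits θ and β per country from past loss data, so the numbers here are illustrative only.

      import math

      def fatality_rate(mmi: float, theta: float = 13.2, beta: float = 0.17) -> float:
          """Empirical-model form: nu(S) = Phi(ln(S/theta)/beta), a lognormal
          CDF of shaking intensity. theta and beta are placeholder values."""
          return 0.5 * (1 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2))))

      # Expected deaths = sum over intensity bins of exposed population * rate
      # (the exposure numbers are invented for illustration).
      exposure = {6: 2_000_000, 7: 500_000, 8: 80_000, 9: 10_000}
      print(sum(pop * fatality_rate(mmi) for mmi, pop in exposure.items()))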

  3. Applying time-reverse-imaging techniques to locate individual low-frequency earthquakes on the San Andreas fault near Cholame, California

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.

    2013-12-01

    Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1-5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcast waveforms result from similar wave fields at the source origin, we search for wave-field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl-field energy amplitudes of each rebroadcast seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location of the rebroadcast event. Horizontal location errors based on the spatial extent of the highest 10% of cross-correlation coefficients are on the order of 4 km, and vertical errors on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) to that obtained by standard ray-tracing methods. We also compare the event locations to an LFE catalog that locates the LFEs from stacked waveforms of repeated LFEs

  4. Tridimensional reconstruction of the Co-Seismic Ionospheric Disturbance around the time of 2015 Nepal earthquake

    NASA Astrophysics Data System (ADS)

    Kong, Jian; Yao, Yibin; Zhou, Chen; Liu, Yi; Zhai, Changzhi; Wang, Zemin; Liu, Lei

    2018-01-01

    The Co-Seismic Ionospheric Disturbance of the 2015 Nepal earthquake is analyzed in this paper. GNSS data are used to obtain the Satellite-Station TEC sequences. After removing the de-trended TEC variation, a clear ionospheric disturbance was observed 10 min after the earthquake, while the geomagnetic conditions, solar activity, and weather condition remained calm according to the Kp, Dst, F10.7 indices and meteorological records during the period of interest. Computerized ionosphere tomography (CIT) is then used to present the tridimensional ionosphere variation with a 10-min time resolution. The CIT results indicate that (1) the disturbance of the ionospheric electron density above the epicenter during the 2015 Nepal earthquake is confined at a relatively low altitude (approximately 150-300 km); (2) the ionospheric disturbances on the west side and east sides of the epicenter are precisely opposite. A newly established electric field penetration model of the lithosphere-atmosphere-ionosphere coupling is used to investigate the potential physical mechanism.

  5. Tectonics earthquake distribution pattern analysis based focal mechanisms (Case study Sulawesi Island, 1993–2012)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismullah M, Muh. Fawzy; Lantu; Aswad, Sabrianto

    Indonesia is the meeting zone of three main world plates: the Eurasian Plate, the Pacific Plate, and the Indo-Australian Plate. Therefore, Indonesia has a high degree of seismicity, and Sulawesi is one of the regions with a high seismicity level. Earthquake centers lie in fault zones, so earthquake data give a tectonic visualization of a certain place. The purpose of this research is to identify the Sulawesi tectonic model by using earthquake data from 1993 to 2012. The data used in this research are earthquake data consisting of: the origin time, the epicenter coordinates, the depth, the magnitude and the fault parameters (strike, dip and slip). The result of the research shows that there are many active structures responsible for earthquakes in Sulawesi. The active structures are the Walannae Fault, Lawanopo Fault, Matano Fault, Palu-Koro Fault, Batui Fault and the Moluccas Sea Double Subduction. The focal mechanisms also show that the Walannae Fault, Batui Fault and Moluccas Sea Double Subduction are reverse faults, while the Lawanopo Fault, Matano Fault and Palu-Koro Fault are strike-slip faults.

  6. Time-lapse changes in velocity and anisotropy in Japan's near surface after the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Snieder, R.; Nakata, N.

    2012-12-01

    A strong-motion recording network, KiK-net, helps us to monitor temporal changes in the near surface in Japan. Each KiK-net station has two seismometers, one at the free surface and one in a borehole a few hundred meters deep, and we can retrieve a traveling wave from the borehole receiver to the surface receiver by applying deconvolution-based seismic interferometry. KiK-net recorded the 2011 Tohoku earthquake, one of the largest earthquakes in recent history, as well as seismicity around the time of the main shock. Using records of this seismicity and computing mean values of near-surface shear-wave velocities in the periods January 1-March 10 and March 12-May 26, 2011, we detect about a 5% reduction in the velocity after the Tohoku earthquake. The area of the velocity reduction is about 1,200 km wide, which is much wider than in earlier studies reporting velocity reductions after large earthquakes. The reduction partly recovers with time. We can also estimate the azimuthal anisotropy by detecting shear-wave splitting after applying seismic interferometry. Estimating mean values over the same periods as for the velocity, we find that the strength of anisotropy increased in most parts of northeastern Japan, but fast shear-wave polarization directions in the near surface did not significantly change. The changes in anisotropy and velocity are generally correlated, especially in northeastern Honshu (the main island of Japan).
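
    The deconvolution step that retrieves the borehole-to-surface traveling wave is a spectral division with a water level. A minimal sketch on synthetic data follows (windowing and tapering details omitted); the delay and sampling rate are invented for the demonstration.

      import numpy as np

      def deconvolve(surface, borehole, eps=0.01):
          """Water-level spectral division: retrieve the wave traveling from
          the borehole sensor to the surface sensor."""
          S, B = np.fft.rfft(surface), np.fft.rfft(borehole)
          denom = np.maximum(np.abs(B) ** 2, eps * np.mean(np.abs(B) ** 2))
          return np.fft.irfft(S * np.conj(B) / denom, n=surface.size)

      # Synthetic check: surface record = borehole record delayed by 0.25 s.
      fs, delay = 100, 0.25
      bh = np.random.randn(2000)
      surf = np.roll(bh, int(delay * fs))
      d = deconvolve(surf, bh)
      # Peak position gives the one-way travel time; shear velocity would
      # then follow as borehole depth divided by this time.
      print(np.argmax(d) / fs)             # ~0.25 s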

  7. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    NASA Astrophysics Data System (ADS)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A², between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.

  8. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    NASA Astrophysics Data System (ADS)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), 1934 Bihar-Nepal earthquake (Mw 8.2), 1950 Assam earthquake (Mw 8.4), 2005 Kashmir (Mw 7.6), and 2015 Gorkha (Mw 7.8), is testimony to the ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return periods. Some of the large magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is certainly complex to precisely determine the extent of surface rupture of these earthquakes, and of those events which occurred during historic times. In this paper, we have compiled the paleoseismological data and recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compared the earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches with an average rupture length of 300 km, limiting Mw to 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. It has been identified that large magnitude Himalayan earthquakes, such as the 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam events, occurred within a time frame of 45 years. If such events are dated, there is a high possibility that, within a range of ±50 years, they may be considered as remnants of one giant earthquake rupturing the entire Himalayan arc, therefore leading to an overestimation of the seismic hazard scenario in the Himalaya.

  9. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
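
    A minimal sketch of the detection idea, assuming a hypothetical list of keyword-tweet timestamps (in seconds) and the background rate quoted in the abstract; the thresholds and function names are illustrative, not the USGS implementation.

        import numpy as np

        def first_spike(tweet_times_s, background_per_hr=1.0, window_s=60.0,
                        min_count=10):
            # Bin keyword tweets into one-minute windows and flag the first
            # window whose count far exceeds the background expectation.
            times = np.sort(np.asarray(tweet_times_s, dtype=float))
            edges = np.arange(0.0, times[-1] + window_s, window_s)
            counts, _ = np.histogram(times, bins=edges)
            expected = background_per_hr * window_s / 3600.0   # << 1 per window
            hits = np.nonzero(counts >= max(min_count, 10 * expected))[0]
            return None if hits.size == 0 else edges[hits[0]]  # detection time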

  10. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  11. Earthquakes induced by fluid injection and explosion

    USGS Publications Warehouse

    Healy, J.H.; Hamilton, R.M.; Raleigh, C.B.

    1970-01-01

    Earthquakes generated by fluid injection near Denver, Colorado, are compared with earthquakes triggered by nuclear explosion at the Nevada Test Site. Spatial distributions of the earthquakes in both cases are compatible with the hypothesis that variation of fluid pressure in preexisting fractures controls the time distribution of the seismic events in an "aftershock" sequence. We suggest that the fluid pressure changes may also control the distribution in time and space of natural aftershock sequences and of earthquakes that have been reported near large reservoirs. © 1970.

  12. Seismicity map tools for earthquake studies

    NASA Astrophysics Data System (ADS)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools, for use within Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript and MySQL) together with the new tools, using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data within Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw line segments on the map, multiple horizontal and vertical straight lines, and multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift between segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, and calculation of the b-value (as sketched below), etc. What is novel about the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools, we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
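
    For the b-value calculation mentioned above, the standard maximum-likelihood estimator (Aki, 1965, with the usual half-bin correction for binned magnitudes) is short enough to sketch; the inputs here are hypothetical.

        import numpy as np

        def b_value(mags, mc, dm=0.1):
            # b = log10(e) / (mean(M) - (Mc - dM/2)) for magnitudes M >= Mc.
            m = np.asarray(mags, dtype=float)
            m = m[m >= mc]
            return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))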

  13. Very low frequency earthquakes in Tohoku-Oki recorded by short-period ocean bottom seismographs

    NASA Astrophysics Data System (ADS)

    Takahashi, H.; Hino, R.; Ohta, Y.; Uchida, N.; Suzuki, S.; Shinohara, M.; Nakatani, Y.; Matsuzawa, T.

    2017-12-01

    Various kinds of slow earthquakes have been found along many plate boundary zones in the world (Obara and Kato, 2016). In the Tohoku subduction zone, where slow event activity had been considered insignificant, slow slip events associated with low frequency tremors were identified prior to the 2011 Tohoku-Oki earthquake based on seafloor geodetic and seismological observations. Recently, very low frequency earthquakes (VLFEs) have been discovered by inspecting onshore broad-band seismograms. Although the activity of the detected VLFEs is low and the VLFEs occurred in a limited area, VLFEs tend to occur successively in a short time period. In this study, we try to characterize the VLFEs along the Japan Trench based on seismograms obtained by instruments deployed near the estimated epicenters. Temporary seismic observations using Ocean Bottom Seismometers (OBSs) have been carried out several times after the 2011 Tohoku-Oki earthquake, and several VLFE episodes were observed during the OBS deployments. Amplitudes of the horizontal component OBS seismograms grow shortly after the estimated origin times of the VLFEs identified on the onshore seismograms, even though the sensors are 4.5 Hz geophones. Although it is difficult to recognize clear P- or S-wave onsets, the correspondence between the order of arrival of discernible wave packets and their amplitudes suggests that these wave packets are seismic signals radiated from the VLFE sources. The OBSs detect regular local earthquakes of similar magnitudes to the VLFEs. Signal powers of the possible VLFE seismograms are comparable to those of the regular earthquakes at frequencies < 1 Hz, while a significant deficiency in higher-frequency components is observed.

  14. A model of return intervals between earthquake events

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger

    2016-06-01

    Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical of anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
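
    The model in the last two sentences is compact enough to simulate directly. The sketch below (parameter values are ours, chosen only for illustration) draws mainshocks from a renewal process with Pareto waiting times and attaches Omori-rate aftershocks by thinning a homogeneous Poisson process.

        import numpy as np

        rng = np.random.default_rng(0)

        def mainshocks(n, alpha=1.5, t_min=1.0):
            # Renewal process: power-law (Pareto) waiting times between events.
            waits = t_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)
            return np.cumsum(waits)

        def aftershocks(t0, k=5.0, c=0.1, p=1.1, t_max=50.0):
            # Nonhomogeneous Poisson process with Omori rate k/(t+c)^p,
            # sampled by thinning; the maximum rate occurs at t = 0.
            rate_max, t, out = k / c**p, 0.0, []
            while True:
                t += rng.exponential(1.0 / rate_max)
                if t >= t_max:
                    return np.array(out)
                if rng.random() < (k / (t + c) ** p) / rate_max:
                    out.append(t0 + t)

        mains = mainshocks(200)
        catalog = np.sort(np.concatenate([mains] + [aftershocks(t) for t in mains]))
        intervals = np.diff(catalog)   # return intervals to feed the analyses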

  15. Major earthquakes recorded by Speleothems in Midwestern U.S. caves

    USGS Publications Warehouse

    Panno, S.V.; Lundstrom, C.C.; Hackley, Keith C.; Curry, B. Brandon; Fouke, B.W.; Zhang, Z.

    2009-01-01

    Historic earthquakes generated by the New Madrid seismic zone represent some of the largest recorded in the United States, yet prehistoric events are recognized only through deformation in late Wisconsin- to Holocene-age, near-surface sediments (liquefaction, monoclinal folding, and changes in river meanders). In this article, we show that speleothems in caves of southwestern Illinois and southeastern Missouri may constitute a previously unrecognized recorder of large earthquakes in the U.S. midcontinent region. The timing of the initiation and regrowth of stalagmites in southwestern Illinois and southeastern Missouri caves is consistent with the historic and prehistoric record of several known seismic events in the U.S. midcontinent region. We conclude that dating the initiation of original stalagmite growth and later post-earthquake rejuvenation constitutes a new paleoseismic method that has the potential to be applied to any region of the world in the vicinity of major seismic zones where caves exist. Use of this technique could expand the geographical distribution of paleoseismic data, document prehistoric earthquakes, and help improve interpretations of paleoearthquakes.

  16. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened, and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write spreadsheets, as sketched below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year or changes over their career. Thus saying "the chance of
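
    The first choice, a time-independent probability, fits in two lines of Python; the recurrence values below are hypothetical stand-ins for the full-record and within-cluster rates discussed in the abstract.

        import math

        def p_poisson(mean_recurrence_yr, horizon_yr=50.0):
            # Probability of at least one event in the horizon under a
            # time-independent (Poisson) model.
            return 1.0 - math.exp(-horizon_yr / mean_recurrence_yr)

        print(round(p_poisson(500.0), 2))  # ~0.10: the "1-in-10" style figure
        print(round(p_poisson(150.0), 2))  # ~0.28: a shorter, in-cluster recurrence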

  17. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event, and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  18. GPS Time Series Analysis of Southern California Associated with the 2010 M7.2 El Mayor/Cucapah Earthquake

    NASA Technical Reports Server (NTRS)

    Granat, Robert; Donnellan, Andrea

    2011-01-01

    The magnitude 7.2 El Mayor/Cucapah earthquake that occurred in Mexico on April 4, 2010 was well instrumented with continuous GPS stations in California. Large offsets were observed at the GPS stations as a result of deformation from the earthquake, providing information about the co-seismic fault slip as well as fault slip from large aftershocks. Information can also be obtained from the position time series at each station.

  19. Earthquake hazards: a national threat

    USGS Publications Warehouse

    ,

    2006-01-01

    Earthquakes are one of the most costly natural hazards faced by the Nation, posing a significant risk to 75 million Americans in 39 States. The risks that earthquakes pose to society, including death, injury, and economic loss, can be greatly reduced by (1) better planning, construction, and mitigation practices before earthquakes happen, and (2) providing critical and timely information to improve response after they occur. As part of the multi-agency National Earthquake Hazards Reduction Program, the U.S. Geological Survey (USGS) has the lead Federal responsibility to provide notification of earthquakes in order to enhance public safety and to reduce losses through effective forecasts based on the best possible scientific information.

  20. Earthquake swarm in the non-volcanic area north of Harrat Lunayyir, western Saudi Arabia: observations and imaging

    NASA Astrophysics Data System (ADS)

    Youssof, M.; Mai, P. M.; Parisi, L.; Tang, Z.; Zahran, H. M.; El-Hadidy, S. Y.; Al-Raddadi, W.; Sami, M.; El-Hadidy, M. S. Y.

    2017-12-01

    We report on an unusual earthquake swarm in a non-volcanic area of western Saudi Arabia. Since March 2017, hundreds of earthquakes have been recorded, reaching magnitude Ml 3.7, occurring within a very narrowly defined rock volume. The seismicity is shallow, mostly between 4 and 8 km depth, with some events reaching as deep as 16 km. One set of events aligns into a well-defined horizontal tube 2 km high, 1 km wide, and 4-5 km in E-W extent. Other event clusters exist, but are less well defined. The focal mechanism solutions of the largest earthquakes indicate normal faulting, in agreement with the regional stress field. The earthquake swarm occurs 75 km NW of Harrat Lunayyir. However, the area of interest does not seem to be associated with the well-known volcanic area of Harrat Lunayyir, which experienced a magmatic dike intrusion in 2009 with intense seismic activity (including a surface-rupturing Mw 5.7 earthquake). Furthermore, the study area is characterized by a complex shear system, which hosts gold mineralization. The exact origin of the swarm sequence is therefore enigmatic, as it is the first of its kind in this region. Using continuous seismological data recorded by the Saudi Geological Survey (SGS), which operates three permanent seismic stations and a temporary network of 11 broadband sensors, we analyze the seismic patterns in space and time. For the verified detected events, we assemble the body-wave arrival times, which are inverted for velocity structure along with event hypocenters to investigate possible causes of this swarm sequence, that is, whether the activity is of tectonic or hydrothermal origin.

  1. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, caused by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  2. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on the model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults that are far apart may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
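
    The Coulomb stress transfer invoked here reduces to a one-line formula; the sketch below states the usual sign convention (an assumption on our part, since the abstract does not give its parameterization).

        def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            # dCFS = d_tau + mu' * d_sigma_n, with d_tau the shear stress change
            # resolved in the slip direction and d_sigma_n positive for
            # unclamping; dCFS > 0 brings the receiver fault closer to failure.
            return d_tau + mu_eff * d_sigma_n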

  3. Post-earthquake building safety inspection: Lessons from the Canterbury, New Zealand, earthquakes

    USGS Publications Warehouse

    Marshall, J.; Jaiswal, Kishor; Gould, N.; Turner, F.; Lizundia, B.; Barnes, J.

    2013-01-01

    The authors discuss some of the unique aspects and lessons of the New Zealand post-earthquake building safety inspection program that was implemented following the Canterbury earthquake sequence of 2010–2011. The post-event safety assessment program was one of the largest and longest programs undertaken in recent times anywhere in the world. The effort engaged hundreds of engineering professionals throughout the country, and also sought expertise from outside, to perform post-earthquake structural safety inspections of more than 100,000 buildings in the city of Christchurch and the surrounding suburbs. While the building safety inspection procedure implemented was analogous to the ATC 20 program in the United States, many modifications were proposed and implemented in order to assess the large number of buildings that were subjected to strong and variable shaking during a period of two years. This note discusses some of the key aspects of the post-earthquake building safety inspection program and summarizes important lessons that can improve future earthquake response.

  4. A Link between ORC-Origin Binding Mechanisms and Origin Activation Time Revealed in Budding Yeast

    PubMed Central

    Hoggard, Timothy; Shor, Erika; Müller, Carolin A.; Nieduszynski, Conrad A.; Fox, Catherine A.

    2013-01-01

    Eukaryotic DNA replication origins are selected in G1-phase when the origin recognition complex (ORC) binds chromosomal positions and triggers molecular events culminating in the initiation of DNA replication (a.k.a. origin firing) during S-phase. Each chromosome uses multiple origins for its duplication, and each origin fires at a characteristic time during S-phase, creating a cell-type specific genome replication pattern relevant to differentiation and genome stability. It is unclear whether ORC-origin interactions are relevant to origin activation time. We applied a novel genome-wide strategy to classify origins in the model eukaryote Saccharomyces cerevisiae based on the types of molecular interactions used for ORC-origin binding. Specifically, origins were classified as DNA-dependent when the strength of ORC-origin binding in vivo could be explained by the affinity of ORC for origin DNA in vitro, and, conversely, as ‘chromatin-dependent’ when the ORC-DNA interaction in vitro was insufficient to explain the strength of ORC-origin binding in vivo. These two origin classes differed in terms of nucleosome architecture and dependence on origin-flanking sequences in plasmid replication assays, consistent with local features of chromatin promoting ORC binding at ‘chromatin-dependent’ origins. Finally, the ‘chromatin-dependent’ class was enriched for origins that fire early in S-phase, while the DNA-dependent class was enriched for later firing origins. Conversely, the latest firing origins showed a positive association with the ORC-origin DNA paradigm for normal levels of ORC binding, whereas the earliest firing origins did not. These data reveal a novel association between ORC-origin binding mechanisms and the regulation of origin activation time. PMID:24068963

  5. Napa earthquake: An earthquake in a highly connected world

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.

    2014-12-01

    The Napa earthquake recently occurred close to Silicon Valley. This makes it a good candidate for studying what social networks, wearable objects and website traffic analysis (flashsourcing) can tell us about the way eyewitnesses react to ground shaking. In the first part, we compare the ratio of people publishing tweets and the ratio of people visiting the EMSC (European Mediterranean Seismological Centre) real-time information website in the first minutes following the earthquake with the results published by Jawbone, which show that the proportion of people waking up depends (naturally) on the epicentral distance. The key question is whether the proportions of inhabitants tweeting or visiting the EMSC website are similar to the proportion of people waking up as shown by the Jawbone data. If so, this supports the premise that all methods provide a reliable image of the relative ratio of people waking up. The second part of the study focuses on the reaction time for both Twitter and EMSC website access. We show, similarly to what was demonstrated for the Mineral, Virginia, earthquake (Bossu et al., 2014), that hit times on the EMSC website follow the propagation of the P waves and that 2 minutes of website traffic is sufficient to determine the epicentral location of an earthquake on the other side of the Atlantic. We also compare these with the publication times of messages on Twitter. Finally, we check whether the number of tweets and the number of visitors relative to the number of inhabitants are correlated with the local level of shaking. Together these results will tell us whether the reaction of eyewitnesses to ground shaking as observed through Twitter and the EMSC website analysis is tool-specific (i.e. specific to Twitter or the EMSC website) or whether it reflects people's actual reactions.

  6. Centrality in earthquake multiplex networks

    NASA Astrophysics Data System (ADS)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

    Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
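
    A minimal sketch of the per-layer computation, assuming the cell-sequence construction described above (hypothetical inputs; power iteration stands in for whatever eigen-solver the authors used).

        import numpy as np

        def layer_adjacency(cell_sequence, n_cells):
            # One layer per time window: link cells hosting consecutive events.
            a = np.zeros((n_cells, n_cells))
            for u, v in zip(cell_sequence[:-1], cell_sequence[1:]):
                a[u, v] += 1.0
                a[v, u] += 1.0
            return a

        def eigenvector_centrality(adj, n_iter=500, tol=1e-10):
            # Power iteration for the leading eigenvector of the adjacency matrix.
            x = np.ones(adj.shape[0]) / adj.shape[0]
            for _ in range(n_iter):
                y = adj @ x
                y /= np.linalg.norm(y)
                if np.linalg.norm(y - x) < tol:
                    return y
                x = y
            return x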

  7. CISN ShakeAlert: Improving the Virtual Seismologist (VS) earthquake early warning framework to provide faster, more robust warning information

    NASA Astrophysics Data System (ADS)

    Meier, M.; Cua, G. B.; Wiemer, S.; Fischer, M.

    2011-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) that uses observed phase arrivals, ground motion amplitudes and selected prior information to estimate earthquake magnitude, location and origin time, and to predict the distribution of peak ground motion throughout a region using envelope attenuation relationships. Implementation of the VS algorithm in California is an ongoing effort of the Swiss Seismological Service (SED) at ETH Zürich. VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) - that form the basis of the California Integrated Seismic Network ShakeAlert system, a prototype end-to-end EEW system that could potentially be implemented in California. The current prototype version of VS in California requires picks at 4 stations to initiate an event declaration. On average, taking into account data latency, variable station distribution, and processing time, this initial estimate is available about 20 seconds after the earthquake origin time, corresponding to a blind zone of about 70 km around the epicenter which would receive no warning, but where warning would be most useful. To increase the available warning time, we want to produce EEW estimates faster (with fewer than 4 stations). However, working with fewer than 4 stations in our current approach would increase the number of false alerts, for which there is very little tolerance in a useful EEW system. We explore the use of back-azimuth estimates and the Voronoi-based concept of not-yet-arrived data for reducing false alerts in the earliest VS estimates. The concept of not-yet-arrived data was originally used to provide evolutionary location estimates in EEW (Horiuchi, 2005; Cua and Heaton, 2007; Satriano et al. 2008). However, it can also be applied in discriminating between earthquake and non-earthquake signals. For real earthquakes, the
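
    The not-yet-arrived constraint has a simple geometric core: under a uniform-velocity assumption, the epicenter of a just-detected event must lie closer to the first triggered station than to any still-quiet one, i.e. in that station's Voronoi cell. The sketch below is our simplification of that concept, not the ShakeAlert code.

        import numpy as np

        def consistent_epicenters(grid_xy, stations_xy, first_idx):
            # Distances from every candidate grid point to every station.
            d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
            # Keep grid points whose nearest station is the first-triggered one.
            return grid_xy[np.argmin(d, axis=1) == first_idx]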

  8. Application of the region-time-length algorithm to study of earthquake precursors in the Thailand-Laos-Myanmar borders

    NASA Astrophysics Data System (ADS)

    Puangjaktha, P.; Pailoplee, S.

    2018-04-01

    In order to examine precursory seismic quiescence before upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders were analyzed using the statistical Region-Time-Length (RTL) algorithm. The earthquake data were obtained from the International Seismological Centre, after which the homogeneity and completeness of the catalogue were improved. After iterative tests with different values of the r0 and t0 parameters, r0 = 120 km and t0 = 2 yr yielded reasonable estimates of anomalous RTL scores, in both temporal variation and spatial distribution, a few years prior to five out of eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the underlying stochastic process indicated that the RTL scores obtained here do not reflect artificial or random phenomena. The prospective earthquake sources identified here should therefore be recognized and effective mitigation plans provided.
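
    A simplified version of the RTL score (following the general form of Sobolev and Tyupkin; the magnitude-to-rupture-length relation and the cutoffs here are our assumptions, and the background detrending used in practice is omitted) looks like this:

        import numpy as np

        def rtl_score(t_yr, x_km, y_km, events, r0=120.0, t0=2.0):
            # events: array of rows (t_i in yr, x_i in km, y_i in km, mag_i).
            past = events[events[:, 0] < t_yr]
            r = np.hypot(past[:, 1] - x_km, past[:, 2] - y_km)
            keep = r < 2.0 * r0                    # spatial cutoff
            r = np.maximum(r[keep], 1.0)           # avoid division by zero
            ti, m = past[keep, 0], past[keep, 3]
            l = 10.0 ** (0.5 * m - 1.8)            # rupture length (km), empirical
            R = np.sum(np.exp(-r / r0))            # region term
            T = np.sum(np.exp(-(t_yr - ti) / t0))  # time term
            L = np.sum(l / r)                      # length term
            return R * T * L                       # quiescence: anomalously low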

  9. Simulating Earthquakes for Science and Society: New Earthquake Visualizations Ideal for Use in Science Communication

    NASA Astrophysics Data System (ADS)

    de Groot, R. M.; Benthien, M. L.

    2006-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. The resulting visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Such visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should in theory make science accessible, provide a means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  10. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  11. Earthquake Hazard Analysis Methods: A Review

    NASA Astrophysics Data System (ADS)

    Sari, A. M.; Fakhrurrozi, A.

    2018-02-01

    Earthquakes are among the natural disasters with the most significant impacts in terms of risk and damage. Countries such as China, Japan, and Indonesia are located on actively moving continental plates and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, using the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and using remote sensing. In application, it is necessary to first review the effectiveness of each technique. Considering the efficiency of time and the accuracy of data, remote sensing is used as a reference for assessing earthquake hazard accurately and quickly, since only limited time is available for the right decision-making shortly after a disaster. Exposed areas and areas possibly vulnerable to earthquake hazards can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in the use of remote sensing as one of the methods in the assessment of earthquake risk and damage. Furthermore, the use of this technique is expected to be considered in designing policies for disaster management in particular, and can reduce the risk of natural disasters such as earthquakes in Indonesia.

  12. Future of Earthquake Early Warning: Quantifying Uncertainty and Making Fast Automated Decisions for Applications

    NASA Astrophysics Data System (ADS)

    Wu, Stephen

    Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to being practical, the availability of shorter-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theories from economics, to improve different aspects of EEW operation, including extending it to engineering applications. Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, which led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm based on an existing deterministic model to extend the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake. To overcome the challenge of uncertain information and short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that
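
    The decision-theoretic core of such a framework can be stated in a few lines; this is a generic expected-cost rule in our own notation, not the ePAD implementation.

        def should_act(p_exceed, cost_action, loss_unmitigated, loss_mitigated):
            # Act when the expected cost of acting (fixed cost plus residual
            # loss) is below the expected loss of doing nothing; p_exceed is
            # the probability that shaking exceeds the damage threshold.
            expected_if_act = cost_action + p_exceed * loss_mitigated
            expected_if_wait = p_exceed * loss_unmitigated
            return expected_if_act < expected_if_wait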

  13. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    USGS Publications Warehouse

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  14. Understanding and responding to earthquake hazards

    NASA Technical Reports Server (NTRS)

    Raymond, C. A.; Lundgren, P. R.; Madsen, S. N.; Rundle, J. B.

    2002-01-01

    Advances in understanding the earthquake cycle and in assessing earthquake hazards are of great importance. Dynamic earthquake hazard assessments resolved for a range of spatial and time scales will allow a more systematic approach to prioritizing the retrofitting of vulnerable structures, relocating populations at risk, protecting lifelines, preparing for disasters, and educating the public.

  15. Rupture imaging of the Mw 7.9 12 May 2008 Wenchuan earthquake from back projection of teleseismic P waves

    USGS Publications Warehouse

    Xu, Y.; Koper, K.D.; Sufri, O.; Zhu, L.; Hutko, Alexander R.

    2009-01-01

    The Mw 7.9 Wenchuan earthquake of 12 May 2008 was the most destructive Chinese earthquake since the 1976 Tangshan event. Tens of thousands of people were killed, hundreds of thousands were injured, and millions were left homeless. Here we infer the detailed rupture process of the Wenchuan earthquake by back-projecting teleseismic P energy from several arrays of seismometers. This technique has only recently become feasible and is potentially faster than traditional finite-fault inversion of teleseismic body waves; therefore, it may reduce the notification time to emergency response agencies. Using the IRIS DMC, we collected 255 vertical component broadband P waves at 30-95° from the epicenter. We found that at periods of 5 s and greater, nearly all of these P waves were coherent enough to be used in a global array. We applied a simple down-sampling heuristic to define a global subarray of 70 stations that reduced the asymmetry and sidelobes of the array response function (ARF). We also considered three regional subarrays of seismometers in Alaska, Australia, and Europe that had apertures less than 30° and P waves that were coherent to periods as short as 1 s. Individual ARFs for these subarrays were skewed toward the subarrays; however, the linear sum of the regional subarray beams at 1 s produced a symmetric ARF, similar to that of the groomed global subarray at 5 s. For both configurations we obtained the same rupture direction, rupture length, and rupture time. We found that the Wenchuan earthquake had three distinct pulses of high beam power at 0, 23, and 57 s after the origin time, with the pulse at 23 s being highest, and that it ruptured unilaterally to the northeast for about 300 km and 110 s, with an average speed of 2.8 km/s. It is possible that similar results can be determined for future large dip-slip earthquakes within 20-30 min of the origin time using relatively sparse global networks of seismometers such as those the USGS uses to locate
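
    The essence of the back-projection technique is shift-and-stack beamforming. The sketch below (hypothetical inputs; traces are assumed long enough for every shift) sums station traces aligned on predicted P travel times for each candidate grid point, so beam power peaks where and when the rupture radiates.

        import numpy as np

        def back_project(waveforms, delays_s, dt, stack_len):
            # waveforms: (n_sta, n_samp) P-wave traces; delays_s: (n_grid, n_sta)
            # predicted travel times from each grid point to each station.
            n_grid = delays_s.shape[0]
            power = np.zeros((n_grid, stack_len))
            for g in range(n_grid):
                beam = np.zeros(stack_len)
                for s, trace in enumerate(waveforms):
                    i0 = int(round(delays_s[g, s] / dt))
                    beam += trace[i0:i0 + stack_len]
                power[g] = beam ** 2   # beam power vs. time for this grid point
            return power               # its peaks trace the rupture in space-time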

  16. Estimates of the maximum time required to originate life

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.; Fogleman, Guy

    1989-01-01

    Fossils of the oldest microorganisms exist in 3.5 billion year old rocks, and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the time intervals between these impacts, intervals which lengthened with geologic time. Therefore, the maximum time required for the origination of life is the interval between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10^34 and 2 x 10^35 were able to vaporize the oceans, then using the most probable impact flux, the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that life may have originated more than once.

  17. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  18. Time-lag of the earthquake energy release between three seismic regions

    NASA Astrophysics Data System (ADS)

    Tsapanos, Theodoros M.; Liritzis, Ioannis

    1992-06-01

    Three complete data sets of strong earthquakes (M ≥ 5.5), which occurred in the seismic regions of Chile, Mexico and Kamchatka during the time period 1899-1985, have been used to test for the existence of a time-lag in the seismic energy release between these regions. The data sets were cross-correlated in order to determine whether any pair of them is correlated. For this purpose, statistical tests such as the t-test and Fisher's transformation, together with the probability distribution, have been applied to determine the significance of the obtained correlation coefficients. The results show that the time-lag between Chile and Kamchatka is -2, which means that Kamchatka precedes Chile by 2 years, with a correlation coefficient significant at the 99.80% level; there is a weak correlation between Kamchatka and Mexico and no correlation between Mexico and Chile.
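
    The lag analysis itself is a short computation; the sketch below (assuming two hypothetical annual energy-release series of equal length) returns the integer lag with the highest correlation coefficient, with the convention that a positive lag means the second series leads the first.

        import numpy as np

        def best_lag(a, b, max_lag=10):
            # Correlate a[t] with b[t - lag]; lag > 0 means b leads a.
            best = (0, -2.0)
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    x, y = a[lag:], b[: len(b) - lag]
                else:
                    x, y = a[:lag], b[-lag:]
                r = np.corrcoef(x, y)[0, 1]
                if r > best[1]:
                    best = (lag, r)
            return best   # (lag in years, correlation coefficient)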

  19. Global Instrumental Seismic Catalog: earthquake relocations for 1900-present

    NASA Astrophysics Data System (ADS)

    Villasenor, A.; Engdahl, E.; Storchak, D. A.; Bondar, I.

    2010-12-01

    We present the current status of our efforts to produce a set of homogeneous earthquake locations and improved focal depths towards the compilation of a Global Catalog of instrumentally recorded earthquakes that will be complete down to the lowest magnitude threshold possible on a global scale and for the time period considered. This project is currently being carried out under the auspices of GEM (Global Earthquake Model). The resulting earthquake catalog will be a fundamental dataset not only for earthquake risk modeling and assessment on a global scale, but also for a large number of studies such as global and regional seismotectonics; the rupture zones and return time of large, damaging earthquakes; the spatial-temporal pattern of moment release along seismic zones and faults etc. Our current goal is to re-locate all earthquakes with available station arrival data using the following magnitude thresholds: M5.5 for 1964-present, M6.25 for 1918-1963, M7.5 (complemented with significant events in continental regions) for 1900-1917. Phase arrival time data for earthquakes after 1963 are available in digital form from the International Seismological Centre (ISC). For earthquakes in the time period 1918-1963, phase data is obtained by scanning the printed International Seismological Summary (ISS) bulletins and applying optical character recognition routines. For earlier earthquakes we will collect phase data from individual station bulletins. We will illustrate some of the most significant results of this relocation effort, including aftershock distributions for large earthquakes, systematic differences in epicenter and depth with respect to previous location, examples of grossly mislocated events, etc.

  20. Prompt Assessment of Global Earthquakes for Response (PAGER): A System for Rapidly Determining the Impact of Earthquakes Worldwide

    USGS Publications Warehouse

    Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy

    2009-01-01

    Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.

  1. Earthquake Archaeology: a logical approach?

    NASA Astrophysics Data System (ADS)

    Stewart, I. S.; Buck, V. A.

    2001-12-01

    Ancient earthquakes can leave their mark in the mythical and literary accounts of ancient peoples, the stratigraphy of their site histories, and the structural integrity of their constructions. Within this broad cross-disciplinary tramping ground, earthquake geologists have tended to focus on those aspects of the cultural record that are most familiar to them: the physical effects of seismic deformation on ancient constructions. One of the core difficulties with this 'earthquake archaeology' approach is that recent attempts to isolate structural criteria that are diagnostic or strongly suggestive of a seismic origin are undermined by the recognition that signs of ancient seismicity are generally indistinguishable from those of non-seismic mechanisms (poor construction, adverse geotechnical conditions). We illustrate the difficulties and inconsistencies in currently proposed 'earthquake diagnostic' schemes by reference to two case studies of archaeoseismic damage in central Greece. The first concerns fallen columns at various Classical temple localities in mainland Greece (Nemea, Sounio, Olympia, Bassai) which, on the basis of observed structural criteria, are earthquake-induced but which are alternatively explained by archaeologists as the result of human disturbance. The second re-examines the Kyparissi site in the Atalanti region, almost a type example of a Classical stoa offset across a seismic surface fault, arguing instead for its deformation by ground instability. Finally, in highlighting the inherent ambiguity of archaeoseismic data, we consider the value of a logic-tree approach for qualifying and quantifying our uncertainties in seismic-hazard analysis.

  2. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  3. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  4. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
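
    For reference, a standard form of the rate- and state-dependent law summarized in these abstracts (the specific parameterization is not given there, so this is the common Dieterich "aging-law" version) is

        \mu = \mu_0 + a \ln(V / V_0) + b \ln(V_0 \theta / D_c),
        d\theta/dt = 1 - V \theta / D_c,

    where V is the slip rate, \theta the state variable, and a, b laboratory constants; steady-state sliding gives \theta = D_c / V, slip weakening occurs over the distance D_c, and time-dependent healing appears as growth of \theta during near-stationary contact.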

  5. Is there a basis for preferring characteristic earthquakes over a Gutenberg–Richter distribution in probabilistic earthquake forecasting?

    USGS Publications Warehouse

    Parsons, Thomas E.; Geist, Eric L.

    2009-01-01

    The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity: if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice, however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies the key data constraints used for earthquake forecasting as well as a characteristic model does. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
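
    For comparison, the Gutenberg–Richter alternative weighed in this abstract extrapolates the familiar magnitude-frequency law to an individual fault zone (textbook notation, not quoted from the paper):

```latex
\log_{10} N(\geq M) = a - bM,
```

    where N(\geq M) is the cumulative rate of earthquakes of magnitude M or larger and b is typically near 1; a characteristic model instead concentrates most of a fault segment's moment release in repeated events within a narrow magnitude range.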

  6. Earthquake Signal Visible in GRACE Data

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [Figure 1 removed for brevity; see original site.]

    This figure shows the effect of the December 2004 great Sumatra earthquake on the Earth's gravity field as observed by GRACE. The signal is expressed in terms of the relative acceleration of the two GRACE satellites, in this case a few nanometers per second squared, or about 1 billionth of the acceleration we experience every day at the Earth's surface. GRACE observations show comparable signals in the region of the earthquake.

    Other natural variations are also apparent in the expected places, whereas no other significant change would be expected in the region of the earthquake.

    GRACE, twin satellites launched in March 2002, are making detailed measurements of Earth's gravity field which will lead to discoveries about gravity and Earth's natural systems. These discoveries could have far-reaching benefits to society and the world's population.

  7. Aftershock sequence of ML6.1 earthquake in Sakhalin: recovery with waveform cross correlation

    NASA Astrophysics Data System (ADS)

    Kitov, Ivan; Konovalov, Alexey; Stepnov, Andrey; Turuntaev, Sergey

    2017-04-01

    The Sakhalin Island is characterized by relatively high seismic activity. The largest measured earthquake, Mw=7.0, occurred in 1995 near the town of Neftegorsk and was followed by a long-lasting aftershock sequence. Based on the results of our previous analysis of that sequence with the method of waveform cross correlation (WCC), we have recovered the aftershock sequence of the ML 6.1 earthquake that occurred on August 14, 2016 at 11:15:13.1 (UTC). The epicentre of this earthquake, estimated from near-regional data, has geographic coordinates 50.351N and 142.395E, with a focal depth of 9 km. The aftershock catalogue compiled by the eqaler.ru resource includes 133 events within 20 days of the main shock. We used P- and S-wave signals from the main shock and a few of the largest aftershocks in the catalogue as waveform templates. Cross correlation of continuous waveforms with these templates was carried out at the six closest seismic stations of the regional network, with four stations to the northeast and two stations to the southwest of the epicentre. For detection, we used the standard STA/LTA method with thresholds depending on seismic phase and station. The accuracy of onset time estimation by the STA/LTA detector based on the obtained CC-traces is close to a few samples at the 40 Hz sampling rate common to all stations. Arrival times of all detected signals were reduced to origin times using the observed travel times from the master events to the six stations. For a given master event, clusters of origin times are considered as event hypotheses in a local association procedure. When several master events find the same physical signal, we resolve the conflict using the number of associated stations and then the RMS origin time residual. In total, more than 190 aftershocks were found with three or more associated stations and five or more associated phases. This is 40% more than the number of aftershocks in the original catalogue. Their magnitudes vary between 1.5 and 4.5. We also
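
    As a rough illustration of the detection chain described above (cross-correlating master-event templates against continuous data, then running an STA/LTA trigger on the CC trace), the following is a minimal sketch; it is not the authors' code, and the window lengths and threshold are hypothetical:

```python
import numpy as np

def normalized_cc(data, template):
    """Normalized cross-correlation of a master-event template
    against a continuous trace; values lie in [-1, 1]."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        win = data[i:i + n]
        s = win.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, win - win.mean()) / s
    return cc

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio, with the LTA window ending where
    the STA window begins."""
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    out = np.zeros(len(x))
    for i in range(nsta + nlta, len(x) + 1):
        sta = (c[i] - c[i - nsta]) / nsta
        lta = (c[i - nsta] - c[i - nsta - nlta]) / nlta
        out[i - 1] = sta / max(lta, 1e-12)
    return out

# Hypothetical usage at one station (40 Hz data, as in the abstract):
# cc = normalized_cc(continuous_trace, p_wave_template)
# triggers = np.where(sta_lta(cc, nsta=40, nlta=400) > 8.0)[0]
```

    Detections would then be declared where the STA/LTA of the CC trace exceeds a station- and phase-dependent threshold, as the abstract describes.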

  8. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are

  9. The tsunami source area of the 2003 Tokachi-oki earthquake estimated from tsunami travel times and its relationship to the 1952 Tokachi-oki earthquake

    USGS Publications Warehouse

    Hirata, K.; Tanioka, Y.; Satake, K.; Yamaki, S.; Geist, E.L.

    2004-01-01

    We estimate the tsunami source area of the 2003 Tokachi-oki earthquake (Mw 8.0) from observed tsunami travel times at 17 Japanese tide gauge stations. The estimated tsunami source area (~1.4 × 10^4 km^2) coincides with the western half of the ocean-bottom deformation area (~2.52 × 10^4 km^2) of the 1952 Tokachi-oki earthquake (Mw 8.1), previously inferred from tsunami waveform inversion. This suggests that the 2003 event ruptured only the western half of the 1952 rupture extent. The geographical distribution of the maximum tsunami heights in 2003 differs significantly from that of the 1952 tsunami, supporting this hypothesis. Analysis of first-peak tsunami travel times indicates that a major uplift of the ocean bottom occurred approximately 30 km to the NNW of the mainshock epicenter, just above a major asperity inferred from seismic waveform inversion. Copyright © The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences.

  10. Triggered creep as a possible mechanism for delayed dynamic triggering of tremor and earthquakes

    USGS Publications Warehouse

    Shelly, David R.; Peng, Zhigang; Hill, David P.; Aiken, Chastity

    2011-01-01

    The passage of radiating seismic waves generates transient stresses in the Earth's crust that can trigger slip on faults far away from the original earthquake source. The triggered fault slip is detectable in the form of earthquakes and seismic tremor. However, the significance of these triggered events remains controversial, in part because they often occur with some delay, long after the triggering stress has passed. Here we scrutinize the location and timing of tremor on the San Andreas fault between 2001 and 2010 in relation to distant earthquakes. We observe tremor on the San Andreas fault that is initiated by passing seismic waves, yet migrates along the fault at a much slower velocity than the radiating seismic waves. We suggest that the migrating tremor records triggered slow slip of the San Andreas fault as a propagating creep event. We find that the triggered tremor and fault creep can be initiated by distant earthquakes as small as magnitude 5.4 and can persist for several days after the seismic waves have passed. Our observations of prolonged tremor activity provide a clear example of the delayed dynamic triggering of seismic events. Fault creep has been shown to trigger earthquakes, and we therefore suggest that the dynamic triggering of prolonged fault creep could provide a mechanism for the delayed triggering of earthquakes. © 2011 Macmillan Publishers Limited. All rights reserved.

  11. Ionospheric Anomalies Related to the (M = 7.3), August 27, 2012, Puerto Earthquake, (M = 6.8), August 30, 2012 Jan Mayen Island Earthquake, and (M = 7.6), August 31, 2012, Philippines Earthquake: Two-Dimensional Principal Component Analysis

    PubMed Central

    Lin, Jyh-Woei

    2013-01-01

    Two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA) are used to examine ionospheric total electron content (TEC) data during the time period from 00:00 on August 21 to 12:45 on August 31 (UT), the 10 days before the M = 7.6 Philippines earthquake at 12:47:34 on August 31, 2012 (UT), which had a depth of 34.9 km. From the results obtained using 2DPCA, a TEC precursor of the Philippines earthquake is found during the time period from 4:25 to 4:40 on August 28, 2012 (UT), with a duration of at least 15 minutes. Another earthquake-related TEC anomaly is detectable during the time period from 04:35 to 04:40 on August 27, 2012 (UT), with a duration of at least 5 minutes, during the Puerto earthquake at 04:37:20 on August 27, 2012 (UT) (Mw = 7.3), which had a depth of 20.3 km. A precursor of the Puerto earthquake is not detectable. No TEC anomaly is found related to the Jan Mayen Island earthquake (Mw = 6.8) at 13:43:24 on August 30, 2012 (UT). These earthquake-related TEC anomalies are detectable using 2DPCA but not PCA. They are localized near the epicenters of the Philippines and Puerto earthquakes. PMID:23844386
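
    The 2DPCA technique used here differs from classical PCA in that the TEC maps are treated as 2D arrays rather than flattened into vectors, and the eigen-decomposition is applied to an image covariance matrix. A minimal sketch follows (standard 2DPCA in the sense of Yang et al.; the variable names and the stack of TEC maps are hypothetical, not the paper's code):

```python
import numpy as np

def two_d_pca(maps, k=1):
    """2DPCA on a stack of 2D maps with shape (n_maps, rows, cols).

    Unlike classical PCA, the maps are not flattened; the (cols x cols)
    image covariance matrix is built from the 2D arrays directly.
    """
    centered = maps - maps.mean(axis=0)
    # Image covariance matrix: average of A^T A over the centered maps.
    g = np.einsum('nij,nik->jk', centered, centered) / maps.shape[0]
    eigvals, eigvecs = np.linalg.eigh(g)       # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :k]       # top-k principal directions
    features = maps @ components               # projected feature maps
    return eigvals[::-1][:k], components, features
```

    In this setting an earthquake-related anomaly would show up as energy concentrated in a dominant principal component localized near the epicenter, which is how the abstract distinguishes the 2DPCA detections from plain PCA.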

  12. Modelling the time-dependent frequency content of low-frequency volcanic earthquakes

    NASA Astrophysics Data System (ADS)

    Jousset, Philippe; Neuberg, Jürgen; Sturton, Susan

    2003-11-01

    Low-frequency volcanic earthquakes and tremor have been observed on seismic networks at a number of volcanoes, including Soufrière Hills volcano on Montserrat. Single events have well-known characteristics, including a long duration (several seconds) and harmonic spectral peaks (0.2-5 Hz). They are commonly observed in swarms, and can be highly repetitive in both waveforms and amplitude spectra. As the time delay between them decreases, they merge into tremor, often preceding critical volcanic events like dome collapses or explosions. Observed amplitude spectrograms of long-period volcanic earthquake swarms may display gliding lines which reflect a time dependence in the frequency content. Using a magma-filled dyke embedded in a solid homogeneous half-space as a simplified volcanic structure, we employ a 2D finite-difference method to compute the propagation of seismic waves in the conduit and its vicinity. We successfully replicate the seismic wave field of a single low-frequency event, as well as the occurrence of events in swarms, their highly repetitive characteristics, and the time dependence of their spectral content. We use our model to demonstrate that there are two modes of conduit resonance, leading to two types of interface waves which are recorded at the free surface as surface waves. We also demonstrate that reflections from the top and the bottom of a conduit act as secondary sources that are recorded at the surface as repetitive low-frequency events with similar waveforms. We further expand our modelling to account for gradients in physical properties across the magma-solid interface, and for time dependence of magma properties, which we implement by changing physical properties within the conduit during the numerical computation of wave propagation. We use our expanded model to investigate the amplitudes and time scales required for modelling gliding lines, and show that changes in magma properties, particularly changes in the
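
    The 2D finite-difference approach mentioned above can be illustrated with a deliberately simplified scheme: a low-velocity conduit embedded in a faster homogeneous medium, excited at depth. This is only a sketch under strong assumptions (acoustic rather than elastic, wrap-around boundaries, illustrative parameter values), not the authors' model:

```python
import numpy as np

nx, nz, dx = 200, 200, 10.0          # grid dimensions and spacing (m)
c = np.full((nz, nx), 3000.0)        # host-rock velocity (m/s), assumed
c[20:150, 95:105] = 800.0            # slow, magma-filled conduit, assumed
dt = 0.4 * dx / c.max()              # time step satisfying the CFL condition

u_prev = np.zeros((nz, nx))
u = np.zeros((nz, nx))
src_z, src_x, f0, t0 = 148, 100, 5.0, 0.25   # source at conduit bottom

for it in range(2000):
    # Five-point Laplacian (np.roll gives periodic edges; fine for a sketch).
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
    # Ricker wavelet injected at the conduit bottom.
    arg = (np.pi * f0 * (it * dt - t0)) ** 2
    u_next[src_z, src_x] += (1.0 - 2.0 * arg) * np.exp(-arg)
    u_prev, u = u, u_next

surface_record = u[0, :]   # proxy for seismograms recorded at the surface
```

    Resonance of the slow conduit appears as long, harmonic codas in the surface records; making the velocity inside the conduit time-dependent, as the authors do for magma properties, is what produces gliding spectral lines.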

  13. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    NASA Astrophysics Data System (ADS)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other modes of fault slip. These phenomena represent modes of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria, and it faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event to occur in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes have also been registered in the recent past. The Mw5.6 earthquake was widely felt on the territory of Bulgaria and in neighbouring countries. No casualties or severe injuries were reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a

  14. Pre-Earthquake Unipolar Electromagnetic Pulses

    NASA Astrophysics Data System (ADS)

    Scoville, J.; Freund, F.

    2013-12-01

    Transient ultralow frequency (ULF) electromagnetic (EM) emissions have been reported to occur before earthquakes [1,2]. They suggest powerful transient electric currents flowing deep in the crust [3,4]. Prior to the M=5.4 Alum Rock earthquake of Oct. 21, 2007 in California, a QuakeFinder triaxial search-coil magnetometer located about 2 km from the epicenter recorded unusual unipolar pulses with the approximate shape of a half-cycle of a sine wave, reaching amplitudes up to 30 nT. The number of these unipolar pulses increased as the day of the earthquake approached. These pulses clearly originated around the hypocenter. The same pulses have since been recorded prior to several medium to moderate earthquakes in Peru, where they have been used to triangulate the location of the impending earthquakes [5]. To understand the mechanism of the unipolar pulses, we first have to address the question of how single current pulses can be generated deep in the Earth's crust. Key to this question appears to be the break-up of peroxy defects in the rocks near the hypocenter as a result of the increase in tectonic stresses prior to an earthquake. We investigate the mechanism of the unipolar pulses by coupling the drift-diffusion model of semiconductor theory to Maxwell's equations, thereby producing a model describing the rock volume that generates the pulses in terms of electromagnetism and semiconductor physics. The system of equations is then solved numerically to explore the electromagnetic radiation associated with drift-diffusion currents of electron-hole pairs. [1] Sharma, A. K., P. A. V., and R. N. Haridas (2011), Investigation of ULF magnetic anomaly before moderate earthquakes, Exploration Geophysics 43, 36-46. [2] Hayakawa, M., Y. Hobara, K. Ohta, and K. Hattori (2011), The ultra-low-frequency magnetic disturbances associated with earthquakes, Earthquake Science, 24, 523-534. [3] Bortnik, J., T. E. Bleier, C. Dunson, and F. Freund (2010), Estimating the seismotelluric current
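
    In standard semiconductor notation (not quoted from the paper), the drift-diffusion system the authors couple to Maxwell's equations takes the form:

```latex
\begin{aligned}
\frac{\partial n}{\partial t} &= \nabla\!\cdot\!\left(D_n \nabla n + \mu_n n\,\mathbf{E}\right) + G - R,\\
\frac{\partial p}{\partial t} &= \nabla\!\cdot\!\left(D_p \nabla p - \mu_p p\,\mathbf{E}\right) + G - R,\\
\mathbf{J} &= q\,(\mu_n n + \mu_p p)\,\mathbf{E} + q\,(D_n \nabla n - D_p \nabla p),
\end{aligned}
```

    where n and p are the electron and hole (positive-hole) densities, G and R are generation and recombination terms (here, G would represent the stress-activated break-up of peroxy defects), and the resulting current density J acts as the source term in Maxwell's equations for the radiated pulse.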

  15. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    NASA Astrophysics Data System (ADS)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to perform their daily work using general-purpose tools and/or code of their own devising to address their specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost on tedious tasks: searching for the data and manually reformatting it in order to jump from one tool to the other, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their sources, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.

  16. 2010 Chile Earthquake Aftershock Response

    NASA Astrophysics Data System (ADS)

    Barientos, Sergio

    2010-05-01

    1906? Since the number of M>7.0 aftershocks has been low, does the distribution of large-magnitude aftershocks differ from previous events of this size? What is the origin of the extensional-type aftershocks at shallow depths within the upper plate? The international seismological community (France, Germany, U.K., U.S.A.), in collaboration with the Chilean seismological community, responded with a total of 140 portable seismic stations deployed to record aftershocks. Combined with the Chilean permanent seismic network in the area, this results in 180 stations now in operation, recording continuously at 100 cps. The seismic equipment is a mix of accelerometers and short-period and broadband seismic sensors deployed along the entire length of the aftershock zone that will record the aftershock sequence for three to six months. The collected seismic data will be merged and archived to produce an international data set open to the entire seismological community immediately after archiving. Each international group will submit its data as soon as possible in standard (miniSEED) format with accompanying metadata to the IRIS DMC, where the data will be merged into a combined data set and made available to individuals and other data centers. This will be by far the best-recorded aftershock sequence of a large megathrust earthquake. This outstanding international collaboration will provide an open data set for this important earthquake as well as a model for future aftershock deployments around the world.

  17. Real-Time Science on Social Media: The Example of Twitter in the Minutes, Hours, Days after the 2015 M7.8 Nepal Earthquake

    NASA Astrophysics Data System (ADS)

    Lomax, A.; Bossu, R.; Mazet-Roux, G.

    2015-12-01

    Scientific information on disasters such as earthquakes typically comes firstly from official organizations, news reports and interviews with experts, and later from scientific presentations and peer-reviewed articles. With the advent of the Internet and social media, this information is available in real-time from automated systems and within a dynamic, collaborative interaction between scientific experts, responders and the public. After the 2015 M7.8 Nepal earthquake, Twitter Tweets from earth scientists* included information, analysis, commentary and discussion on earthquake parameters (location, size, mechanism, rupture extent, high-frequency radiation, …), earthquake effects (distribution of felt shaking and damage, triggered seismicity, landslides, …), earthquake rumors (e.g. the imminence of a larger event) and other earthquake information and observations (aftershock forecasts, statistics and maps, source and regional tectonics, seismograms, GPS, InSAR, photos/videos, …). In the future (while taking into account security, false or erroneous information and identity verification), collaborative, real-time science on social media after a disaster will give earlier and better scientific understanding and dissemination of public information, and enable improved emergency response and disaster management. *A sample of scientific Tweets after the 2015 Nepal earthquake: In the first minutes: "mb5.9 Mwp7.4 earthquake Nepal 2015.04.25-06:11:25UTC", "Major earthquake shakes Nepal 8 min ago", "Epicenter between Pokhara and Kathmandu", "Major earthquake shakes Nepal 18 min ago. Effects derived from witnesses' reports". In the first hour: "shallow thrust faulting to North under Himalayas", "a very large and shallow event ... Mw7.6-7.7", "aftershocks extend east and south of Kathmandu, so likely ruptured beneath city", "Valley-blocking landslides must be a very real worry". In the first day: "M7.8 earthquake in Nepal 2hr ago: destructive in Kathmandu Valley and

  18. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    USGS Publications Warehouse

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.

  19. On the origin of diverse aftershock mechanisms following the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Kilb, Debi; Ellis, M.; Gomberg, J.; Davis, S.

    1997-01-01

    We test the hypothesis that the origin of the diverse suite of aftershock mechanisms following the 1989 M 7.1 Loma Prieta, California, earthquake is related to the post-main-shock static stress field. We use a 3-D boundary-element algorithm to calculate static stresses, combined with a Coulomb failure criterion, to calculate conjugate failure planes at aftershock locations. The post-main-shock static stress field is taken as the sum of a pre-existing stress field and the changes in stress due to the heterogeneous slip across the Loma Prieta rupture plane. The background stress field is assumed to be either a simple shear parallel to the regional trend of the San Andreas fault or approximately fault-normal compression. A suite of synthetic aftershock mechanisms from the conjugate failure planes is generated and quantitatively compared (allowing for uncertainties in both mechanism parameters and earthquake locations) to well-constrained mechanisms reported in the US Geological Survey Northern California Seismic Network catalogue. We also compare calculated rakes with those observed by resolving the calculated stress tensor onto observed focal mechanism nodal planes, assuming either plane to be a likely rupture plane. Various permutations of the assumed background stress field, frictional coefficients of aftershock fault planes, methods of comparison, etc., explain between 52 and 92 per cent of the aftershock mechanisms. However, we can explain a similar proportion of mechanisms by comparing a randomly reordered catalogue with the various suites of synthetic aftershocks. The inability to duplicate aftershock mechanisms reliably on a one-to-one basis is probably a function of the combined uncertainties in models of main-shock slip distribution, the background stress field, and aftershock locations. In particular we show theoretically that any specific main-shock slip distribution and a reasonable background stress field are able to generate a highly variable suite of failure
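
    The Coulomb failure criterion applied here to the post-main-shock stress field is conventionally written as (standard form, not quoted from the paper):

```latex
\Delta \mathrm{CFS} = \Delta\tau + \mu'\,\Delta\sigma_n,
```

    where \Delta\tau is the shear-stress change resolved in the slip direction on a candidate plane, \Delta\sigma_n is the normal-stress change (positive for unclamping), and \mu' is an effective friction coefficient incorporating pore-pressure effects; the conjugate failure planes at each aftershock location are those most strongly encouraged by the total (background plus coseismic) stress field.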

  20. Real-time and rapid GNSS solutions from the M8.2 September 2017 Tehuantepec Earthquake and implications for Earthquake and Tsunami Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Hodgkinson, K. M.; Mattioli, G. S.

    2017-12-01

    In support of hazard research and Earthquake Early Warning (EEW) systems, UNAVCO operates approximately 800 RT-GNSS stations throughout western North America and Alaska (EarthScope Plate Boundary Observatory), Mexico (TLALOCNet), and the pan-Caribbean region (COCONet). Our system produces and distributes raw data (BINEX and RTCM3) and real-time Precise Point Positions via the Trimble PIVOT Platform (RTX). The 2017-09-08 M8.2 earthquake, located 98 km SSW of Tres Picos, Mexico, is the first great earthquake to occur within the UNAVCO RT-GNSS footprint, which allows for a rigorous analysis of our dynamic and static processing methods. The need for rapid geodetic solutions ranges from seconds (EEW systems) to several minutes (Tsunami Warning and NEIC moment tensor and finite fault models). Here, we compare and quantify the relative processing strategies for producing static offsets, moment tensors and geodetically determined finite fault models using data recorded during this event. We also compare the geodetic solutions with the USGS NEIC seismically derived moment tensors and finite fault models, including displacement waveforms generated from these models. We define kinematic post-processed solutions from GIPSY-OASISII (v6.4) with final orbits and clocks as a "best" case reference to evaluate the performance of our different processing strategies. We find that static displacements of a few centimeters or less are difficult to resolve in the real-time GNSS position estimates. The standard daily 24-hour solutions provide the highest-quality data set with which to determine coseismic offsets, but these solutions are delayed by at least 48 hours after the event. Dynamic displacements estimated in real-time, however, show reasonable agreement with final, post-processed position estimates, and while individual position estimates have large errors, the real-time solutions offer an excellent operational option for EEW systems, including the use of estimated peak-ground displacements or
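
    As an illustration of why small static offsets are hard to resolve in noisy real-time streams, a static coseismic offset is commonly estimated by differencing pre- and post-event averages of a position time series. A minimal sketch (window lengths and names are hypothetical, not UNAVCO's processing):

```python
import numpy as np

def coseismic_offset(times, pos, t_event,
                     pre=(600.0, 60.0), post=(120.0, 600.0)):
    """Estimate a static coseismic offset for one GNSS component.

    times: seconds; pos: position (m). The offset is the difference
    between the mean position in a post-event window and a pre-event
    window, skipping the shaking itself. Window lengths are illustrative.
    """
    pre_mask = (times >= t_event - pre[0]) & (times <= t_event - pre[1])
    post_mask = (times >= t_event + post[0]) & (times <= t_event + post[1])
    offset = pos[post_mask].mean() - pos[pre_mask].mean()
    # Crude uncertainty: combine the scatter of the two windows.
    sigma = np.hypot(pos[pre_mask].std(), pos[post_mask].std())
    return offset, sigma
```

    With real-time precise-point-positioning noise at the few-centimeter level, offsets of a few centimeters or less disappear into the estimated sigma, consistent with the finding reported in the abstract.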

  1. Accretionary nature of the crust of Central and East Java (Indonesia) revealed by local earthquake travel-time tomography

    NASA Astrophysics Data System (ADS)

    Haberland, Christian; Bohm, Mirjam; Asch, Günter

    2014-12-01

    Reassessment of travel-time data from an exceptionally dense, amphibious, temporary seismic network on- and offshore Central and Eastern Java (MERAMEX) confirms the accretionary nature of the crust in this segment of the Sunda subduction zone (109.5-111.5E). Travel-time data for P- and S-waves of 244 local earthquakes were tomographically inverted, following a staggered inversion approach. The resolution of the inversion was assessed using synthetic recovery tests and by analyzing the model resolution matrix. The resulting images show a highly asymmetrical crustal structure. The images can be interpreted to show a continental fragment of presumably Gondwana origin in the coastal area (east of 110E), which has been accreted to the Sundaland margin. An interlaced anomaly of high seismic velocities indicating mafic material can be interpreted as the mantle part of the continental fragment, or as part of obducted oceanic lithosphere. Lower-than-average crustal velocities of the Java crust are likely to reflect ophiolitic and metamorphic rocks of a subduction melange.
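
    Travel-time tomography of this kind typically linearizes the problem as d = Gm (travel-time residuals d, slowness perturbations m, ray-sensitivity matrix G) and solves a regularized least-squares system; in generic damped form (illustrative notation, not the specific staggered scheme of this study):

```latex
\hat{\mathbf{m}} = \left(\mathbf{G}^{\mathsf{T}}\mathbf{G} + \epsilon^{2}\mathbf{I}\right)^{-1}\mathbf{G}^{\mathsf{T}}\mathbf{d},
\qquad
\mathbf{R} = \left(\mathbf{G}^{\mathsf{T}}\mathbf{G} + \epsilon^{2}\mathbf{I}\right)^{-1}\mathbf{G}^{\mathsf{T}}\mathbf{G},
```

    where \epsilon is the damping parameter and \mathbf{R} is the model resolution matrix analyzed in the study; diagonal elements near 1 indicate well-resolved model cells.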

  2. Electromagnetic Energy Released in the Subduction (Benioff) Zone in Weeks Previous to Earthquake Occurrence in Central Peru and the Estimation of Earthquake Magnitudes.

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Centa, V. A.; Bleier, T.

    2017-12-01

    During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. Additional work has been done, and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Collected data from several events and from other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The process has been extended in time and to other geographical areas. Only pulses associated with the occurrence of earthquakes are taken into account, in an area highly associated with subduction-zone seismic events, and several pulse parameters have been used to construct a function whose value is related to the magnitude of the forthcoming earthquake. The results shown, including the animated data video, constitute additional work towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone. The method is providing clearer evidence that electromagnetic precursors in effect convey physical and useful information prior to the advent of a seismic event.

  3. Earthquake Early Warning in Japan - Result of recent two years -

    NASA Astrophysics Data System (ADS)

    Shimoyama, T.; Doi, K.; Kiyomoto, M.; Hoshiba, M.

    2009-12-01

    The Japan Meteorological Agency (JMA) started to provide Earthquake Early Warning (EEW) to the general public in October 2007. This followed the provision of EEW, from August 2006, to a limited number of users who understand the technical limits of EEW and can utilize it for automatic control. Earthquake Early Warning in Japan specifically means information on the estimated amplitude and arrival time of strong ground motion after a fault rupture has occurred. In other words, the EEW provided by JMA is defined as a forecast of strong ground motion issued before the strong motion arrives. JMA's EEW is intended to enable advance countermeasures against disasters caused by strong ground motion by providing a warning message before the S-wave arrival. However, because the available time is very short, measures and ideas are needed to issue EEW rapidly and to use it properly. - EEW is issued to the general public when a maximum seismic intensity of 5 lower (JMA scale) or greater is expected. - The EEW message contains the origin time, the epicentral region name, and the names of areas (each unit is about 1/3 to 1/4 of one prefecture) where seismic intensity 4 or greater is expected. The expected arrival time is not included because it differs substantially even within one unit area. - EEW is broadcast through the broadcasting media (TV, radio and City Administrative Disaster Management Radio) and is delivered to cellular phones through the cell broadcast system. For those who would like more precise estimates, including information on smaller earthquakes at the locations of their properties, JMA allows designated private companies to provide forecasts of strong ground motion, containing the estimated seismic intensity as well as the S-wave arrival time at arbitrary places, under JMA's technical assurance. From October 2007 to August 2009, JMA issued 11 warnings to the general public expecting seismic intensity "5 lower" or greater, including M=7.2 inland
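
    The reason the available time is so short can be made concrete with a back-of-envelope calculation: the warning window at a site is roughly the S-wave travel time minus the latency of detection and dissemination. The velocities and processing delay below are illustrative assumptions, not JMA's operational values:

```python
# Rough EEW lead-time estimate: warning window = S travel time - alert latency.
VP, VS = 6.5, 3.7          # assumed crustal P and S speeds (km/s)
T_PROC = 5.0               # assumed detection + processing + dissemination (s)

def lead_time(epicentral_km: float) -> float:
    """Seconds of warning before S-wave arrival (negative = no warning)."""
    return epicentral_km / VS - T_PROC

for d in (20, 50, 100, 200):
    print(f"{d:>4} km: {lead_time(d):5.1f} s of warning")
# At 20 km the warning is essentially zero (the "blind zone");
# at ~100 km the site gets roughly 20 s.
```

    This is why EEW messages must be generated and broadcast within seconds of the P-wave detection to be of any use near the epicenter.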

  4. Bi-directional volcano-earthquake interaction at Mauna Loa Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, T. R.; Amelung, F.

    2004-12-01

    At Mauna Loa volcano, Hawaii, large-magnitude earthquakes occur mostly at the west flank (Kona area), at the southeast flank (Hilea area), and at the east flank (Kaoiki area). Eruptions at Mauna Loa occur mostly at the summit region and along fissures at the southwest rift zone (SWRZ) or the northeast rift zone (NERZ). Although historic earthquakes and eruptions at these zones appear to correlate in space and time, the mechanisms and implications of eruption-earthquake interaction remained unclear. Our analysis of the available factual data reveals a high statistical significance for eruption-earthquake pairs, with a random-occurrence probability of 5 to 15 percent. We clarify this correlation with the help of elastic stress-field models, in which (i) we simulate earthquakes and calculate the resulting normal stress change at volcanically active zones of Mauna Loa, and (ii) we simulate intrusions in Mauna Loa and calculate the Coulomb stress change at the active fault zones. Our models suggest that Hilea earthquakes encourage dike intrusion in the SWRZ, Kona earthquakes encourage dike intrusion at the summit and in the SWRZ, and Kaoiki earthquakes encourage dike intrusion in the NERZ. Moreover, a dike in the SWRZ encourages earthquakes in the Hilea and Kona areas. A dike in the NERZ may encourage and discourage earthquakes in the Hilea and Kaoiki areas. The modeled stress change patterns coincide remarkably well with the patterns of several historic eruption-earthquake pairs, clarifying the mechanisms of bi-directional volcano-earthquake interaction at Mauna Loa. The results imply that at Mauna Loa volcanic activity influences the timing and location of earthquakes, and that earthquakes influence the timing, location and volume of eruptions. In combination with near real-time geodetic and seismic monitoring, these findings may improve volcano-tectonic risk assessment.

  5. Simulating Earthquake Early Warning Systems in the Classroom as a New Approach to Teaching Earthquakes

    NASA Astrophysics Data System (ADS)

    D'Alessio, M. A.

    2010-12-01

    A discussion of P- and S-waves seems a ubiquitous part of studying earthquakes in the classroom. Textbooks from middle school through university level typically define the differences between the waves and illustrate the sense of motion. While many students successfully memorize the differences between wave types (often utilizing the first letter as a memory aide), textbooks rarely give tangible examples of how the two waves would "feel" to a person sitting on the ground. One reason for introducing the wave types is to explain how to calculate earthquake epicenters using seismograms and travel-time charts -- very abstract representations of earthquakes. Even when the skill is mastered using paper-and-pencil activities or one of the excellent online interactive versions, locating an epicenter simply does not excite many of our students because it evokes little emotional impact, even in students located in earthquake-prone areas. Despite these limitations, huge numbers of students are mandated to complete the task. At the K-12 level, California requires that all students be able to locate earthquake epicenters in Grade 6; in New York, the skill is a required part of the Regents Examination. Recent innovations in earthquake early warning systems around the globe give us the opportunity to address the same content standard, but with substantially more emotional impact on students. I outline a lesson about earthquakes focused on earthquake early warning systems. The introductory activities include video clips of actual earthquakes and emphasize the differences between the way P- and S-waves feel when they arrive (P arrives first, but is weaker). I include an introduction to the principle behind earthquake early warning (including a summary of possible uses of a few seconds of warning about strong shaking) and show examples from Japan. Students go outdoors to simulate P-waves, S-waves, and occupants of two different cities who are talking to one another on cell phones.

  6. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in a format suitable for testing in community-based earthquake predictability experiments: the Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
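
    Epidemic-type models of this family are specified through a conditional earthquake rate; in the usual notation for this model class (generic form, not necessarily the exact parameterization of this implementation):

```latex
\lambda(t, \mathbf{x}) = \mu(\mathbf{x}) + \sum_{i\,:\,t_i < t}
\frac{K\, e^{\alpha (m_i - m_0)}}{(t - t_i + c)^{p}}\, f(\mathbf{x} - \mathbf{x}_i; m_i),
```

    where \mu(\mathbf{x}) is the background rate, the modified Omori kernel controls the time decay of triggered events, the exponential term expresses the greater productivity of larger magnitudes m_i, and f is a spatial kernel that broadens with magnitude; forecasts are obtained by integrating \lambda over each space-magnitude-time bin.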

  7. The global distribution of magnitude 9 earthquakes

    NASA Astrophysics Data System (ADS)

    McCaffrey, R.

    2011-12-01

    The 2011 Tohoku M9 earthquake once again caught some in the earthquake community by surprise. The expectation of such massive quakes has in the past been driven by over-reliance on our short, incomplete history of earthquakes and the causal relationships derived from it. The logic applied is that if a great earthquake has not happened in the past, as far as we know, one cannot happen in the future. Using the ~100-year global earthquake history, seismologists have promoted relationships between maximum earthquake sizes and other properties of subduction zones, leading to the notion that some subduction zones, like the Japan Trench, would never produce a magnitude ~9 event. The 2004 Andaman Mw = 9.2 earthquake, which occurred where old crust subducts slowly and where the history shows only moderate-sized earthquakes, seriously undermined such ideas. Given the multi-century return times of the greatest earthquakes, our ignorance of those return times and our very limited observation span, I suggest that we cannot yet make such determinations. Alternatively, using the length of a subduction zone that is available for slip as the predominant factor in determining maximum earthquake size, we cannot rule out that any subduction zone of a few hundred kilometers or more in length may be capable of producing a magnitude 9 or larger earthquake. Based on this method, the expected maximum size for the Japan Trench was 9.0 (McCaffrey, Geology, p. 263, 2008). The same approach portends an M > 9 for Java, which has twice the population density of Honshu and much lower building standards. The Java Trench, and others where old crust subducts (Hikurangi, Marianas, Tonga, Kermadec), require increased awareness of the possibility of a great earthquake.

  8. Seismicity associated with the Sumatra-Andaman Islands earthquake of 26 December 2004

    USGS Publications Warehouse

    Dewey, J.W.; Choy, G.; Presgrave, B.; Sipkin, S.; Tarr, A.C.; Benz, H.; Earle, P.; Wald, D.

    2007-01-01

    The U.S. Geological Survey/National Earthquake Information Center (USGS/NEIC) had computed origins for 5000 earthquakes in the Sumatra-Andaman Islands region in the first 36 weeks after the Sumatra-Andaman Islands mainshock of 26 December 2004. The cataloging of earthquakes of mb (USGS) 5.1 and larger is essentially complete for the time period except for the first half-day following the 26 December mainshock, a period of about two hours following the Nias earthquake of 28 March 2005, and occasionally during the Andaman Sea swarm of 26-30 January 2005. Moderate and larger (mb ≥ 5.5) aftershocks are absent from most of the deep interplate thrust faults of the segments of the Sumatra-Andaman Islands subduction zone on which the 26 December mainshock occurred, which probably reflects nearly complete release of elastic strain on the seismogenic interplate thrust during the mainshock. An exceptional thrust-fault source offshore of Banda Aceh may represent a segment of the interplate thrust that was bypassed during the mainshock. The 26 December mainshock triggered a high level of aftershock activity near the axis of the Sunda trench and the leading edge of the overthrust Burma plate. Much near-trench activity is intraplate activity within the subducting plate, but some shallow-focus, near-trench, reverse-fault earthquakes may represent an unusual seismogenic release of interplate compressional stress near the tip of the overriding plate. The interplate-thrust Nias earthquake of 28 March 2005, in contrast to the 26 December aftershock sequence, was followed by many interplate-thrust aftershocks along the length of its inferred rupture zone.

  9. Sounds of earthquakes in West Bohemia: analysis of sonic and infrasonic records

    NASA Astrophysics Data System (ADS)

    Fischer, Tomáš; Vilhelm, Jan; Kuna, Václav; Chum, Jaroslav; Horálek, Josef

    2013-04-01

    Earthquake sounds are usually observed during the occurrence of small earthquakes. Observations of audible manifestations of earthquakes date back to the ancient age and have recently been analyzed in more detail, based both on macroseismic observations and on audio recordings. In most cases the earthquake sounds resemble low-frequency underground thundering that is generated by seismic-acoustic conversion of P and SV waves at the earth's surface. This is also supported by the fact that earthquake sounds usually precede the shaking caused by S-waves. Less frequent are explosion-type sounds, whose origin remains unclear. We analyze observations of sounds associated with the occurrence of earthquake swarms in the area of West Bohemia/Vogtland, Central Europe. The macroseismic data include 250 reports of sounds, of which 90% are thundering and 10% explosions. Additional data consist of sonic and infrasonic records acquired by microphones and microbarographs at seismic stations in the area. All the sonic and infrasonic records correspond to sounds of the thunder type; no explosions were recorded. Comparison of these records enabled us to determine the seismic-wave to air-pressure transfer function. Measurements using a 3D microphone array confirm that in the epicentral area the sonic wave propagates subvertically. We also compared the coda of the seismograms and the sonic records. It turned out that, in addition to seismo-acoustic coupling, a later acoustic wave of thunder type arrives at the observation site, with an arrival time corresponding to sonic propagation from the epicenter. We analyse the possible generation mechanisms of this type of sonic wave.

  10. Understanding Earthquake Hazard & Disaster in Himalaya - A Perspective on Earthquake Forecast in Himalayan Region of South Central Tibet

    NASA Astrophysics Data System (ADS)

    Shanker, D.; Paudyal, ,; Singh, H.

    2010-12-01

    characterized by an extremely high annual earthquake frequency as compared to the preceding normal and the following gap episodes, and the characteristics of the events in such an episode are causally related to the magnitude and the time of occurrence of the forthcoming earthquake. It is observed here that the shorter the duration of the preparatory time period, the smaller the mainshock, and vice versa. Western Nepal and the adjoining Tibet region have potential for future medium-size earthquakes. Accordingly, it has been estimated here that an earthquake with M 6.5 ± 0.5 may occur at any time from now onwards until December 2011 in Western Nepal, within an area bounded by 29.3°-30.5° N and 81.2°-81.9° E, in the focal depth range 10-30 km.

  11. An Earthquake Rupture Forecast model for central Italy submitted to CSEP project

    NASA Astrophysics Data System (ADS)

    Pace, B.; Peruzza, L.

    2009-04-01

    We defined a seismogenic source model for central Italy and computed the corresponding forecast scenario, in order to submit the results to the CSEP (Collaboratory for the Study of Earthquake Predictability, www.cseptesting.org) project. The goal of the CSEP project is to develop a virtual, distributed laboratory that supports a wide range of scientific prediction experiments in multiple regional or global natural laboratories, and Italy is the first region in Europe for which fully prospective testing is planned. The model we propose is essentially the Layered Seismogenic Source for Central Italy (LaSS-CI) we published in 2006 (Pace et al., 2006). It is based on three different layers of sources: the first collects the individual faults liable to generate major earthquakes (M > 5.5); the second is given by instrumental seismicity analysis of the past two decades, which allows us to evaluate the background seismicity (M ≲ 5.0); the third utilizes all the instrumental earthquakes and the historical events not correlated to known structures (M ≥ 4.5). A time-dependent hypothesis has been introduced for some individual sources, computing the conditional probability of occurrence of characteristic earthquakes with a Brownian passage time distribution. Besides the original model, updated earthquake rupture forecasts for the individual sources alone are also released, in the light of recent analyses (Peruzza et al., 2008; Zoeller et al., 2008). We computed forecasts based on the LaSS-CI model for two time windows: 5 and 10 years. Each model to be tested defines a forecasted earthquake rate in magnitude bins of 0.1 unit steps in the range M5-9, for the periods 1st April 2009 to 1st April 2014, and 1st April 2009 to 1st April 2019. B. Pace, L. Peruzza, G. Lavecchia, and P. Boncio (2006) Layered Seismogenic Source
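
    The Brownian passage time (inverse Gaussian) distribution mentioned above has the standard density (textbook form; \mu is the mean recurrence interval and \alpha the aperiodicity, not symbols taken from this abstract):

```latex
f(t;\mu,\alpha) = \sqrt{\frac{\mu}{2\pi\alpha^{2} t^{3}}}\,
\exp\!\left(-\frac{(t-\mu)^{2}}{2\mu\alpha^{2} t}\right), \quad t > 0,
\qquad
P\bigl(T < t \le T{+}\Delta T \mid t > T\bigr) = \frac{F(T{+}\Delta T) - F(T)}{1 - F(T)},
```

    where F is the corresponding cumulative distribution; the conditional probability on the right is what a time-dependent forecast evaluates for a fault that has been quiescent for time T since its last characteristic earthquake.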

  12. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Ide, S.

    2007-12-01

    Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) on the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that the values of τpmax have upper and lower limits. For larger earthquakes, whose source durations are longer than TW, the values of τpmax have an upper limit that depends on TW. On the other hand, the values for smaller earthquakes have a lower limit that is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic. This is because τpmax does not always have a direct relation to the physical quantities of an earthquake.
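
    The predominant-period estimator referred to here is commonly computed recursively from the vertical velocity record (the standard formulation from the early-warning literature, with a smoothing constant \alpha close to one; not reproduced from this paper):

```latex
\tau_p^{(i)} = 2\pi\,\sqrt{\frac{X_i}{D_i}},
\qquad
X_i = \alpha X_{i-1} + x_i^{2},
\qquad
D_i = \alpha D_{i-1} + \left(\frac{dx}{dt}\right)_{\!i}^{2},
```

    where x_i is the i-th velocity sample; \tau_p^{max} is then the maximum of \tau_p over the time window TW after the P arrival, which is why the upper limit found in this study depends on TW and the lower limit on the sampling interval.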

  13. A note on evaluating VAN earthquake predictions

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-Akis; Melis, Nicos S.

    The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized, strict statistical laws while ignoring the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method can still be made to appear a matter of "chancy" association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] in evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, while magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

  14. Introduction to the special issue on the 2004 Parkfield earthquake and the Parkfield earthquake prediction experiment

    USGS Publications Warehouse

    Harris, R.A.; Arrowsmith, J.R.

    2006-01-01

    The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

  15. Catastrophic valley fills record large Himalayan earthquakes, Pokhara, Nepal

    NASA Astrophysics Data System (ADS)

    Stolle, Amelie; Bernhardt, Anne; Schwanghart, Wolfgang; Hoelzmann, Philipp; Adhikari, Basanta R.; Fort, Monique; Korup, Oliver

    2017-12-01

    Uncertain timing and magnitudes of past mega-earthquakes continue to confound seismic risk appraisals in the Himalayas. Telltale traces of surface ruptures are rare, while fault trenches document several events at best, so that additional proxies of strong ground motion are needed to complement the paleoseismological record. We study Nepal's Pokhara basin, which has the largest and most extensively dated archive of earthquake-triggered valley fills in the Himalayas. These sediments form a 148-km2 fan that issues from the steep Seti Khola gorge in the Annapurna Massif, invading and plugging 15 tributary valleys with tens of meters of debris, and impounding several lakes. Nearly a dozen new radiocarbon ages corroborate at least three episodes of catastrophic sedimentation on the fan between ∼700 and ∼1700 AD, coinciding with great earthquakes in ∼1100, 1255, and 1344 AD, and emplacing roughly >5 km3 of debris that forms the Pokhara Formation. We offer a first systematic sedimentological study of this formation, revealing four lithofacies characterized by thick sequences of mid-fan fluvial conglomerates, debris-flow beds, and fan-marginal slackwater deposits. New geochemical provenance analyses reveal that these upstream dipping deposits of Higher Himalayan origin contain lenses of locally derived river clasts that mark time gaps between at least three major sediment pulses that buried different parts of the fan. The spatial pattern of 14C dates across the fan and the provenance data are key to distinguishing these individual sediment pulses, as these are not evident from their sedimentology alone. Our study demonstrates how geomorphic and sedimentary evidence of catastrophic valley infill can help to independently verify and augment paleoseismological fault-trench records of great Himalayan earthquakes, while offering unparalleled insights into their long-term geomorphic impacts on major drainage basins.

  16. Influence of Earthquake Parameters on Tsunami Wave Height and Inundation

    NASA Astrophysics Data System (ADS)

    Kulangara Madham Subrahmanian, D.; Sri Ganesh, J.; Venkata Ramana Murthy, M.; V, R. M.

    2014-12-01

    After the Indian Ocean Tsunami (IOT) of 26 December 2004, attempts are being made to assess the threat of tsunamis originating from different sources for different parts of India. The Andaman-Sumatra trench is segmented by transcurrent faults and by differences in the rate of subduction, which is low in the north and increases southward. Therefore, a keyboard model with the initial deformation calculated using different strike directions and slip rates is used, which introduces uncertainties in the earthquake parameters. This study is made to identify the source location of the most destructive tsunami for the southeast coast of India and to infer the influence of the earthquake parameters on tsunami wave height and travel time in the deep ocean as well as on the shelf, and on inundation at the coast. Five tsunamigenic sources in the Andaman-Sumatra trench were considered, taking into account the tectonic character of the trench described by various authors, and the modeling was carried out using the TUNAMI N2 code. The model results were validated using the travel times and runup in the coastal areas and by comparing the water elevation along Jason-1's satellite track. The inundation results are compared with field data. The assessment of the tsunami threat for the area south of Chennai, the metropolitan city of South India, shows that a tsunami originating in the Car Nicobar segment of the Andaman-Sumatra subduction zone can generate the most destructive tsunami. Sensitivity analysis in the modelling indicates that fault length influences the results significantly: the tsunami arrives earlier and with higher amplitude. Strike angle also modifies the tsunami, followed by the amount of slip.

  17. St. Louis Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Williams, Robert A.; Steckel, Phyllis; Schweig, Eugene

    2007-01-01

    St. Louis has experienced minor earthquake damage at least 12 times in the past 200 years. Because of this history and its proximity to known active earthquake zones, the St. Louis Area Earthquake Hazards Mapping Project will produce digital maps that show variability of earthquake hazards in the St. Louis area. The maps will be available free via the internet. They can be customized by the user to show specific areas of interest, such as neighborhoods or transportation routes.

  18. The Pocatello Valley, Idaho, earthquake

    USGS Publications Warehouse

    Rogers, A. M.; Langer, C.J.; Bucknam, R.C.

    1975-01-01

    A Richter magnitude 6.3 earthquake occurred at 8:31 p.m. mountain daylight time on March 27, 1975, near the Utah-Idaho border in Pocatello Valley. The main shock had its epicenter at 42.094° N, 112.478° W, and a focal depth of 5.5 km. This earthquake was the largest in the continental United States since the destructive San Fernando earthquake of February 1971. The main shock was preceded by a magnitude 4.5 foreshock on March 26.

  19. Earthquake behavior along the Levant fault from paleoseismology (Invited)

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; Le Beon, M.; Wechsler, N.; Rockwell, T. K.

    2013-12-01

    The Levant fault is a major continental structure, 1200 km long, that bounds the Arabian plate to the west. The finite offset of this left-lateral strike-slip fault is estimated to be 105 km for the section located south of the restraining bend corresponding roughly to Lebanon. Along this southern section the slip rate has been estimated over a large range of time scales, from a few years to a few hundred thousand years. Over these different time scales, studies agree on a slip rate of 5 ± 2 mm/yr. The southern section of the Levant fault is particularly attractive for studying earthquake behavior through time for several reasons: 1/ The fault geometry is simple and well constrained. 2/ The fault system is isolated and does not interact with obvious neighboring fault systems. 3/ The Middle East, where the Levant fault is located, is the region of the world with the longest and most complete historical record of past earthquakes. About 30 km north of the city of Aqaba, we opened a trench in the southern part of the Yotvata playa, along the Wadi Araba fault segment. The stratigraphy presents silty sand playa units alternating with coarser sand sediments from alluvial fans flowing westwards from the Jordan plateau. Two fault zones can be recognized in the trench and a minimum of 8 earthquakes can be identified, based on upward terminations of ground ruptures. Dense 14C dating through the entire exposure allows matching the 4 most recent events with historical events in AD 1458, AD 1212, AD 1068 and AD 748. The size of the ground ruptures suggests a bi-modal distribution, with earthquakes rupturing the entire Wadi Araba segment and earthquakes ending in the extensional jog forming the playa. The timing of earthquakes shows that no earthquake has occurred at this site for about 600 years, suggesting earthquake clustering along this section of the fault and potential for a large earthquake in the near future. 3D paleoseismological trenches at the Beteiha

  20. Large Earthquakes Disrupt Groundwater System by Breaching Aquitards

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.; Manga, M.; Liao, X.; Wang, L. P.

    2016-12-01

    Changes in groundwater systems caused by large earthquakes are widely recognized. Some changes have been attributed to increases in vertical permeability, but basic questions remain: How do increases in vertical permeability occur? How frequently do they occur? How fast does the vertical permeability recover after the earthquake? Is there a quantitative measure for detecting the occurrence of aquitard breaching? Here we attempt to answer these questions by examining data accumulated in the past 15 years. Analyses of increased stream discharges and their geochemistry after large earthquakes show evidence that the excess water originates from groundwater released from high elevations by a large increase in vertical permeability. Water-level data from a dense network of clustered wells in a sedimentary basin near the epicenter of the 1999 M7.6 Chi-Chi earthquake in western Taiwan show that, while most confined aquifers remained confined after the earthquake, about 10% of the clustered wells show evidence of coseismic breaching of aquitards and a great increase in vertical permeability. Water levels in wells without evidence of coseismic breaching of aquitards show similar tidal responses before and after the earthquake; wells with evidence of coseismic breaching of aquitards, on the other hand, show distinctly different tidal responses before and after the earthquake, and the aquifers became hydraulically connected for many months thereafter. Breaching of aquitards by large earthquakes has significant implications for a number of societal issues, such as the safety of water resources, the security of underground waste repositories, and the production of oil and gas. The method demonstrated here may be used for detecting the occurrence of aquitard breaching by large earthquakes in other seismically active areas.

  1. Genetic algorithm for TEC seismo-ionospheric anomalies detection around the time of the Solomon (Mw = 8.0) earthquake of 06 February 2013

    NASA Astrophysics Data System (ADS)

    Akhoondzadeh, M.

    2013-08-01

    On 6 February 2013, at 12:12:27 local time (01:12:27 UTC), a seismic event registering Mw 8.0 struck the Solomon Islands, located at the boundary of the Australian and Pacific tectonic plates. Time series prediction is an important and widely interesting topic in the research of earthquake precursors. This paper describes a new computational intelligence approach, based on a genetic algorithm (GA), to detect the unusual variations of the total electron content (TEC), the seismo-ionospheric anomalies induced by the powerful Solomon earthquake. The GA detected a considerable number of anomalous occurrences on the earthquake day and also 7 and 8 days prior to the earthquake, in a period of high geomagnetic activity. In this study, the TEC anomalies detected using the proposed method are also compared to the anomalies obtained by applying the mean, median, wavelet, Kalman filter, ARIMA, neural network and support vector machine methods. The agreement among the final results of all eight methods is a convincing indication of the efficiency of the GA method, and indicates that the GA can be an appropriate non-parametric tool for anomaly detection in nonlinear time series exhibiting seismo-ionospheric precursor variations.
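
    The abstract does not spell out the GA's encoding or fitness function, so the following is only a minimal, mutation-only sketch of the general idea: evolve a bound multiplier k so that values outside median ± k·IQR are flagged as anomalies. The population size, cost weights, and synthetic TEC series are illustrative assumptions, not the paper's configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(k, series, med, iqr):
          # Cost: band width plus a penalty for training points left outside.
          lo, hi = med - k * iqr, med + k * iqr
          outside = np.mean((series < lo) | (series > hi))
          return (hi - lo) + 10.0 * outside     # weights are arbitrary assumptions

      def evolve(series, pop_size=40, gens=60):
          med = np.median(series)
          iqr = np.subtract(*np.percentile(series, [75, 25]))
          pop = rng.uniform(0.5, 3.0, pop_size)               # candidate k values
          for _ in range(gens):
              costs = np.array([fitness(k, series, med, iqr) for k in pop])
              parents = pop[np.argsort(costs)[: pop_size // 2]]      # selection
              children = parents + rng.normal(0, 0.1, parents.size)  # mutation
              pop = np.concatenate([parents, children])
          best = pop[np.argmin([fitness(k, series, med, iqr) for k in pop])]
          return med - best * iqr, med + best * iqr

      tec = rng.normal(30.0, 2.0, 240)    # synthetic hourly TEC series (TECU)
      tec[200] += 12.0                    # injected anomaly
      lo, hi = evolve(tec)
      print(np.where((tec < lo) | (tec > hi))[0])

    A faithful reproduction would need the paper's actual chromosome encoding, a crossover operator, and training windows chosen for geomagnetically quiet conditions.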

  2. Stress triggering of the 1999 Hector Mine earthquake by transient deformation following the 1992 Landers earthquake

    USGS Publications Warehouse

    Pollitz, F.F.; Sacks, I.S.

    2002-01-01

    The M 7.3 June 28, 1992 Landers and M 7.1 October 16, 1999 Hector Mine earthquakes, California, both right-lateral strike-slip events on NNW-trending subvertical faults, occurred in close proximity in space and time in a region where recurrence times for surface-rupturing earthquakes are thousands of years. This suggests a causal role for the Landers earthquake in triggering the Hector Mine earthquake. Previous modeling of the static stress change associated with the Landers earthquake shows that the area of peak Hector Mine slip lies where the Coulomb failure stress promoting right-lateral strike-slip failure was high, but the nucleation point of the Hector Mine rupture was neutrally to weakly promoted, depending on the assumed coefficient of friction. Possible explanations that could account for the 7-year delay between the two ruptures include background tectonic stressing, dissipation of fluid pressure gradients, rate- and state-dependent friction effects, and post-Landers viscoelastic relaxation of the lower crust and upper mantle. By employing a viscoelastic model calibrated by geodetic data collected during the time period between the Landers and Hector Mine events, we calculate that postseismic relaxation produced a transient increase in Coulomb failure stress of about 0.7 bars on the impending Hector Mine rupture surface. The increase is greatest over the broad surface that includes the 1999 nucleation point and the site of peak slip further north. Since stress changes of magnitude greater than or equal to 0.1 bar are associated with documented causal fault interactions elsewhere, viscoelastic relaxation likely contributed to the triggering of the Hector Mine earthquake. This interpretation relies on the assumption that the faults occupying the central Mojave Desert (i.e., both the Landers and Hector Mine rupturing faults) were critically stressed just prior to the Landers earthquake.
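
    The triggering argument rests on the change in Coulomb failure stress. As a reminder of the quantity being modeled, here is the standard formula in a minimal sketch; the input values below are invented so that the result matches the ~0.7 bar transient the authors report, and are not taken from their model.

      def coulomb_failure_stress_change(d_shear, d_normal, mu_eff):
          """Standard form: dCFS = d_tau + mu' * d_sigma_n, with the normal
          stress change positive for unclamping (tension)."""
          return d_shear + mu_eff * d_normal

      # Illustrative values only: 0.5 bar shear increase, 0.5 bar unclamping,
      # effective friction 0.4 gives dCFS = 0.7 bar, the magnitude reported.
      print(coulomb_failure_stress_change(0.5, 0.5, 0.4))   # 0.7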

  3. Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability.

    PubMed

    Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco

    2018-03-28

    Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
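
    Hurst's rescaled range (R/S) analysis is straightforward to reproduce. The sketch below estimates H as the slope of log(R/S) against log(window length); the synthetic white-noise input (H ≈ 0.5) merely stands in for the cumulative seismic-moment series the authors analyze.

      import numpy as np

      def rescaled_range(x):
          """R/S statistic of one window."""
          y = np.cumsum(x - x.mean())            # cumulative deviation from mean
          return (y.max() - y.min()) / x.std(ddof=1)

      def hurst_exponent(series, min_win=8):
          """Estimate H from the slope of log(R/S) versus log(window length)."""
          n = len(series)
          sizes, rs = [], []
          win = min_win
          while win <= n // 2:
              chunks = [series[i:i + win] for i in range(0, n - win + 1, win)]
              vals = [rescaled_range(np.asarray(c)) for c in chunks if np.std(c) > 0]
              if vals:
                  sizes.append(win)
                  rs.append(np.mean(vals))
              win *= 2
          slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
          return slope

      rng = np.random.default_rng(1)
      print(hurst_exponent(rng.standard_normal(4096)))   # uncorrelated noise: H ~ 0.5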

  4. Earthquake recovery of historic buildings: exploring cost and time needs.

    PubMed

    Al-Nammari, Fatima M; Lindell, Michael K

    2009-07-01

    Disaster recovery of historic buildings has rarely been investigated even though the available literature indicates that they face special challenges. This study examines buildings' recovery time and cost to determine whether their functions (that is, their use) and their status (historic or non-historic) affect these outcomes. The study uses data from the city of San Francisco after the 1989 Loma Prieta earthquake to examine the recovery of historic buildings owned by public agencies and non-governmental organisations. The results show that recovery cost is affected by damage level, construction type and historic status, whereas recovery time is affected by the same variables and also by building function. The study points to the importance of pre-incident recovery planning, especially for building functions that have shown delayed recovery. Also, the study calls attention to the importance of further investigations into the challenges facing historic building recovery.

  5. Analysis of the Seismicity Preceding Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2016-12-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations of our capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether spatial-temporal variations in seismicity encode some information on the magnitude of future earthquakes. For this purpose, and to verify the universality of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Zaliapin (2013) to distinguish triggered and background earthquakes, using nearest-neighbor clustering analysis in a two-dimensional plane defined by rescaled time and space. In particular, we generalize the nearest-neighbor metric to a k-nearest-neighbors clustering analysis that allows us to consider the overall space-time-magnitude distribution of the k earthquakes (k foreshocks) which anticipate one target event (the mainshock); we then analyze the statistical properties of the clusters identified in this rescaled space. In essence, the main goal of this study is to verify whether different classes of mainshock magnitude are characterized by distinctive k-foreshock distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
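
    For readers unfamiliar with Zaliapin's rescaled space-time metric, the sketch below computes the standard nearest-neighbor distance from each event to its closest earlier event; the b-value, fractal dimension, and the 0.1 km distance floor are illustrative choices, not the paper's settings.

      import numpy as np

      def nearest_neighbor_eta(t, x, y, m, b=1.0, df=1.6):
          """For time-sorted events, eta[j] = min over earlier events i of
          dt_ij * r_ij**df * 10**(-b * m_i), i.e. Zaliapin's rescaled distance.
          t: origin times; x, y: epicentral coordinates (km); m: magnitudes."""
          n = len(t)
          eta = np.full(n, np.inf)
          parent = np.full(n, -1)
          for j in range(1, n):
              dt = t[j] - t[:j]
              r = np.hypot(x[j] - x[:j], y[j] - y[:j])
              d = dt * np.maximum(r, 0.1) ** df * 10.0 ** (-b * m[:j])
              i = int(np.argmin(d))
              eta[j], parent[j] = d[i], i
          return eta, parent

    The paper's k-nearest-neighbors generalization would keep the k smallest values of d (e.g. np.argsort(d)[:k]) for each target event rather than only the minimum.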

  6. Spatial Distribution of earthquakes off the coast of Fukushima Two Years after the M9 Earthquake: the Southern Area of the 2011 Tohoku Earthquake Rupture Zone

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Nakahigashi, K.; Shinohara, M.; Mochizuki, K.; Shiobara, H.

    2014-12-01

    Huge earthquakes cause vast stress-field changes around their rupture zones, and many aftershocks and other related geophysical phenomena, such as geodetic movements, have been observed. It is important to determine the spatiotemporal distribution of seismicity during the relaxation process in order to understand the giant earthquake cycle. In this study, we focus on the southern rupture area of the 2011 Tohoku earthquake (M9.0), where the seismicity rate remains high compared with that before the 2011 earthquake. Many studies using ocean bottom seismometers (OBSs) have been conducted since soon after the 2011 Tohoku earthquake in order to characterize the aftershock activity precisely. Here we present one such study off the coast of Fukushima, in the southern part of the rupture area of the 2011 Tohoku earthquake. We deployed 4 broadband OBSs (BBOBSs) and 12 short-period OBSs (SOBSs) in August 2012. Another 4 BBOBSs equipped with absolute pressure gauges and 20 SOBSs were added in November 2012. We recovered 36 OBSs, including 8 BBOBSs, in November 2013. We selected 1,000 events in the vicinity of the OBS network based on a hypocenter catalog published by the Japan Meteorological Agency, and extracted the data after correcting for each instrument's internal clock. P and S wave arrival times, P wave polarity and maximum amplitude were picked manually on a computer display. We assumed a one-dimensional velocity structure based on the result of an active-source experiment across our network, and applied station corrections to remove the ambiguity of the assumed structure. We then adopted a maximum-likelihood estimation technique and calculated the hypocenters. The results show intensive activity near the Japan Trench, with a quiet seismic zone between the trench zone and the landward high-activity zone.
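
    Under Gaussian pick errors, the maximum-likelihood hypocenter reduces to least squares. The toy grid-search version below uses a uniform velocity standing in for the authors' 1D model and station corrections; the grid limits and the 6 km/s velocity are assumptions for illustration.

      import numpy as np

      def locate(stations, t_obs, v=6.0):
          """Grid-search hypocenter under Gaussian errors (maximum likelihood =
          least squares). stations: (n, 3) x, y, z in km; t_obs: P arrivals (s)."""
          xs = np.arange(-50, 51, 2.0)
          zs = np.arange(2, 41, 2.0)
          best = (np.inf, None)
          for x in xs:
              for y in xs:
                  for z in zs:
                      tt = np.linalg.norm(stations - [x, y, z], axis=1) / v
                      t0 = np.mean(t_obs - tt)            # optimal origin time
                      rss = np.sum((t_obs - t0 - tt) ** 2)
                      if rss < best[0]:
                          best = (rss, (x, y, z, t0))
          return best[1]

      sta = np.array([[10., 0., 0.], [-20., 15., 0.], [5., -25., 0.], [0., 30., 0.]])
      true = np.array([8., -4., 12.])
      obs = np.linalg.norm(sta - true, axis=1) / 6.0 + 30.0   # origin time 30 s
      print(locate(sta, obs))   # recovers ~(8, -4, 12, 30)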

  7. Modified Mercalli Intensity for scenario earthquakes in Evansville, Indiana

    USGS Publications Warehouse

    Cramer, Chris; Haase, Jennifer; Boyd, Oliver

    2012-01-01

    Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the fact that Evansville is close to the Wabash Valley and New Madrid seismic zones, there is concern about the hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake. Earthquake-hazard maps provide one way of conveying such estimates of strong ground shaking and will help the region prepare for future earthquakes and reduce earthquake-caused losses.

  8. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip-rate, and geodetic strain-rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times, and used this special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
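
    The "corner magnitude" form is not written out in the abstract; a common parameterization of such a modified Gutenberg-Richter law is the tapered (Kagan-style) distribution, sketched below with illustrative parameter values only.

      import numpy as np

      def moment(m):
          """Scalar seismic moment (N·m) from moment magnitude."""
          return 10.0 ** (1.5 * m + 9.05)

      def tapered_gr_rate(m, a_rate, m_min, b=1.0, m_corner=8.0):
          """Rate of events with magnitude >= m under a tapered Gutenberg-
          Richter law: a power law in moment (beta = 2b/3) multiplied by an
          exponential taper at the corner moment."""
          beta = 2.0 * b / 3.0
          M, Mt, Mc = moment(m), moment(m_min), moment(m_corner)
          return a_rate * (Mt / M) ** beta * np.exp((Mt - M) / Mc)

      # E.g., with an assumed 10 events/yr above M5.4, the rate above M8:
      print(tapered_gr_rate(8.0, 10.0, 5.4))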

  9. Moderate-magnitude earthquakes induced by magma reservoir inflation at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Wauthier, Christelle; Roman, Diana C.; Poland, Michael P.

    2013-01-01

    Although volcano-tectonic (VT) earthquakes often occur in response to magma intrusion, it is rare for them to have magnitudes larger than ~M4. On 24 May 2007, two shallow M4+ earthquakes occurred beneath the upper part of the east rift zone of Kīlauea Volcano, Hawai‘i. An integrated analysis of geodetic, seismic, and field data, together with Coulomb stress modeling, demonstrates that the earthquakes occurred due to strike-slip motion on pre-existing faults that bound Kīlauea Caldera to the southeast and that the pressurization of Kīlauea's summit magma system may have been sufficient to promote faulting. For the first time, we infer a plausible origin to generate rare moderate-magnitude VTs at Kīlauea by reactivation of suitably oriented pre-existing caldera-bounding faults. Rare moderate- to large-magnitude VTs at Kīlauea and other volcanoes can therefore result from reactivation of existing fault planes due to stresses induced by magmatic processes.

  10. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    NASA Astrophysics Data System (ADS)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have been shown to be able to find specific features in seismic data sets. In space, these networks have shown scale-free behavior of the probability distribution of connectivity for directed networks, and small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near Iquique) in April 2014. An earthquake complex network is made by dividing the three-dimensional space into cubic cells; if one of these cells contains a hypocenter, we call this cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. We then have two different networks: a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we consider the connectivity ki of the i-th node to be the number of connections going out of node i plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake in this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
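
    The cell-based construction described here is easy to sketch. Assuming time-ordered hypocenters and an arbitrary 5 km cell size, the following builds the directed network (with self-connections counted in the out-degree, as described) and the undirected network derived from it.

      import numpy as np
      from collections import Counter

      def build_network(hypocenters, cell=5.0):
          """Directed earthquake network: bin time-ordered hypocenters into
          cubic cells; consecutive events connect their cells with a directed
          edge. Successive events in the same cell give a self-connection."""
          nodes = [tuple(np.floor(h / cell).astype(int)) for h in hypocenters]
          edges = Counter(zip(nodes[:-1], nodes[1:]))     # directed, weighted
          k_out = Counter()
          for (a, b), w in edges.items():
              k_out[a] += w                               # includes self-loops
          undirected = {frozenset(e) for e in edges if e[0] != e[1]}
          return edges, k_out, undirected

      rng = np.random.default_rng(2)
      hypos = rng.uniform(0, 100, (500, 3))   # synthetic time-ordered x, y, z (km)
      edges, k_out, und = build_network(hypos)
      print(len(edges), len(und))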

  11. Large earthquakes and creeping faults

    USGS Publications Warehouse

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  12. Local Earthquake Tomography in the Eifel Region, Middle Europe

    NASA Astrophysics Data System (ADS)

    Gaensicke, H.

    2001-12-01

    The aim of the Eifel Plume project is to verify the existence of an assumed mantle plume responsible for the Tertiary and Quaternary volcanism in the Eifel region of midwest Germany. During a large passive and semi-active seismological experiment (November 1997 - June 1998), about 160 mobile broadband and short-period stations were operated in addition to about 100 permanent stations in the area of interest. The stations recorded teleseismic and local events. Local events are used to obtain a three-dimensional tomographic model of seismic velocities in the crust. Since local earthquake tomography requires a large set of crustal travel paths, seismograms of local events recorded from July 1998 to June 2001 by permanent stations were added to the Eifel Plume data set. In addition to providing travel time corrections for the teleseismic tomography of the upper mantle, the new 3D velocity model should improve the precision of locations of local events. From a total of 832 local seismic events, 172 were identified as tectonic earthquakes. The other events were either quarry blasts or shallow mine-induced seismic events. The locations of 60 quarry blasts are known, and for 30 of them the firing time was measured during the field experiment. Since the origin time and location of these events are known with high precision, they are used to validate inverted velocity models. Station corrections from simultaneous 1D inversion of local earthquake travel times and hypocenters are in good agreement with travel time residuals calculated from teleseismic rays. A strong azimuthal dependency of travel time residuals resulting from a 1D velocity model was found for quarry blasts with hypocenters in the volcanic field in the center of the Eifel. Simultaneous 3D inversion calculations show strong heterogeneities in the upper crust and a negative anomaly for P-wave velocities in the lower crust. The latter could indicate either a low velocity zone close to the Moho or subsidence of the Moho. We

  13. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred during the last decade in Indonesia. These experiences are very important lessons for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and to make clear the characteristic physical behavior of tsunamis near the coast. We studied two tsunami events: the 2004 Indian Ocean tsunami and the 2010 Mentawai slow-earthquake tsunami. Many videos and photographs were taken by people at some places during the 2004 Indian Ocean tsunami disaster; nevertheless, these covered only a few restricted points, and the tsunami behavior at other places remained unknown. In this study, we tried to collect extensive information about tsunami behavior, not only at many places but also over a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no impressive photos. To collect detailed information about the evacuation process from tsunamis, we devised an interview method. This method involves making pictures of the tsunami experience from the scenes of victims' stories. In the 2004 Aceh case, none of the survivors knew about tsunami phenomena: because there had been no big earthquakes with tsunamis for one hundred years in the Sumatra region, the public had no knowledge about tsunamis. This situation was much improved in the 2010 Mentawai case. TV programs and NGO or governmental public education programs about tsunami evacuation are widespread in Indonesia, and many people now have fundamental knowledge of earthquake and tsunami disasters. We made a drill book based on the victims' stories and painted impressive scenes of the two events. We used the drill book in a disaster education event for a school committee in West Java. About 80% of the students and teachers evaluated the contents of the drill book as useful for correct understanding.

  14. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2005

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Estes, Steve; McNutt, Stephen R.

    2006-01-01

    description of earthquake detection, recording, analysis, and data archival systems; (3) a description of seismic velocity models used for earthquake locations; (4) a summary of earthquakes located in 2005; and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, and location quality statistics; daily station usage statistics; and all HYPOELLIPSE files used to determine the earthquake locations in 2005.

  15. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2003

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Moran, Seth C.; Sanchez, John J.; McNutt, Stephen R.; Estes, Steve; Paskievitch, John

    2004-01-01

    description of earthquake detection, recording, analysis, and data archival systems; (3) a description of velocity models used for earthquake locations; (4) a summary of earthquakes located in 2003; and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, and location quality statistics; daily station usage statistics; and all HYPOELLIPSE files used to determine the earthquake locations in 2003.

  16. A smartphone application for earthquakes that matter!

    NASA Astrophysics Data System (ADS)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    Smartphone applications have swiftly become one of the most popular tools for rapid reception of earthquake information for the public, some of them having been downloaded more than 1 million times! The advantages are obvious: wherever someone is located, they can be automatically informed when an earthquake has struck. Just by setting a magnitude threshold and an area of interest, there is no longer any need to browse the internet, as the information reaches you automatically and instantaneously! One question remains: are the provided earthquake notifications always relevant for the public? Which earthquakes really matter to laypeople? One clue comes from newspaper reports showing that, a while after damaging earthquakes, many eyewitnesses scrap the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and the majority of the notifications relate to unfelt earthquakes, thereby only increasing anxiety among the population at each new update. Felt and damaging earthquakes are the ones that matter the most for the public (and authorities). They are the ones of societal importance, even when of small magnitude. A smartphone application developed by EMSC (Euro-Med Seismological Centre), with the financial support of the Fondation MAIF, aims at providing suitable notifications for earthquakes by collating different information threads covering tsunamigenic, potentially damaging and felt earthquakes. Tsunamigenic earthquakes are considered here to be those that are the subject of alert or information messages from the PTWC (Pacific Tsunami Warning Centre), while potentially damaging earthquakes are identified through an automated system called EQIA (Earthquake Qualitative Impact Assessment), developed and operated at EMSC, which rapidly assesses earthquake impact by comparing the population exposed to each expected

  17. Earthquake Prediction in Large-scale Faulting Experiments

    NASA Astrophysics Data System (ADS)

    Junger, J.; Kilgore, B.; Beeler, N.; Dieterich, J.

    2004-12-01

    We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus earthquakes follow a period of controlled, constant-rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface is induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ˜25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998), and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on a probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of

  18. Earthquakes, gravity, and the origin of the Bali Basin: An example of a Nascent Continental Fold-and-Thrust Belt

    NASA Astrophysics Data System (ADS)

    McCaffrey, Robert; Nabelek, John

    1987-01-01

    We infer from the bathymetry and gravity field and from the source mechanisms and depths of the eight largest earthquakes in the Bali region that the Bali Basin is a downwarp in the crust of the Sunda Shelf produced and maintained by thrusting along the Flores back arc thrust zone. Earthquake source mechanisms and focal depths are inferred from the inversion of long-period P and SH waves for all events and short-period P waves for two of the events. Centroidal depths that give the best fit to the seismograms range from 10 to 18 km, but uncertainties in depth allow a range from 7 to 24 km. The P wave nodal planes that dip south at 13° to 35° (±7°) strike roughly parallel to the volcanic arc and are consistent with thrusting of crust of the Bali Basin beneath it. The positions of the earthquakes with respect to crustal features inferred from seismic and gravity data suggest that the earthquakes occur in the basement along the western end of the Flores thrust zone. The slip direction for the back arc thrust zone inferred from the orientation of the earthquake slip vectors indicates that the thrusting in the Bali Basin is probably part of the overall plate convergence, as it roughly coincides with the convergence direction between the Sunda arc and the Indian Ocean plate. Summation of seismic moments of earthquakes between 1960 and 1985 suggests a minimum rate of convergence across the thrust zone of 4 ± 2 mm/a. The presence of back arc thrusting suggests that some coupling between the Indian Ocean plate and the Sunda arc occurs but mechanisms such as continental collision or a shallow subduction of the Indian Ocean plate probably can be ruled out. The present tectonic setting and structure of the Bali Basin is comparable to the early forelands of the Andes or western North America in that a fold-and-thrust belt is forming on the continental side of an arc-trench system at which oceanic lithosphere is being subducted. The Bali Basin is flanked by the Tertiary Java

  19. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis based on the values of the first two statistical moments of the distribution shows a rapid increase in these higher moments. However, the observed catalogue values of skewness and kurtosis rise even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore, for small time intervals, we propose using empirical number distributions, appropriately smoothed, for testing forecasted earthquake numbers.
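
    For reference, the negative binomial distribution's skewness and excess kurtosis follow directly from its first two moments; the sketch below matches an NBD to a given mean and variance (the numbers are illustrative, not catalogue values).

      import math

      def nbd_shape(mean, var):
          """Skewness and excess kurtosis of a negative binomial distribution
          matched to the first two moments (requires var > mean)."""
          p = mean / var                   # NBD success probability
          r = mean ** 2 / (var - mean)     # NBD size parameter
          skew = (2.0 - p) / math.sqrt(r * (1.0 - p))
          ex_kurt = 6.0 / r + p ** 2 / (r * (1.0 - p))
          return skew, ex_kurt

      # Overdispersed counts (variance well above the mean) give a strongly
      # skewed, heavy-tailed distribution:
      print(nbd_shape(5.0, 25.0))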

  20. Building an 18 000-year-long paleo-earthquake record from detailed deep-sea turbidite characterisation in Poverty Bay, New Zealand

    NASA Astrophysics Data System (ADS)

    Pouderoux, H.; Lamarche, G.; Proust, J.-N.

    2012-06-01

    Two ~20 m-long sedimentary cores collected in two neighbouring mid-slope basins of the Paritu Turbidite System in Poverty Bay, east of New Zealand, show a high concentration of turbidites (5 to 6 turbidites per meter), interbedded with hemipelagites, tephras and a few debrites. Turbidites occur both stacked and single, and exhibit a range of facies from muddy to sandy turbidites. The age of each turbidite is estimated using the statistical approach developed in the OxCal software from an exceptionally dense set of tephrochronology and radiocarbon ages (~1 age per meter). The age, together with the facies and the petrophysical properties of the sediment (density, magnetic susceptibility and P-wave velocity), allows the correlation of turbidites across the continental slope (1400-2300 m water depth). We identify 73 synchronous turbidites, named basin events, across the two cores between 819 ± 191 and 17 729 ± 701 yr BP. Compositional, foraminiferal and geochemical signatures of the turbidites are used to characterise the source area of the sediment, the origin of the turbidity currents, and their triggering mechanism. Sixty-seven basin events are interpreted as originating from slope failures on the upper continental slope at water depths ranging from 150 to 1200 m. Their earthquake trigger is inferred from the heavily gullied morphology of the source area and the water depth at which the slope failures originated. We derive an earthquake mean return time of ~230 yr, with a 90% probability range from 10 to 570 yr. The earthquake chronology indicates cycles of progressive decrease of earthquake return times from ~400 yr to ~150 yr at 0-7 kyr, 8.2-13.5 kyr, and 14.7-18 kyr. The two 1.2 kyr-long intervals in between (7-8.2 kyr and 13.5-14.7 kyr) correspond to basin-wide reorganisations with anomalous turbidite deposition (finer deposits and/or non-deposition) reflecting the emplacement of two large mass transport deposits much more voluminous than the "classical" earthquake

  1. Integrated SeismoGeodetic System with High-Resolution, Real-Time GNSS and Accelerometer Observation for Earthquake Early Warning Application.

    NASA Astrophysics Data System (ADS)

    Passmore, P. R.; Jackson, M.; Zimakov, L. G.; Raczka, J.; Davidson, P.

    2014-12-01

    The key requirements for Earthquake Early Warning and other Rapid Event Notification Systems are: quick delivery of digital data from a field station to the acquisition and processing center, and data integrity for real-time earthquake notification, in order to provide warning prior to significant ground shaking in the given target area. These two requirements are met in the recently developed Trimble SG160-09 SeismoGeodetic System, which integrates both GNSS and acceleration measurements using a Kalman filter algorithm to create a new high-rate (200 sps), real-time displacement with sufficient accuracy and very low latency for rapid delivery of the acquired data to a processing center. The data acquisition algorithm in the SG160-09 System provides output of both acceleration and displacement digital data with 0.2 sec delay. This is a significant reduction in the time interval required for real-time transmission compared to the data delivery algorithms available in digitizers currently used in other Earthquake Early Warning networks. Both acceleration and displacement data are recorded and transmitted to the processing site in a specially developed Multiplexed Recording Format (MRF) that minimizes the bandwidth required for real-time data transmission. In addition, a built-in algorithm calculates τc and Pd once an event is declared. The SG160-09 System keeps track of what data has not been acknowledged and re-transmits the data, giving priority to current data. A modified REF TEK Protocol Daemon (RTPD) receives the digital data and acknowledges data received without error. It forwards this "good" data to processing clients of various real-time data processing software, including Earthworm and SeisComP3. The processing clients cache packets when a data gap occurs due to a dropped packet or network outage. The cache packet time is settable, but should not exceed 0.5 sec in the Earthquake Early Warning network configuration. The rapid data transmission algorithm was tested
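
    The abstract does not define τc and Pd; in the early-warning literature (e.g. Kanamori) they are usually the period parameter and peak displacement of the first few seconds of the P wave. A sketch under that assumption, with the 3 s window as a typical, assumed choice:

      import numpy as np

      def tau_c_pd(disp, vel, dt, window=3.0):
          """Assumed definitions: tau_c = 2*pi*sqrt(int u^2 dt / int v^2 dt)
          and Pd = peak |displacement|, over the first `window` seconds after
          the P arrival (u = displacement trace, v = velocity trace)."""
          n = int(window / dt)
          u, v = disp[:n], vel[:n]
          r = np.sum(v ** 2) / np.sum(u ** 2)   # dt cancels in the ratio
          return 2.0 * np.pi / np.sqrt(r), np.max(np.abs(u))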

  2. The study of key issues about integration of GNSS and strong-motion records for real-time earthquake monitoring

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai

    2016-08-01

    This paper studies the key issues in the integration of GNSS and strong-motion records for real-time earthquake monitoring. The validations show that the consistency of the coordinate systems must be considered first, to exclude the systematic bias between GNSS and strong-motion records. A GNSS sampling rate of about 1-5 Hz is suggested, and the strong-motion baseline shift should be modeled with a larger dynamic noise, as its variation is very swift. The initialization time for solving the baseline shift is less than one minute, and an ambiguity resolution strategy does not greatly improve the solution. Data quality is very important for the solution; we advise using multi-frequency and multi-system observations. These ideas give important guidance for real-time earthquake monitoring and early warning by the tight integration of GNSS and strong-motion records.
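
    The paper's tight integration is not specified in detail here; the following is a deliberately minimal 1-D sketch of the general scheme: the 200 sps accelerometer drives the Kalman prediction, lower-rate GNSS displacement updates it, and the baseline shift is carried as a bias state with large process noise, in the spirit of the authors' advice. All rates and noise values are assumptions.

      import numpy as np

      def fuse(acc, gnss, dt=0.005, gnss_every=40, q_bias=1e-4):
          """State = [displacement, velocity, acceleration-bias]; the measured
          acceleration minus the bias state propagates the motion, and GNSS
          displacement observations correct it."""
          x = np.zeros(3)
          P = np.eye(3)
          F = np.array([[1.0, dt, -0.5 * dt ** 2],
                        [0.0, 1.0, -dt],
                        [0.0, 0.0, 1.0]])
          B = np.array([0.5 * dt ** 2, dt, 0.0])
          Q = np.diag([1e-8, 1e-6, q_bias])   # large process noise on the bias
          H = np.array([[1.0, 0.0, 0.0]])
          R = np.array([[1e-4]])              # ~1 cm GNSS noise (variance, m^2)
          out = []
          for k, a in enumerate(acc):
              x = F @ x + B * a               # predict with raw acceleration
              P = F @ P @ F.T + Q
              if k % gnss_every == 0 and k // gnss_every < len(gnss):
                  y = gnss[k // gnss_every] - H @ x
                  S = H @ P @ H.T + R
                  K = P @ H.T @ np.linalg.inv(S)
                  x = x + (K @ y).ravel()
                  P = (np.eye(3) - K @ H) @ P
              out.append(x[0])
          return np.array(out)

      rng = np.random.default_rng(5)
      t = np.arange(0, 10, 0.005)
      true_d = 0.02 * np.sin(np.pi * t)                        # 2 cm oscillation
      acc = -0.02 * np.pi ** 2 * np.sin(np.pi * t) + 0.01      # + constant bias
      gnss = true_d[::40] + rng.normal(0, 0.01, len(t[::40]))
      d_est = fuse(acc, gnss)                                  # tracks true_d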

  3. A local earthquake coda magnitude and its relation to duration, moment M sub O, and local Richter magnitude M sub L

    NASA Technical Reports Server (NTRS)

    Suteau, A. M.; Whitcomb, J. H.

    1977-01-01

    A relationship was found between the seismic moment, M sub O, of shallow local earthquakes and the total duration of the signal, t, in seconds, measured from the earthquake's origin time, assuming that the end of the coda is composed of backscattering surface waves caused by lateral heterogeneity in the shallow crust, following Aki. Using the linear relationship between the logarithm of M sub O and the local Richter magnitude M sub L, a relationship between M sub L and t was found. This relationship was used to calculate a coda magnitude M sub C, which was compared to M sub L for Southern California earthquakes which occurred during the period from 1972 to 1975.
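
    The derived M sub L versus duration relationship is of the familiar duration-magnitude form; a sketch with placeholder coefficients of a plausible order (close to classic Southern California duration-magnitude constants, but not the values fitted in this paper):

      import math

      def coda_magnitude(t, c0=-0.87, c1=2.0):
          """M_C = c0 + c1*log10(t), with t the total signal duration in
          seconds measured from the origin time. c0 and c1 are placeholder
          constants, not the coefficients derived in this paper."""
          return c0 + c1 * math.log10(t)

      print(coda_magnitude(120.0))   # ~3.29 with these placeholder constants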

  4. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ = 0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo, in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution: the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone
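
    The gamma-law comparison in Model (1) can be reproduced with standard tools; a sketch on synthetic waiting times, where a fitted shape parameter below 1 signals the observed excess of short waits relative to a Poisson process:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      waits = rng.gamma(shape=0.67, scale=30.0, size=2000)  # synthetic waits (days)

      # Fit the gamma law with the origin fixed at zero.
      shape_hat, _, scale_hat = stats.gamma.fit(waits, floc=0.0)
      print(shape_hat)   # ~0.67, i.e. clustered, non-Poissonian

      # Kolmogorov-Smirnov check against an exponential with the same mean:
      print(stats.kstest(waits, "expon", args=(0.0, waits.mean())))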

  5. Insignificant solar-terrestrial triggering of earthquakes

    USGS Publications Warehouse

    Love, Jeffrey J.; Thomas, Jeremy N.

    2013-01-01

    We examine the claim that solar-terrestrial interaction, as measured by sunspots, solar wind velocity, and geomagnetic activity, might play a role in triggering earthquakes. We count the number of earthquakes having magnitudes that exceed chosen thresholds in calendar years, months, and days, and we order these counts by the corresponding rank of annual, monthly, and daily averages of the solar-terrestrial variables. We measure the statistical significance of the difference between the earthquake-number distributions below and above the median of the solar-terrestrial averages by χ2 and Student's t tests. Across a range of earthquake magnitude thresholds, we find no consistent and statistically significant distributional differences. We also introduce time lags between the solar-terrestrial variables and the number of earthquakes, but again no statistically significant distributional difference is found. We cannot reject the null hypothesis of no solar-terrestrial triggering of earthquakes.
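
    The counting-and-ranking test is simple to restate in code. A sketch on synthetic data (the Poisson earthquake counts and Gaussian solar-terrestrial averages are stand-ins, and the +1 smoothing of the contingency table is a crude guard against empty cells, not part of the paper's method):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      counts = rng.poisson(5.0, 120)        # synthetic monthly earthquake counts
      solar = rng.normal(60.0, 30.0, 120)   # matching solar-terrestrial averages

      below = counts[solar <= np.median(solar)]
      above = counts[solar > np.median(solar)]

      # Student's t on the means; chi-squared on the binned count distributions.
      print(stats.ttest_ind(below, above, equal_var=False))
      bins = np.arange(0, 13)
      table = np.array([np.histogram(below, bins)[0] + 1,
                        np.histogram(above, bins)[0] + 1])  # +1 guards empty cells
      chi2, p, dof, _ = stats.chi2_contingency(table)
      print(chi2, p)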

  6. A new Bayesian Inference-based Phase Associator for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Meier, Men-Andrin; Heaton, Thomas; Clinton, John; Wiemer, Stefan

    2013-04-01

    whether the incoming waveforms are consistent with the amplitude and frequency patterns of local earthquakes, by means of a maximum likelihood approach. If such a single-station event likelihood is larger than a predefined threshold value, we check whether there are neighboring stations that also have single-station event likelihoods above the threshold. If this is the case for at least one other station, we evaluate whether the respective relative arrival times are in agreement with a common earthquake origin (assuming a simple velocity model and using an Equal Differential Time location scheme). Additionally, we check whether there are stations where, given the preliminary location, observations would be expected but were not reported ("not-yet-arrived data"). Together, the single-station event likelihood functions and the location likelihood function constitute the multi-station event likelihood function. This function can then be combined with various types of prior information (such as station noise levels, preceding seismicity, fault proximity, etc.) to obtain a Bayesian posterior distribution, representing the degree of belief that the ensemble of the current real-time observations corresponds to a local earthquake, rather than to some other signal source irrelevant for EEW. In addition to reducing the size of the blind zone, this approach facilitates the eventual development of an end-to-end probabilistic framework for an EEW system that provides systematic real-time assessment of the risk of false alerts, which enables end users of EEW to implement damage mitigation strategies only above a specified certainty level.
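
    The Equal Differential Time scheme exploits the fact that the unknown origin time cancels in arrival-time differences. A toy grid-search version under a uniform-velocity assumption (grid extent, step, and the 6 km/s velocity are all illustrative; a real system would use travel times from its velocity model):

      import numpy as np
      from itertools import combinations

      def edt_locate(stations, arrivals, v=6.0, half=50.0, step=2.0):
          """Score each trial source by the misfit between predicted and
          observed differential arrival times over all station pairs; the
          origin time never enters. stations: (n, 3) km; arrivals: seconds."""
          pairs = list(combinations(range(len(stations)), 2))
          xs = np.arange(-half, half + step, step)
          zs = np.arange(2.0, 30.0, step)
          best = (np.inf, None)
          for x in xs:
              for y in xs:
                  for z in zs:
                      tt = np.linalg.norm(stations - [x, y, z], axis=1) / v
                      mis = sum(abs((arrivals[i] - arrivals[j]) - (tt[i] - tt[j]))
                                for i, j in pairs)
                      if mis < best[0]:
                          best = (mis, (x, y, z))
          return best[1]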

  7. Urban Landslides Induced by the 2004 Niigata-Chuetsu Earthquake

    NASA Astrophysics Data System (ADS)

    Kamai, T.; Trandafir, A. C.; Sidle, R. C.

    2005-05-01

    Landslides triggered by the Chuetsu earthquake occurred in artificial slopes of some new developments in suburban Nagaoka, the largest city in the affected area. The landslides occurred in hilly terrain of the eastern part of Nagaoka between the alluvial plain and Tertiary folded mountains of Yamakoshi. Although the extent of landslides in urban Nagaoka was small compared with landslides on natural slopes (especially near Yamakoshi), they represent an important case study for urban landslide disasters. Slope instabilities in urban residential areas were classified as: A) landslides in steep embankments; B) landslides in gently sloping artificial valley fills; C) re-activation of old landslides; and D) liquefaction in deep artificial valley fills. All these failures occurred in relatively uniform suburban landscapes, which were significantly modified from the original landforms. Recent destructive earthquakes in Japan caused similar types of slope failures in urban regions, suggesting that lessons from past earthquakes were not implemented. The greatest damage due to type-A failures occurred in the 25-yr old Takamachi residential area, where about 70 of 522 homes were judged to be uninhabitable. Before development, this area was an isolated hill (90 m elevation) with an adjacent terrace (60 m elevation) consisting of gravel, sand, and silt of the lower to middle Pleistocene deposits. Development earthworks removed the hill crest and created a wide plateau (70 m elevation); excavated soil was placed on the perimeter as an embankment. During the earthquake, the embankment slope collapsed, including retaining walls, perimeter road, and homes. The most serious damage occurred in five places around the margin of the plateau corresponding to shallow valley fills (5 to 8 m thick). Earthquake response analyses using an equivalent linear model indicated the amplification of seismic waves at the surface of embankment slopes, and the peak earthquake acceleration exceeded 1 G

  8. Cyclic migration of weak earthquakes between Lunigiana earthquake of October 10, 1995 and Reggio Emilia earthquake of October 15, 1996 (Northern Italy)

    NASA Astrophysics Data System (ADS)

    di Giovambattista, R.; Tyupkin, Yu

    The cyclic migration of weak earthquakes (M 2.2) which occurred during the year prior to the October 15, 1996 (M = 4.9) Reggio Emilia earthquake is discussed in this paper. The onset of this migration was associated with the occurrence of the October 10, 1995 (M = 4.8) Lunigiana earthquake about 90 km southwest from the epicenter of the Reggio Emilia earthquake. At least three series of earthquakes migrating from the epicentral area of the Lunigiana earthquake in the northeast direction were observed. The migration of earthquakes of the first series terminated at a distance of about 30 km from the epicenter of the Reggio Emilia earthquake. The earthquake migration of the other two series halted at about 10 km from the Reggio Emilia epicenter. The average rate of earthquake migration was about 200-300 km/year, while the time of recurrence of the observed cycles varied from 68 to 178 days. Weak earthquakes migrated along the transversal fault zones and sometimes jumped from one fault to another. A correlation between the migrating earthquakes and tidal variations is analysed. We discuss the hypothesis that the analyzed area is in a state of stress approaching the limit of the long-term durability of crustal rocks and that the observed cyclic migration is a result of a combination of a more or less regular evolution of tectonic and tidal variations.

  9. Earthquake Information System

    NASA Technical Reports Server (NTRS)

    1991-01-01

    IAEMIS (Integrated Automated Emergency Management Information System) is the principal tool of an earthquake preparedness program developed by Martin Marietta and the Mid-America Remote Sensing Center (MARC). It is a two-component set of software, data and procedures to provide information enabling management personnel to make informed decisions in disaster situations. The NASA-developed program ELAS, originally used to analyze Landsat data, provides MARC with a spatially-oriented information management system. Additional MARC projects include land resources management, and development of socioeconomic data.

  10. GIS-Based System for Post-Earthquake Crisis Management Using Cellular Network

    NASA Astrophysics Data System (ADS)

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-09-01

    Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters like earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas, and the search and rescue phase is usually maintained for many days, so reducing the time needed to reach survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and to support management in critical situations; position estimation within a short period of time is important. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site for managing the procedure of quick response. This solution establishes a quick way to manage the post-earthquake crisis.

  11. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  12. Supercomputing meets seismology in earthquake exhibit

    ScienceCinema

    Blackwell, Matt; Rodger, Arthur; Kennedy, Tom

    2018-02-14

    When the California Academy of Sciences created the "Earthquake: Evidence of a Restless Planet" exhibit, they called on Lawrence Livermore to help combine seismic research with the latest data-driven visualization techniques. The outcome is a series of striking visualizations of earthquakes, tsunamis and tectonic plate evolution. Seismic-wave research is a core competency at Livermore. While most often associated with earthquakes, the research has many other applications of national interest, such as nuclear explosion monitoring, explosion forensics, energy exploration, and seismic acoustics. For the Academy effort, Livermore researchers simulated the San Andreas and Hayward fault events at high resolutions. Such calculations require significant computational resources. To simulate the 1906 earthquake, for instance, visualizing 125 seconds of ground motion required over 1 billion grid points, 10,000 time steps, and 7.5 hours of processor time on 2,048 cores of Livermore's Sierra machine.

  13. Mexican Seismic Alert System's SAS-I algorithm review considering strong earthquakes felt in Mexico City since 1985

    NASA Astrophysics Data System (ADS)

    Cuellar Martinez, A.; Espinosa Aranda, J.; Suarez, G.; Ibarrola Alvarez, G.; Ramos Perez, S.; Camarillo Barranco, L.

    2013-05-01

    The Seismic Alert System of Mexico (SASMEX) uses three algorithms for alert activation that take into account the distance between the seismic sensing field station (FS) and the city to be alerted. In Mexico City, for example, the earthquakes with the highest accelerations originated on the Pacific Ocean coast, and the distance between that seismic region and the city favors the use of the algorithm called SAS-I. This algorithm, essentially unchanged since its introduction in 1991, employs the data that one or more FS generate from P-wave detection until S-wave detection, plus a period equal to the time taken to detect these phases; that is, double the S-P time, called 2*(S-P). In this interval, the algorithm integrates quadratic samples from an FS equipped with a triaxial accelerometer to obtain two parameters: the amplitude and the growth rate measured up to the 2*(S-P) time. These parameters feed a magnitude classifier model built from time series of Guerrero coast earthquakes, referenced mainly to the Mb magnitude. The algorithm activates a Public or a Preventive Alert according to whether the model predicts a strong or a moderate earthquake. The SAS-I algorithm has been operating for over 23 years in the subduction zone of the Pacific coast of Mexico, initially in Guerrero and later in Oaxaca, and since March 2012 in the Pacific seismic region covering the coasts of Jalisco, Colima, Michoacan, Guerrero and Oaxaca. In that time it has issued 16 Public Alerts and 62 Preventive Alerts to Mexico City, whose soil conditions amplify earthquake damage, as occurred in September 1985. This work reviews the SAS-I algorithm and the possible alerts it could generate from recordings of major earthquakes, detected by FS or seismometers near the events, originating on the Pacific Ocean coast that have been felt in Mexico
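    A schematic sketch of the 2*(S-P) measurement described above, integrating squared triaxial accelerations to obtain the amplitude and growth-rate parameters. The function signature and windowing details are assumptions for illustration, not the operational SASMEX code.

    ```python
    import numpy as np

    def sas_i_parameters(acc, fs, p_idx, s_idx):
        """Sketch of the SAS-I 2*(S-P) measurement described above.

        acc   : (n, 3) array of triaxial acceleration samples
        fs    : sampling rate (Hz)
        p_idx : sample index of the P-wave detection
        s_idx : sample index of the S-wave detection
        Returns an amplitude parameter (integral of squared samples over
        the 2*(S-P) window) and its average growth rate.
        """
        assert s_idx > p_idx, "S detection must follow P detection"
        sp = s_idx - p_idx                       # S-P interval in samples
        end = min(p_idx + 2 * sp, len(acc))      # 2*(S-P) window end
        window = acc[p_idx:end]
        energy = np.cumsum(np.sum(window ** 2, axis=1)) / fs
        amplitude = energy[-1]
        growth_rate = amplitude / (len(window) / fs)
        return amplitude, growth_rate
    ```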

  14. The physics of an earthquake

    NASA Astrophysics Data System (ADS)

    McCloskey, John

    2008-03-01

    The Sumatra-Andaman earthquake of 26 December 2004 (Boxing Day 2004) and its tsunami will endure in our memories as one of the worst natural disasters of our time. For geophysicists, the scale of the devastation and the likelihood of another equally destructive earthquake set out a series of challenges of how we might use science not only to understand the earthquake and its aftermath but also to help in planning for future earthquakes in the region. In this article a brief account of these efforts is presented. Earthquake prediction is probably impossible, but earth scientists are now able to identify particularly dangerous places for future events by developing an understanding of the physics of stress interaction. Having identified such a dangerous area, a series of numerical Monte Carlo simulations is described which allow us to get an idea of what the most likely consequences of a future earthquake are by modelling the tsunami generated by lots of possible, individually unpredictable, future events. As this article was being written, another earthquake occurred in the region, which had many expected characteristics but was enigmatic in other ways. This has spawned a series of further theories which will contribute to our understanding of this extremely complex problem.

  15. Thermal emission before earthquakes by analyzing satellite infra-red data

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Taylor, P.; Bryant, N.; Pulinets, S.; Freund, F.

    2004-05-01

    Satellite thermal imaging data indicate long-lived thermal anomaly fields associated with large linear structures and fault systems in the Earth's crust, but also short-lived anomalies prior to major earthquakes. Positive anomalous land surface temperature excursions of the order of 3-4°C have been observed from NOAA/AVHRR, GOES/METEOSAT and EOS Terra/Aqua satellites prior to some major earthquakes around the world. The rapid time-dependent evolution of the "thermal anomaly" suggests a change in the mid-IR emissivity of the earth. These short-lived "thermal anomalies", however, are very transient, and therefore their origin has yet to be determined. Their areal extent and temporal evolution may depend on geology, tectonics, focal mechanism, meteorological conditions and other factors. This work addresses the relationship between tectonic stress, electro-chemical and thermodynamic processes in the atmosphere, and increasing mid-IR flux as part of a larger family of electromagnetic (EM) phenomena related to seismic activity. We still need a better understanding of the link between seismo-mechanical processes in the crust, on the surface, and at the earth-atmosphere interface that trigger thermal anomalies. This work serves as an introduction to our effort to find an answer to this question. We will present examples from the strong earthquakes that occurred in the Americas during 2003/2004 and the techniques used to record the thermal emission mid-IR anomalies, and the geomagnetic and ionospheric variations, that appear to be associated with impending earthquake activity.

  16. An approach to detect afterslips in giant earthquakes in the normal-mode frequency band

    NASA Astrophysics Data System (ADS)

    Tanimoto, Toshiro; Ji, Chen; Igarashi, Mitsutsugu

    2012-08-01

    compatible with the GCMT solution, but the low-frequency part requires afterslip to explain the increasing amplitude ratios towards lower frequency. The required slip has a moment of about 19 per cent of the GCMT solution and a rise time of 260 s. The total moments of these earthquakes are 5.31 × 10^22 N m (Tohoku), (1.86-1.96) × 10^22 N m (Chile), 1.33 × 10^23 N m (Sumatra) and 1.86 × 10^21 N m (Solomon). The moment magnitudes are 9.08, 8.78-8.79, 9.35 and 8.11, respectively, using Kanamori's original formula relating the moment and the moment magnitude. However, the trade-off between moment and dip angle can modify the moment estimates by up to about 40-50 per cent and the corresponding magnitudes by ±0.1.
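    The quoted magnitudes can be reproduced from the listed moments with Kanamori's original formula, Mw = (log10 M0 - 16.1)/1.5 with M0 in dyn·cm; a short check:

    ```python
    import math

    def moment_magnitude(m0_nm):
        """Kanamori's formula, Mw = (log10(M0) - 16.1) / 1.5, M0 in dyn·cm."""
        m0_dyncm = m0_nm * 1.0e7  # 1 N·m = 1e7 dyn·cm
        return (math.log10(m0_dyncm) - 16.1) / 1.5

    for name, m0 in [("Tohoku", 5.31e22), ("Chile", 1.86e22),
                     ("Sumatra", 1.33e23), ("Solomon", 1.86e21)]:
        print(name, round(moment_magnitude(m0), 2))
    # Tohoku 9.08, Chile 8.78, Sumatra 9.35, Solomon 8.11
    ```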

  17. Earthquake Fingerprints: Representing Earthquake Waveforms for Similarity-Based Detection

    NASA Astrophysics Data System (ADS)

    Bergen, K.; Beroza, G. C.

    2016-12-01

    New earthquake detection methods, such as Fingerprint and Similarity Thresholding (FAST), use fast approximate similarity search to identify similar waveforms in long-duration data without templates (Yoon et al. 2015). These methods have two key components: fingerprint extraction and an efficient search algorithm. Fingerprint extraction converts waveforms into fingerprints, compact signatures that represent short-duration waveforms for identification and search. Earthquakes are detected using an efficient indexing and search scheme, such as locality-sensitive hashing, that identifies similar waveforms in a fingerprint database. The quality of the search results, and thus the earthquake detection results, is strongly dependent on the fingerprinting scheme. Fingerprint extraction should map similar earthquake waveforms to similar waveform fingerprints to ensure a high detection rate, even under additive noise and small distortions. Additionally, fingerprints corresponding to noise intervals should have mutually dissimilar fingerprints to minimize false detections. In this work, we compare the performance of multiple fingerprint extraction approaches for the earthquake waveform similarity search problem. We apply existing audio fingerprinting (used in content-based audio identification systems) and time series indexing techniques and present modified versions that are specifically adapted for seismic data. We also explore data-driven fingerprinting approaches that can take advantage of labeled or unlabeled waveform data. For each fingerprinting approach we measure its ability to identify similar waveforms in a low signal-to-noise setting, and quantify the trade-off between true and false detection rates in the presence of persistent noise sources. We compare the performance using known event waveforms from eight independent stations in the Northern California Seismic Network.
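    A toy illustration of the fingerprint-and-compare idea: reduce a short waveform to a compact binary signature and score similarity with the Jaccard measure. This is a simplified stand-in for the published schemes (spectral images, wavelet signs, locality-sensitive hashing), not the FAST implementation.

    ```python
    import numpy as np

    def fingerprint(waveform, n_bins=64):
        """Toy waveform fingerprint: sign of deviations of log spectral
        power from its median. A stand-in for the published schemes."""
        spec = np.abs(np.fft.rfft(waveform))[:n_bins]
        logp = np.log10(spec + 1e-12)
        return logp > np.median(logp)   # compact binary signature

    def jaccard(a, b):
        """Similarity between two binary fingerprints."""
        return np.sum(a & b) / np.sum(a | b)

    rng = np.random.default_rng(0)
    event = rng.standard_normal(512)
    noisy_copy = event + 0.3 * rng.standard_normal(512)  # same event, noisier
    noise = rng.standard_normal(512)                     # unrelated signal

    fp = fingerprint(event)
    print(jaccard(fp, fingerprint(noisy_copy)))  # typically high
    print(jaccard(fp, fingerprint(noise)))       # typically much lower
    ```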

  18. Real-Time GPS Monitoring for Earthquake Rapid Assessment in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Guillemot, C.; Langbein, J. O.; Murray, J. R.

    2012-12-01

    The U.S. Geological Survey Earthquake Science Center has deployed a network of eight real-time Global Positioning System (GPS) stations in the San Francisco Bay area and is implementing software applications to continuously evaluate the status of the deformation within the network. Real-time monitoring of the station positions is expected to provide valuable information for rapidly estimating source parameters should a large earthquake occur in the San Francisco Bay area. Because earthquake response applications require robust data access, as a first step we have developed a suite of web-based applications which are now routinely used to monitor the network's operational status and data streaming performance. The web tools provide continuously updated displays of important telemetry parameters such as data latency and receive rates, as well as source voltage and temperature information within each instrument enclosure. Automated software on the backend uses the streaming performance data to mitigate the impact of outages, radio interference and bandwidth congestion on deformation monitoring operations. A separate set of software applications manages the recovery of lost data due to faulty communication links. Displacement estimates are computed in real-time for various combinations of USGS, Plate Boundary Observatory (PBO) and Bay Area Regional Deformation (BARD) network stations. We are currently comparing results from two software packages (one commercial and one open-source) used to process 1-Hz data on the fly and produce estimates of differential positions. The continuous monitoring of telemetry makes it possible to tune the network to minimize the impact of transient interruptions of the data flow, from one or more stations, on the estimated positions. Ongoing work is focused on using data streaming performance history to optimize the quality of the position, reduce drift and outliers by switching to the best set of stations within the network, and

  19. MOMENT TENSOR SOLUTIONS OF RECENT EARTHQUAKES IN THE CALABRIAN REGION (SOUTH ITALY)

    NASA Astrophysics Data System (ADS)

    Orecchio, B.; D'Amico, S.; Gervasi, A.; Guerra, I.; Presti, D.; Zhu, L.; Herrmann, R. B.; Neri, G.

    2009-12-01

    The aim of this study is to provide moment tensor solutions for recent events that occurred in the Calabrian region (South Italy), an area struck by several destructive earthquakes in the past centuries. The seismicity of the area under investigation is currently characterized by low to moderate magnitude earthquakes (up to 4.5) that are not properly represented in the Italian national catalogues of focal mechanisms such as RCMT (Regional Centroid Moment Tensor, Pondrelli et al., PEPI, 2006) and TDMT (Time Domain Moment Tensors, Dreger and Helmberger, BSSA, 1993). Also, the solutions estimated from P-onset polarities are often poorly constrained due to the network geometry in the study area. We computed the moment tensor solutions using the “Cut And Paste” method originally proposed by Zhao and Helmberger (BSSA, 1994) and later modified by Zhu and Helmberger (BSSA, 1996). Each waveform is broken into Pnl and surface wave segments, and the source depth and focal mechanism are determined using a grid search technique. The technique allows time shifts between synthetics and observed data in order to reduce the dependence of the solution on the assumed velocity model and earthquake locations. This method has been shown to provide good-quality solutions for earthquakes of magnitude as small as 2.5. The data set of the present study consists of waveforms from more than 100 earthquakes that were recorded by the permanent seismic network run by Istituto Nazionale di Geofisica e Vulcanologia (INGV) and by about 40 stations of the NSF CAT/SCAN project. The results help to verify and refine the regional geodynamic model, which assumes subduction of the Ionian lithosphere beneath the Tyrrhenian lithosphere and a related response of the shallow structures in the form of normal and strike-slip faulting seismicity.

  20. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  1. U.S. Geological Survey (USGS) Earthquake Web Applications

    NASA Astrophysics Data System (ADS)

    Fee, J.; Martinez, E.

    2015-12-01

    USGS Earthquake web applications provide access to earthquake information from USGS and other Advanced National Seismic System (ANSS) contributors. One of the primary goals of these applications is to provide a consistent experience for accessing both near-real time information as soon as it is available and historic information after it is thoroughly reviewed. Millions of people use these applications every month including people who feel an earthquake, emergency responders looking for the latest information about a recent event, and scientists researching historic earthquakes and their effects. Information from multiple catalogs and contributors is combined by the ANSS Comprehensive Catalog into one composite catalog, identifying the most preferred information from any source for each event. A web service and near-real time feeds provide access to all contributed data, and are used by a number of users and software packages. The Latest Earthquakes application displays summaries of many events, either near-real time feeds or custom searches, and the Event Page application shows detailed information for each event. Because all data is accessed through the web service, it can also be downloaded by users. The applications are maintained as open source projects on github, and use mobile-first and responsive-web-design approaches to work well on both mobile devices and desktop computers. http://earthquake.usgs.gov/earthquakes/map/
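    The web service mentioned above is the public USGS fdsnws event endpoint; a minimal GeoJSON query sketch (the date range and magnitude threshold are illustrative choices):

    ```python
    # Minimal sketch of querying the ANSS Comprehensive Catalog through
    # the public USGS fdsnws event service; parameters are illustrative.
    import json
    import urllib.request

    url = ("https://earthquake.usgs.gov/fdsnws/event/1/query"
           "?format=geojson&starttime=2015-01-01&endtime=2015-01-02"
           "&minmagnitude=5")
    with urllib.request.urlopen(url) as response:
        catalog = json.load(response)

    for feature in catalog["features"]:
        props = feature["properties"]
        print(props["time"], props["mag"], props["place"])
    ```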

  2. The Impact of Frictional Healing on Stick-Slip Recurrence Interval and Stress Drop: Implications for Earthquake Scaling

    NASA Astrophysics Data System (ADS)

    Im, Kyungjae; Elsworth, Derek; Marone, Chris; Leeman, John

    2017-12-01

    Interseismic frictional healing is an essential process in the seismic cycle. Observations of both natural and laboratory earthquakes demonstrate that the magnitude of stress drop scales with the logarithm of recurrence time, which is a cornerstone of the rate and state friction (RSF) laws. However, the origin of this log-linear behavior and of the short-time "cutoff" for small recurrence intervals remains poorly understood. Here we use RSF laws to demonstrate that the back-projected time of null healing intrinsically scales with the initial frictional state θi. We explore this behavior and its implications for (1) the short-term cutoff time of frictional healing and (2) the connection between healing rates derived from stick-slip sliding versus slide-hold-slide tests. We use a novel, continuous solution of RSF for a one-dimensional spring-slider system with inertia. The numerical solution continuously traces frictional state evolution (and healing) and shows that the stick-slip cutoff time also scales with the frictional state at the conclusion of the dynamic slip process, θi (=Dc/Vpeak). This numerical investigation of the origins of stick-slip response is verified by comparison with laboratory data for a range of peak slip velocities. Slower slip motions yield a smaller friction drop at a given time, owing to the higher frictional state at the end of each slip event. Our results provide insight into the origin of log-linear stick-slip evolution and suggest an approach to estimating the critical slip distance on faults that exhibit gradual accelerations, such as slow earthquakes.
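    The short-time cutoff can be illustrated directly from the aging form of the RSF state evolution: during a hold the state grows as θ(t) = θi + t, so healing Δμ = b·ln(1 + t/θi) is log-linear only for t >> θi. A minimal sketch with illustrative parameter values; this is not the authors' inertial spring-slider solver.

    ```python
    # Healing "cutoff" sketch using the aging law of rate-and-state
    # friction (RSF): during a hold (V ~ 0) the state variable grows as
    # theta(t) = theta_i + t, so healing b*ln(1 + t/theta_i) only becomes
    # log-linear for t >> theta_i, and theta_i = Dc/V_peak sets the cutoff.
    # Parameter values below are illustrative.
    import numpy as np

    b = 0.01      # RSF evolution parameter
    Dc = 10e-6    # critical slip distance (m)

    for v_peak in (1e-3, 1e-1):         # slow vs fast peak slip speed (m/s)
        theta_i = Dc / v_peak           # state at the end of the slip event
        t = np.logspace(-6, 4, 6)       # hold times (s)
        healing = b * np.log(1.0 + t / theta_i)
        print(f"V_peak={v_peak:g} m/s, theta_i={theta_i:.1e} s:",
              np.round(healing, 4))
    # Faster events (smaller theta_i) begin log-linear healing earlier.
    ```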

  3. Analysis of the seismicity preceding large earthquakes

    NASA Astrophysics Data System (ADS)

    Stallone, Angela; Marzocchi, Warner

    2017-04-01

    The most common earthquake forecasting models assume that the magnitude of the next earthquake is independent of the past. This feature is probably one of the most severe limitations on the capability to forecast large earthquakes. In this work, we investigate this specific aspect empirically, exploring whether variations in seismicity in the space-time-magnitude domain encode some information on the size of future earthquakes. For this purpose, and to verify the stability of the findings, we consider seismic catalogs covering quite different space-time-magnitude windows, such as the Alto Tiberina Near Fault Observatory (TABOO) catalogue and the California and Japanese seismic catalogs. Our method is inspired by the statistical methodology proposed by Baiesi & Paczuski (2004) and elaborated by Zaliapin et al. (2008) to distinguish between triggered and background earthquakes, based on a pairwise nearest-neighbor metric defined by properly rescaled temporal and spatial distances. We generalize the method to a metric based on the k-nearest-neighbors that allows us to consider the overall space-time-magnitude distribution of the k earthquakes that are the strongly correlated ancestors of a target event. Finally, we analyze the statistical properties of the clusters composed of the target event and its k-nearest-neighbors. In essence, the main goal of this study is to verify whether different classes of target event magnitudes are characterized by distinctive "k-foreshock" distributions. The final step is to show how the findings of this work may (or may not) improve the skill of existing earthquake forecasting models.
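    The pairwise rescaled distance at the heart of the method is compact enough to sketch; the form below follows Zaliapin et al. (2008), and the b-value, fractal dimension, and toy catalog are illustrative assumptions.

    ```python
    import numpy as np

    def nn_distance(t_i, x_i, m_i, t_j, x_j, b=1.0, df=1.6):
        """Rescaled space-time distance from candidate parent i to child j,
        eta_ij = dt * dr**df * 10**(-b * m_i), following Zaliapin et al.
        (2008); b and the fractal dimension df are illustrative values."""
        dt = t_j - t_i
        if dt <= 0:
            return np.inf                      # parents must precede children
        dr = np.linalg.norm(np.asarray(x_j) - np.asarray(x_i))
        return dt * max(dr, 1e-6) ** df * 10.0 ** (-b * m_i)

    # Toy catalog: (time [days], (x, y) [km], magnitude)
    catalog = [(0.0, (0.0, 0.0), 5.0),
               (1.0, (2.0, 1.0), 3.0),
               (1.5, (50.0, 40.0), 3.2)]

    # Nearest neighbor (most likely parent) of the last event:
    t_j, x_j, _ = catalog[-1]
    etas = [nn_distance(t, x, m, t_j, x_j) for t, x, m in catalog[:-1]]
    print("parent index:", int(np.argmin(etas)), "eta:", min(etas))
    ```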

  4. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles, and exhibiting intricate surface deformation patterns. The complexity of this event has motivated the need for multidisciplinary geophysical studies to get at the underlying source physics to better inform earthquake hazards models in the future. However, events like Kaikoura beg the question of how well (or how poorly) such earthquakes can be modeled automatically in real-time and still satisfy the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first perform simple point source models of the earthquake using peak ground displacement scaling and a coseismic offset based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources as well as simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Secondly, we perform a slip inversion based upon the CMT fault orientations and forward model near-field tsunami maximum expected wave heights to compare against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of disagreement in ground motions being attributable to local site effects, not earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
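    A sketch of the peak-ground-displacement (PGD) scaling step: the functional form log10(PGD) = A + B·Mw + C·Mw·log10(R) follows published PGD scaling studies, but the coefficients below are placeholders, not the operational G-FAST values.

    ```python
    import numpy as np

    # Hypothetical coefficients for log10(PGD, cm) = A + B*Mw + C*Mw*log10(R, km).
    A, B, C = -5.0, 1.2, -0.17

    def predicted_log_pgd(mw, r_km):
        return A + B * mw + C * mw * np.log10(r_km)

    def invert_magnitude(pgd_cm, r_km):
        """Solve log10(PGD) = A + Mw*(B + C*log10(R)) for Mw."""
        return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

    # Round-trip check at a hypothetical station 100 km from the source:
    mw_true = 7.8
    pgd = 10 ** predicted_log_pgd(mw_true, 100.0)
    print(invert_magnitude(pgd, 100.0))  # recovers 7.8
    ```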

  5. ELER software - a new tool for urban earthquake loss assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Erdik, M.

    2010-12-01

    Rapid loss estimation after potentially damaging earthquakes is critical for effective emergency response and public information. A methodology and software package, ELER-Earthquake Loss Estimation Routine, for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region was developed under the Joint Research Activity-3 (JRA3) of the EC FP6 Project entitled "Network of Research Infrastructures for European Seismology-NERIES". Recently, a new version (v2.0) of ELER software has been released. The multi-level methodology developed is capable of incorporating regional variability and uncertainty originating from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. Although primarily intended for quasi real-time estimation of earthquake shaking and losses, the routine is also equally capable of incorporating scenario-based earthquake loss assessments. This paper introduces the urban earthquake loss assessment module (Level 2) of the ELER software which makes use of the most detailed inventory databases of physical and social elements at risk in combination with the analytical vulnerability relationships and building damage-related casualty vulnerability models for the estimation of building damage and casualty distributions, respectively. Spectral capacity-based loss assessment methodology and its vital components are presented. The analysis methods of the Level 2 module, i.e. Capacity Spectrum Method (ATC-40, 1996), Modified Acceleration-Displacement Response Spectrum Method (FEMA 440, 2005), Reduction Factor Method (Fajfar, 2000) and Coefficient Method (ASCE 41-06, 2006), are applied to the selected building types for validation and verification purposes. The damage estimates are compared to the results obtained from the other studies available in the literature, i.e. SELENA v4.0 (Molina et al., 2008) and

  6. Increased Earthquake Rates in the Central and Eastern US Portend Higher Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Rubinstein, J. L.; Ellsworth, W. L.; Mueller, C. S.; Michael, A. J.; McGarr, A.; Petersen, M. D.; Weingarten, M.; Holland, A. A.

    2014-12-01

    Since 2009 the central and eastern United States has experienced an unprecedented increase in the rate of M≥3 earthquakes that is unlikely to be due to natural variation. Where the rates have increased so has the seismic hazard, making it important to understand these changes. Areas with significant seismicity increases are limited to areas where oil and gas production take place. By far the largest contributor to the seismicity increase is Oklahoma, where recent studies suggest that these rate changes may be due to fluid injection (e.g., Keranen et al., Geology, 2013; Science, 2014). Moreover, the area of increased seismicity in northern Oklahoma that began in 2013 coincides with the Mississippi Lime play, where well completions greatly increased the year before the seismicity increase. This suggests a link to oil and gas production either directly or from the disposal of significant amounts of produced water within the play. For the purpose of assessing the hazard due to these earthquakes, should they be treated differently from natural earthquakes? Previous studies suggest that induced seismicity may differ from natural seismicity in clustering characteristics or frequency-magnitude distributions (e.g., Bachmann et al., GJI, 2011; Llenos and Michael, BSSA, 2013). These differences could affect time-independent hazard computations, which typically assume that clustering and size distribution remain constant. In Oklahoma, as well as other areas of suspected induced seismicity, we find that earthquakes since 2009 tend to be considerably more clustered in space and time than before 2009. However differences between various regional and national catalogs leave unclear whether there are significant changes in magnitude distribution. Whether they are due to natural or industrial causes, the increased earthquake rates in these areas could increase the hazard in ways that are not accounted for in current hazard assessment practice. Clearly the possibility of induced

  7. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) over the period 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of great earthquakes in this region in the future. The regional seismicity map for the study region is prepared by plotting the earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database, and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by a subduction zone, developed at the junction of the Indian Plate and the Eurasian Plate. It shows a dense clustering of earthquake events and experienced the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis. This zone has suffered great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan earthquakes with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own style of prominent tectonic features. The seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, a forecast of major earthquakes for each source zone is estimated. As per the energy blocked model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be utilized for the forecasting of major earthquakes
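    A minimal sketch of the energy-blocked bookkeeping, using the Gutenberg-Richter energy-magnitude relation log10 E(erg) = 11.8 + 1.5·Ms; the catalog and the uniform supply rate are hypothetical, and the interpretation of "blocked" energy follows the abstract, not Tsuboi's original paper.

    ```python
    import numpy as np

    def seismic_energy_erg(ms):
        """Gutenberg-Richter relation: log10 E(erg) = 11.8 + 1.5*Ms."""
        return 10.0 ** (11.8 + 1.5 * ms)

    # Hypothetical (year, Ms) catalog for one source zone.
    catalog = [(1900, 7.5), (1925, 8.0), (1950, 7.2), (1980, 7.8), (2005, 7.0)]
    years = np.array([y for y, _ in catalog], dtype=float)
    energies = np.array([seismic_energy_erg(m) for _, m in catalog])

    # Uniform energy supply rate estimated from the whole interval.
    supply_rate = energies.sum() / (years[-1] - years[0])

    # Blocked energy just before each event: supplied so far minus released.
    released_before = np.cumsum(energies) - energies
    blocked = supply_rate * (years - years[0]) - released_before
    print(blocked)  # positive: stored energy awaiting release;
                    # negative: past release outpaced the assumed supply
    ```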

  8. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise conducted by Bay Area County Offices of Emergency Services and based on the scenario of an earthquake on the Hayward Fault; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos, designed for school classrooms, promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  9. Foreshocks and aftershocks of Pisagua 2014 earthquake: time and space evolution of megathrust event.

    NASA Astrophysics Data System (ADS)

    Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Wollam, Jack; Thomas, Reece; de Lima Neto, Oscar; Tavera, Hernando; Garth, Thomas; Ruiz, Sergio

    2016-04-01

    The 2014 Pisagua earthquake of magnitude 8.2 is the first case in Chile where a foreshock sequence was clearly recorded by a local network, as well as the complete sequence including the mainshock and its aftershocks. The seismicity of the last year before the mainshock includes numerous clusters close to the epicentral zone (Ruiz et al., 2014), but it was on 16th March that this activity became stronger, with the Mw 6.7 precursory event taking place off the coast of Iquique at 12 km depth. The Pisagua earthquake arrived on 1st April 2014, breaking almost 120 km N-S, and two days later a 7.6 aftershock occurred to the south of the rupture, enlarging the zone affected by this sequence. In this work, we analyse the foreshock and aftershock sequences of the Pisagua earthquake, through the spatial and temporal evolution of a total of 15,764 events recorded from 1st March to 31st May 2014. This event catalogue was obtained from automatic analysis of raw seismic data from more than 50 stations installed in the north of Chile and the south of Peru. We used the STA/LTA algorithm for the detection of P and S arrival times on the vertical components, and then a back-propagation method in a 1D velocity model for event association and preliminary location of the hypocenters, following the algorithm outlined by Rietbrock et al. (2012). These results were then improved by locating events with the NonLinLoc software using a regional velocity model. We selected the larger events for moment tensor solutions obtained by full waveform inversion using the ISOLA software. In order to understand the process of nucleation and propagation of the Pisagua earthquake, we also analysed the evolution in time of the seismicity over the three months of data. The zone where the precursory events took place was strongly activated two weeks before the mainshock and remained very active until the end of the analysed period, with an important part of the seismicity located in the upper plate and having
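    The STA/LTA trigger used for phase detection is straightforward to sketch; the window lengths, threshold, centered windows, and synthetic trace below are illustrative choices, not the parameters used in this study.

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
        """Classic STA/LTA ratio on a single component. Centered windows
        are used here for simplicity; operational pickers typically use
        trailing windows. Window lengths (s) are illustrative."""
        n_sta = int(sta_win * fs)
        n_lta = int(lta_win * fs)
        energy = trace.astype(float) ** 2
        sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-20)

    fs = 100.0
    rng = np.random.default_rng(1)
    trace = rng.standard_normal(int(120 * fs))           # 2 min of noise
    onset = int(60 * fs)                                 # event at t = 60 s
    trace[onset:onset + int(2 * fs)] += 8.0 * rng.standard_normal(int(2 * fs))

    ratio = sta_lta(trace, fs)
    triggers = np.flatnonzero(ratio > 4.0)
    print("first trigger at t =", triggers[0] / fs, "s")  # near 60 s
    ```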

  10. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2004

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Estes, Steve; Prejean, Stephanie; Sanchez, John J.; Sanches, Rebecca; McNutt, Stephen R.; Paskievitch, John

    2005-01-01

    October; (5) an earthquake swarm at Akutan in July; and (6) low-level tremor at Okmok Caldera throughout the year (Table 2). Instrumentation and data acquisition highlights in 2004 were the installation of subnetworks on Mount Peulik and Korovin Volcano and the installation of broadband stations to augment the Katmai and Spurr subnetworks.This catalog includes: (1) a description of instruments deployed in the field and their locations; (2) a description of earthquake detection, recording, analysis, and data archival systems; (3) a description of velocity models used for earthquake locations; (4) a summary of earthquakes located in 2004; and (5) an accompanying UNIX tar-file with a summary of earthquake origin times, hypocenters, magnitudes, phase arrival times, and location quality statistics; daily station usage statistics; and all HYPOELLIPSE files used to determine the earthquake locations in 2004.

  11. Extension of the energy-to-moment parameter Θ to intermediate and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Saloor, Nooshin; Okal, Emile A.

    2018-01-01

    We extend to intermediate and deep earthquakes the slowness parameter Θ originally introduced by Newman and Okal (1998). Because of the increasing time lag with depth between the phases P, pP and sP, and of variations in anelastic attenuation parameters t*, we define four depth bins featuring slightly different algorithms for the computation of Θ. We apply this methodology to a global dataset of 598 intermediate and deep earthquakes with moments greater than 10^25 dyn·cm. We find a slight increase with depth in average values of Θ (from -4.81 between 80 and 135 km to -4.48 between 450 and 700 km), which however all have intersecting one-σ bands. With widths ranging from 0.26 to 0.31 logarithmic units, these are narrower than their counterpart for a reference dataset of 146 shallow earthquakes (σ = 0.55). Similarly, we find no correlation between values of Θ and focal geometry. These results point to stress conditions within the seismogenic zones inside the Wadati-Benioff slabs being more homogeneous than those prevailing at the shallow contacts between tectonic plates.
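    The parameter itself is simply the logarithm of the energy-to-moment ratio, Θ = log10(E/M0); a one-line check with hypothetical values:

    ```python
    import math

    def theta(energy_erg, moment_dyncm):
        """Slowness parameter Theta = log10(E/M0) (Newman & Okal, 1998)."""
        return math.log10(energy_erg / moment_dyncm)

    # Hypothetical event: E = 2.0e20 erg radiated by M0 = 1.0e25 dyn·cm.
    print(round(theta(2.0e20, 1.0e25), 2))  # -4.70, near the averages above
    ```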

  12. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  13. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  14. The Alaska earthquake, March 27, 1964: lessons and conclusions

    USGS Publications Warehouse

    Eckel, Edwin B.

    1970-01-01

    One of the greatest earthquakes of all time struck south-central Alaska on March 27, 1964. Strong motion lasted longer than for most recorded earthquakes, and more land surface was dislocated, vertically and horizontally, than by any known previous temblor. Never before were so many effects on earth processes and on the works of man available for study by scientists and engineers over so great an area. The seismic vibrations, which directly or indirectly caused most of the damage, were but surface manifestations of a great geologic event: the dislocation of a huge segment of the crust along a deeply buried fault whose nature and even exact location are still subjects for speculation. Not only was the land surface tilted by the great tectonic event beneath it, with resultant seismic sea waves that traversed the entire Pacific, but an enormous mass of land and sea floor moved several tens of feet horizontally toward the Gulf of Alaska. Downslope mass movements of rock, earth, and snow were initiated. Subaqueous slides along lake shores and seacoasts, near-horizontal movements of mobilized soil (“landspreading”), and giant translatory slides in sensitive clay did the most damage and provided the most new knowledge as to the origin, mechanics, and possible means of control or avoidance of such movements. The slopes of most of the deltas that slid in 1964, and that produced destructive local waves, are still as steep or steeper than they were before the earthquake and hence would be unstable or metastable in the event of another great earthquake. Rockslide avalanches provided new evidence that such masses may travel on cushions of compressed air, but a widely held theory that glaciers surge after an earthquake has not been substantiated. Innumerable ground fissures, many of them marked by copious emissions of water, caused much damage in towns and along transportation routes. Vibration also consolidated loose granular materials. In some coastal areas, local

  15. Using a modified time-reverse imaging technique to locate low-frequency earthquakes on the San Andreas Fault near Cholame, California

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2015-01-01

    We present a new method to locate low-frequency earthquakes (LFEs) within tectonic tremor episodes based on time-reverse imaging techniques. The modified time-reverse imaging technique presented here is the first method that locates individual LFEs within tremor episodes to within 5 km uncertainty without relying on high-amplitude P-wave arrivals, and that produces hypocentral locations similar to those of methods that locate events by stacking hundreds of LFEs, without having to assume event co-location. In contrast to classic time-reverse imaging algorithms, we implement a modification to the method that searches for phase coherence over a short time period rather than identifying the maximum amplitude of a superpositioned wavefield. The method is independent of amplitude and can help constrain event origin time. The method uses individual LFE origin times, but does not rely on a priori information on LFE templates and families. We apply the method to locate 34 individual LFEs within tremor episodes that occur between 2010 and 2011 on the San Andreas Fault, near Cholame, California. Individual LFE location accuracies range from 2.6 to 5 km horizontally and 4.8 km vertically. Other methods that have been able to locate individual LFEs with accuracy of less than 5 km have mainly used large-amplitude events where a P-phase arrival can be identified. The method described here has the potential to locate a larger number of individual low-amplitude events with only the S-phase arrival. Location accuracy is controlled by the velocity model resolution and the wavelength of the dominant energy of the signal. Location results are also dependent on the number of stations used and are negligibly correlated with other factors such as the maximum gap in azimuthal coverage, source–station distance and signal-to-noise ratio.

  16. Emergency medical rescue efforts after a major earthquake: lessons from the 2008 Wenchuan earthquake.

    PubMed

    Zhang, Lulu; Liu, Xu; Li, Youping; Liu, Yuan; Liu, Zhipeng; Lin, Juncong; Shen, Ji; Tang, Xuefeng; Zhang, Yi; Liang, Wannian

    2012-03-03

    Major earthquakes often result in incalculable environmental damage, loss of life, and threats to health. Tremendous progress has been made in response to many medical challenges resulting from earthquakes. However, emergency medical rescue is complicated, and great emphasis should be placed on its organisation to achieve the best results. The 2008 Wenchuan earthquake was one of the most devastating disasters in the past 10 years and caused more than 370,000 casualties. The lessons learnt from the medical disaster relief effort and the subsequent knowledge gained about the regulation and capabilities of medical and military back-up teams should be widely disseminated. In this Review we summarise and analyse the emergency medical rescue efforts after the Wenchuan earthquake. Establishment of a national disaster medical response system, an active and effective commanding system, successful coordination between rescue forces and government agencies, effective treatment, a moderate, timely and correct public health response, and long-term psychological support are all crucial to reduce mortality and morbidity and promote overall effectiveness of rescue efforts after a major earthquake.

  17. An efficient repeating signal detector to investigate earthquake swarms

    NASA Astrophysics Data System (ADS)

    Skoumal, Robert J.; Brudzinski, Michael R.; Currie, Brian S.

    2016-08-01

    Repetitive earthquake swarms have been recognized as key signatures in fluid injection induced seismicity, precursors to volcanic eruptions, and slow slip events preceding megathrust earthquakes. We investigate earthquake swarms by developing a Repeating Signal Detector (RSD), a computationally efficient algorithm utilizing agglomerative clustering to identify similar waveforms buried in years of seismic recordings using a single seismometer. Instead of relying on existing earthquake catalogs of larger earthquakes, RSD identifies characteristic repetitive waveforms by rapidly identifying signals of interest above a low signal-to-noise ratio and then grouping based on spectral and time domain characteristics, resulting in dramatically shorter processing time than more exhaustive autocorrelation approaches. We investigate seismicity in four regions using RSD: (1) volcanic seismicity at Mammoth Mountain, California, (2) subduction-related seismicity in Oaxaca, Mexico, (3) induced seismicity in Central Alberta, Canada, and (4) induced seismicity in Harrison County, Ohio. In each case, RSD detects a similar or larger number of earthquakes than existing catalogs created using more time intensive methods. In Harrison County, RSD identifies 18 seismic sequences that correlate temporally and spatially to separate hydraulic fracturing operations, 15 of which were previously unreported. RSD utilizes a single seismometer for earthquake detection which enables seismicity to be quickly identified in poorly instrumented regions at the expense of relying on another method to locate the new detections. Due to the smaller computation overhead and success at distances up to ~50 km, RSD is well suited for real-time detection of low-magnitude earthquake swarms with permanent regional networks.
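    A simplified sketch of the grouping step: cluster candidate detections by waveform correlation with agglomerative (hierarchical) clustering. RSD's actual grouping uses spectral and time-domain characteristics; the templates and the distance threshold here are toy assumptions.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(2)
    template_a = rng.standard_normal(256)
    template_b = rng.standard_normal(256)

    # Candidate detections: noisy repeats of two sources plus one outlier.
    events = np.array(
        [template_a + 0.2 * rng.standard_normal(256) for _ in range(3)]
        + [template_b + 0.2 * rng.standard_normal(256) for _ in range(3)]
        + [rng.standard_normal(256)])

    # Correlation distance between waveforms, then agglomerative clustering.
    dist = pdist(events, metric="correlation")
    labels = fcluster(linkage(dist, method="average"),
                      t=0.5, criterion="distance")
    print(labels)  # repeats of the same source share a cluster label
    ```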

  18. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1, 2000 through December 31, 2001

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Tytgat, Guy; Estes, Steve; Moran, Seth C.; Paskievitch, John; McNutt, Stephen R.

    2002-01-01

    The Alaska Volcano Observatory (AVO), a cooperative program of the U.S. Geological Survey, the Geophysical Institute of the University of Alaska Fairbanks, and the Alaska Division of Geological and Geophysical Surveys, has maintained seismic monitoring networks at potentially active volcanoes in Alaska since 1988 (Power and others, 1993; Jolly and others, 1996; Jolly and others, 2001). The primary objectives of this program are the seismic surveillance of active, potentially hazardous, Alaskan volcanoes and the investigation of seismic processes associated with active volcanism. This catalog reflects the status and evolution of the seismic monitoring program, and presents the basic seismic data for the time period January 1, 2000, through December 31, 2001. For an interpretation of these data and previously recorded data, the reader should refer to several recent articles on volcano related seismicity on Alaskan volcanoes in Appendix G. The AVO seismic network was used to monitor twenty-three volcanoes in real time in 2000-2001. These include Mount Wrangell, Mount Spurr, Redoubt Volcano, Iliamna Volcano, Augustine Volcano, Katmai Volcanic Group (Snowy Mountain, Mount Griggs, Mount Katmai, Novarupta, Trident Volcano, Mount Mageik, Mount Martin), Aniakchak Crater, Pavlof Volcano, Mount Dutton, Isanotski Peaks, Shishaldin Volcano, Fisher Caldera, Westdahl Peak, Akutan Peak, Makushin Volcano, Great Sitkin Volcano, and Kanaga Volcano (Figure 1). AVO located 1551 and 1428 earthquakes in 2000 and 2001, respectively, on and around these volcanoes. Highlights of the catalog period (Table 1) include: volcanogenic seismic swarms at Shishaldin Volcano between January and February 2000 and between May and June 2000; an eruption at Mount Cleveland between February and May 2001; episodes of possible tremor at Makushin Volcano starting March 2001 and continuing through 2001; and two earthquake swarms at Great Sitkin Volcano in 2001. This catalog includes: (1) earthquake origin times

  19. Post-Earthquake Reconstruction — in Context of Housing

    NASA Astrophysics Data System (ADS)

    Sarkar, Raju

    Comprehensive rescue and relief operations are launched without loss of time, with the active participation of the Army, governmental agencies, donor agencies, NGOs, and other voluntary organizations, after each natural disaster. Several natural disasters occur throughout the world round the year, and one of them is the earthquake. More than any other natural catastrophe, an earthquake represents the undoing of our most basic preconception of the earth as a source of stability, and the first distressing consequence of an earthquake is the collapse of our dwelling units. Earthquakes have affected buildings since people began constructing them, so after each earthquake a housing reconstruction program is essential, since housing is a shelter satisfying one of the so-called basic needs, next to food and clothing. It is a well-known fact that resettlement (after an earthquake) is often accompanied by the creation of ghettos and ensuing problems in the provision of infrastructure and employment. In fact, a housing project after the Bhuj earthquake in Gujarat, India, illustrates all the negative aspects of resettlement in the context of reconstruction. The main theme of this paper is to consider a few issues associated with post-earthquake reconstruction in the context of housing, all of which are significant to communities that have had to rebuild after a catastrophe or that will face such a need in the future. A few of them are as follows: (1) Why is rebuilding so time consuming? (2) What are the causes of failure in post-earthquake resettlement? (3) How can holistic planning after an earthquake be carried out? (4) What criteria should sustainable building materials meet? (5) What are the criteria for success in post-earthquake resettlement? (6) How can mitigation be built into post-earthquake housing through appropriate repair, restoration, and strengthening concepts?

  20. Regional and Local Glacial-Earthquake Patterns in Greenland

    NASA Astrophysics Data System (ADS)

    Olsen, K.; Nettles, M.

    2016-12-01

    Icebergs calved from marine-terminating glaciers currently account for up to half of the 400 Gt of ice lost annually from the Greenland ice sheet (Enderlin et al., 2014). When large capsizing icebergs (~1 Gt of ice) calve, they produce elastic waves that propagate through the solid earth and are observed as teleseismically detectable MSW ≈ 5 glacial earthquakes (e.g., Ekström et al., 2003; Nettles & Ekström, 2010; Tsai & Ekström, 2007; Veitch & Nettles, 2012). The annual number of these events has increased dramatically over the past two decades. We analyze glacial earthquakes from 2011-2013, which expands the glacial-earthquake catalog by 50%. The number of glacial-earthquake solutions now available allows us to investigate regional patterns across Greenland and link earthquake characteristics to changes in ice dynamics at individual glaciers. During the years of our study Greenland's west coast dominated glacial-earthquake production. Kong Oscar Glacier, Upernavik Isstrøm, and Jakobshavn Isbræ all produced more glacial earthquakes during this time than in preceding years. We link patterns in glacial-earthquake production and cessation to the presence or absence of floating ice tongues at glaciers on both coasts of Greenland. The calving model predicts glacial-earthquake force azimuths oriented perpendicular to the calving front, and comparisons between seismic data and satellite imagery confirm this in most instances. At two glaciers we document force azimuths that have recently changed orientation and confirm that similar changes have occurred in the calving-front geometry. We also document glacial earthquakes at one previously quiescent glacier. Consistent with previous work, we model the glacial-earthquake force-time function as a boxcar with horizontal and vertical force components that vary synchronously. We investigate limitations of this approach and explore improvements that could lead to a more accurate representation of the glacial earthquake source.

  1. Real-Time In-Situ Measurements for Earthquake Early Warning and Space-Borne Deformation Measurement Mission Support

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Webb, F.; Clayton, R. W.; Owen, S. E.; Moore, A. W.; Yu, E.; Dong, D.; Fang, P.; Jamason, P.; Squibb, M. B.; Crowell, B. W.

    2010-12-01

    In situ geodetic networks for observing crustal motion have proliferated over the last two decades and are now recognized as indispensable tools in geophysical research, alongside more traditional seismic networks. The 2007 National Research Council’s Decadal Survey recognizes that space-borne and in situ observations, such as Interferometric Synthetic Aperture Radar (InSAR) and ground-based continuous GPS (CGPS), are complementary in forecasting, in assessing, and in mitigating natural hazards. However, the information content and timeliness of in situ geodetic observations have not been fully exploited, particularly at higher frequencies than traditional daily CGPS position time series. Nor have scientists taken full advantage of the complementary natures of geodetic and seismic data, as well as those of space-based and in situ observations. To address these deficits we are developing real-time CGPS data products for earthquake early warning and for space-borne deformation measurement mission support. Our primary mission objective is in situ verification and validation for DESDynI, but our work is also applicable to other international missions (Sentinel 1a/1b, SAOCOM, ALOS 2). Our project is developing new capabilities to continuously observe and mitigate earthquake-related hazards (direct seismic damage, tsunamis, landslides, volcanoes) in near real-time with high spatial-temporal resolution, to improve the planning and accuracy of space-borne observations. We also are using GPS estimates of tropospheric zenith delay combined with water vapor data from weather models to generate tropospheric calibration maps for mitigating the largest source of error, atmospheric artifacts, in InSAR interferograms. These functions will be fully integrated into a Geophysical Resource Web Services and interactive GPS Explorer data portal environment being developed as part of an ongoing MEaSUREs project and NASA’s contribution to the EarthScope project. GPS Explorer

  2. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. In the view of the authors, the lack of consistency in, and the errors within, other frequently cited earthquake loss databases were major shortcomings that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparison of the 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the increasing concern for economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  3. Rapid and Near Real-Time Assessments of Population Displacement Using Mobile Phone Data Following Disasters: The 2015 Nepal Earthquake.

    PubMed

    Wilson, Robin; Zu Erbach-Schoenberg, Elisabeth; Albert, Maximilian; Power, Daniel; Tudge, Simon; Gonzalez, Miguel; Guthrie, Sam; Chamberlain, Heather; Brooks, Christopher; Hughes, Christopher; Pitonakova, Lenka; Buckee, Caroline; Lu, Xin; Wetter, Erik; Tatem, Andrew; Bengtsson, Linus

    2016-02-24

    Sudden impact disasters often result in the displacement of large numbers of people. These movements can occur prior to events, due to early warning messages, or take place post-event due to damage to shelters and livelihoods, as well as a result of long-term reconstruction efforts. Displaced populations are especially vulnerable and often in need of support. However, timely and accurate data on the numbers and destinations of displaced populations are extremely challenging to collect across temporal and spatial scales, especially in the aftermath of disasters. Mobile phone call detail records were shown to be a valid data source for estimates of population movements after the 2010 Haiti earthquake, but their potential to provide near real-time ongoing measurements of population displacements immediately after a natural disaster has not been demonstrated. A computational architecture and analytical capacity were rapidly deployed within nine days of the Nepal earthquake of 25th April 2015, to provide spatiotemporally detailed estimates of population displacements from call detail records based on movements of 12 million de-identified mobile phone users. Analysis shows the evolution of population mobility patterns after the earthquake and the patterns of return to affected areas, at a high level of detail. Particularly notable is the movement of an estimated 390,000 people above normal out of the Kathmandu Valley after the earthquake, with most people moving to surrounding areas and to the highly populated areas of central southern Nepal. This analysis provides an unprecedented level of information about human movement after a natural disaster, provided within a very short timeframe after the earthquake occurred. The patterns revealed using this method are almost impossible to find through other methods, and are of great interest to humanitarian agencies.
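
    The displacement estimate rests on a simple comparison: daily counts of unique subscribers seen in each area, minus that area's pre-event average. A minimal sketch follows; the ['user', 'day', 'area'] schema is a hypothetical stand-in for the de-identified call detail records.

        import pandas as pd

        def displacement_above_baseline(cdr: pd.DataFrame, event_day):
            """cdr: one row per (user, day) with the area where that user was
            last observed that day -- columns ['user', 'day', 'area'].
            Returns, per area and post-event day, the population change
            relative to the pre-event daily average."""
            daily = (cdr.groupby(['day', 'area'])['user']
                        .nunique().rename('count').reset_index())
            baseline = (daily[daily['day'] < event_day]
                        .groupby('area')['count'].mean().rename('baseline'))
            post = daily[daily['day'] >= event_day].join(baseline, on='area')
            post['above_baseline'] = post['count'] - post['baseline']
            return post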

  4. A post-Tohoku earthquake review of earthquake probabilities in the Southern Kanto District, Japan

    NASA Astrophysics Data System (ADS)

    Somerville, Paul G.

    2014-12-01

    The 2011 Mw 9.0 Tohoku earthquake generated an aftershock sequence that affected a large part of northern Honshu and has given rise to widely divergent forecasts of changes in earthquake occurrence probabilities in northern Honshu. The objective of this review is to assess these forecasts as they relate to potential changes in the occurrence probabilities of damaging earthquakes in the Kanto Region. It is generally agreed that the 2011 Mw 9.0 Tohoku earthquake increased the stress on faults in the southern Kanto district. Toda and Stein (Geophys Res Lett 40, 2013) further conclude that the probability of earthquakes in the Kanto Corridor has increased by a factor of 2.5 for the time period 11 March 2013 to 10 March 2018. Estimates of earthquake probabilities in a wider region of the Southern Kanto District by Nanjo et al. (Geophys J Int, 2013) indicate that any increase in the probability of earthquakes is insignificant in this larger region. Uchida et al. (Earth Planet Sci Lett 374:81-91, 2013) conclude that the Philippine Sea plate extends well north of the northern margin of Tokyo Bay, inconsistent with the Kanto Fragment hypothesis of Toda et al. (Nat Geosci 1:1-6, 2008), which attributes deep earthquakes in this region, which they term the Kanto Corridor, to a broken fragment of the Pacific plate. The results of Uchida and Matsuzawa (J Geophys Res 115:B07309, 2013) support the conclusion that fault creep in southern Kanto may be slowly relaxing the stress increase caused by the Tohoku earthquake without causing more large earthquakes. Stress transfer calculations indicate a large stress transfer to the Off Boso Segment as a result of the 2011 Tohoku earthquake. However, Ozawa et al. (J Geophys Res 117:B07404, 2012) used onshore GPS measurements to infer large post-Tohoku creep on the plate interface in the Off-Boso region, and Uchida and Matsuzawa (ibid.) measured similar large creep off the Boso

  5. Possible seasonality in large deep-focus earthquakes

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Shearer, Peter M.

    2015-09-01

    Large deep-focus earthquakes (magnitude > 7.0, depth > 500 km) have exhibited strong seasonality in their occurrence times since the beginning of global earthquake catalogs. Of 60 such events from 1900 to the present, 42 have occurred in the middle half of each year. The seasonality appears strongest in the northwest Pacific subduction zones and weakest in the Tonga region. Taken at face value, the surplus of northern hemisphere summer events is statistically significant, but due to the ex post facto hypothesis testing, the absence of seasonality in smaller deep earthquakes, and the lack of a known physical triggering mechanism, we cannot rule out that the observed seasonality is just random chance. However, we can make a testable prediction of seasonality in future large deep-focus earthquakes, which, given likely earthquake occurrence rates, should be verified or falsified within a few decades. If confirmed, deep earthquake seasonality would challenge our current understanding of deep earthquakes.
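
    A quick way to see why the 42-of-60 count looks significant at face value is an exact one-sided binomial test against the null hypothesis that events fall in the middle half of the year with probability 1/2. This back-of-envelope check, sketched below, ignores the ex post facto selection issue the authors caution about.

        from math import comb

        n, k = 60, 42   # large deep-focus events since 1900; those in the middle half of the year
        p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
        print(f"P(X >= {k} | n={n}, p=0.5) = {p_tail:.4g}")   # one-sided tail probability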

  6. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automaton model for earthquake occurrence patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability of targeting the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes while introducing only a single parameter into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
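
    The model's key ingredient can be reproduced in a few lines: with probability p the next grain is dropped on the currently most-loaded (most susceptible) site, otherwise on a random site, and avalanches proceed by the usual sandpile toppling rule. The sketch below uses standard sandpile choices (open boundaries, toppling threshold 4, avalanche size as the energy proxy) that are assumptions, not necessarily the paper's exact settings.

        import numpy as np

        def targeted_sandpile(L=50, p_target=0.005, n_grains=100_000, zc=4, seed=0):
            """Avalanche sizes for a 2-D sandpile in which each grain lands on
            the most-loaded site with probability p_target, otherwise on a
            random site. Grains toppled across the boundary are lost."""
            rng = np.random.default_rng(seed)
            z = np.zeros((L, L), dtype=int)
            sizes = []
            for _ in range(n_grains):
                if rng.random() < p_target:
                    i, j = np.unravel_index(np.argmax(z), z.shape)  # most susceptible site
                else:
                    i, j = rng.integers(L), rng.integers(L)
                z[i, j] += 1
                size = 0
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if z[a, b] < zc:
                        continue
                    z[a, b] -= zc                                   # topple
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < L and 0 <= nb < L:
                            z[na, nb] += 1
                            stack.append((na, nb))
                sizes.append(size)
            return np.array(sizes)   # the size histogram approximates the GR-like PDF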

  7. Local observations of the onset of a large earthquake: 28 June 1992 Landers, California

    USGS Publications Warehouse

    Abercrombie, Rachel; Mori, Jim

    1994-01-01

    The Landers earthquake (MW 7.3) of 28 June 1992 had a very emergent onset. The first large amplitude arrivals are delayed by about 3 sec with respect to the origin time, and are preceded by smaller-scale slip. Other large earthquakes have been observed to have similar emergent onsets, but the Landers event is one of the first to be well recorded on nearby stations. We used these recordings to investigate the spatial relationship between the hypocenter and the onset of the large energy release, and to determine the slip function of the 3-sec nucleation process. Relative location of the onset of the large energy release with respect to the initial hypocenter indicates its source was between 1 and 4 km north of the hypocenter and delayed by approximately 2.5 sec. Three-station array analysis of the P wave shows that the large amplitude onset arrives with a faster apparent velocity compared to the first arrivals, indicating that the large amplitude source was several kilometers deeper than the initial onset. An ML 2.8 foreshock, located close to the hypocenter, was used as an empirical Green's function to correct for path and site effects from the first 3 sec of the mainshock seismogram. The resultant deconvolution produced a slip function that showed two subevents preceding the main energy release, an MW 4.4 followed by an MW 5.6. These subevents do not appear anomalous in comparison to simple moderate-sized earthquakes, suggesting that they were normal events which just triggered or grew into a much larger earthquake. If small and moderate-sized earthquakes commonly “detonate” much larger events, this implies that the dynamic stresses during earthquake rupture are at least as important as long-term static stresses in causing earthquakes, and the prospects of reliable earthquake prediction from premonitory phenomena are not improved.
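
    The empirical Green's function step described above amounts to a spectral division of the mainshock record by the foreshock record, stabilized so that spectral holes in the small event do not blow up the result. A minimal sketch, with an assumed water-level regularization (the paper does not specify one):

        import numpy as np

        def egf_deconvolve(mainshock, egf, water_level=0.01):
            """Relative source-time function by water-level spectral division:
            STF(f) = M(f) conj(E(f)) / max(|E(f)|^2, w * max|E(f)|^2)."""
            n = len(mainshock)
            M = np.fft.rfft(mainshock, n)
            E = np.fft.rfft(egf, n)
            power = np.abs(E)**2
            floor = water_level * power.max()     # keeps the division stable
            return np.fft.irfft(M * np.conj(E) / np.maximum(power, floor), n)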

  8. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  9. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function to utilize constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually; the second is to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that individual P and S phases arrive at different stations in different order, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events, and the precise arrival times of P and S phases are difficult to
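
    At the heart of SSA is a "brightness" grid search: for every trial location and origin time, absolute amplitudes are stacked at the predicted arrival samples across the array, and bright spots mark sources. A minimal single-phase sketch, with straight-ray traveltimes at a constant speed standing in for the multi-phase (P and S) version the paper develops:

        import numpy as np

        def brightness(waveforms, dt, station_xyz, grid_xyz, origin_times, v=6.0):
            """br[g, k] = mean over stations of |u_n(t0_k + tt(grid_g, station_n))|.
            waveforms: (n_stations, n_samples); station_xyz, grid_xyz: (n, 3) in km;
            a constant wavespeed v in km/s is a simplifying assumption."""
            br = np.zeros((len(grid_xyz), len(origin_times)))
            for g, xyz in enumerate(grid_xyz):
                tt = np.linalg.norm(station_xyz - xyz, axis=1) / v   # traveltime per station
                for k, t0 in enumerate(origin_times):
                    idx = np.round((t0 + tt) / dt).astype(int)
                    ok = (idx >= 0) & (idx < waveforms.shape[1])
                    if ok.any():
                        br[g, k] = np.abs(waveforms[ok, idx[ok]]).mean()
            return br   # local maxima mark candidate hypocentres and origin times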

  10. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Engdahl, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product to use for seismic hazard studies. The new catalogue was necessary because improved seismic hazard studies require earthquake catalogues that are homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitations, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue, both for earthquakes that occurred after 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year programme that aims at including in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundred more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed both in the ISC-GEM global catalogue and in regional catalogues.
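
    The stated selection rule is easy to make concrete; a small filter over hypothetical (year, Ms) records:

        def passes_cutoff(year, ms):
            """ISC-GEM (1900-2009) time-variable cut-off magnitudes as described
            above: Ms >= 7.5 before 1918, Ms >= 6.25 for 1918-1963, and
            Ms >= 5.5 from 1964 onwards."""
            if year < 1918:
                return ms >= 7.5
            if year <= 1963:
                return ms >= 6.25
            return ms >= 5.5

        catalogue = [(1912, 7.8), (1950, 6.0), (1999, 5.6)]          # hypothetical records
        selected = [ev for ev in catalogue if passes_cutoff(*ev)]    # keeps 1912 and 1999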

  11. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence.

    PubMed

    Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe

    2017-09-01

    Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016-2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences.
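
    The Omori power law the abstract refers to gives the expected aftershock rate n(t) = K/(c + t)^p; a complex sequence like Amatrice-Norcia shows up as observed counts departing from this smooth decay. A minimal sketch of the modified Omori (Omori-Utsu) form, with illustrative parameter values:

        def omori_expected(t1, t2, K=100.0, c=0.1, p=1.1):
            """Expected number of aftershocks between t1 and t2 days after the
            mainshock, from the analytic integral of n(t) = K/(c + t)**p
            (valid for p != 1; K, c, p are illustrative, not fitted)."""
            return K * ((c + t2)**(1.0 - p) - (c + t1)**(1.0 - p)) / (1.0 - p)

        print(omori_expected(0.0, 1.0))     # expected events in the first day
        print(omori_expected(10.0, 30.0))   # expected events in days 10-30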

  12. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence

    PubMed Central

    Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe

    2017-01-01

    Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016–2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences. PMID:28924610

  13. The European-Mediterranean Earthquake Catalogue (EMEC) for the last millennium

    NASA Astrophysics Data System (ADS)

    Grünthal, Gottfried; Wahlström, Rutger

    2012-07-01

    The catalogue by Grünthal et al. (J Seismol 13:517-541, 2009a) of earthquakes in central, northern, and north-western Europe with Mw ≥ 3.5 (CENEC) has been expanded to cover also southern Europe and the Mediterranean area. It has also been extended in time (1000-2006). Due to the strongly increased seismicity in the new area, the threshold for events south of the latitude 44°N has here been set at Mw ≥ 4.0, keeping the lower threshold in the northern catalogue part. This part has been updated with data from new and revised national and regional catalogues. The new Euro-Mediterranean Earthquake Catalogue (EMEC) is based on data from some 80 domestic catalogues and data files and over 100 special studies. Available original Mw and M0 data have been introduced. The analysis largely followed the lines of the Grünthal et al. (J Seismol 13:517-541, 2009a) study, i.e., fake and duplicate events were identified and removed, polygons were specified within each of which one or more of the catalogues or data files have validity, and existing magnitudes and intensities were converted to Mw. Algorithms to compute Mw are based on relations provided locally, or more commonly on those derived by Grünthal et al. (J Seismol 13:517-541, 2009a) or in the present study. The homogeneity of EMEC with respect to Mw for the different constituents was investigated and improved where feasible. EMEC contains entries for some 45,000 earthquakes. For each event, the date, time, location (including focal depth if available), intensity I0 (if given in the original catalogue), magnitude Mw (with uncertainty when given), and source (catalogue or special study) are presented. Besides the main EMEC catalogue, large events before year 1000 in the SE part of the investigated area and fake events are given in separate lists.

  14. Building vulnerability and human loss assessment in different earthquake intensity and time: a case study of the University of the Philippines, Los Baños (UPLB) Campus

    NASA Astrophysics Data System (ADS)

    Rusydy, I.; Faustino-Eslava, D. V.; Muksin, U.; Gallardo-Zafra, R.; Aguirre, J. J. C.; Bantayan, N. C.; Alam, L.; Dakey, S.

    2017-02-01

    Studies of seismic hazard, building vulnerability, and human loss are essential for educational institutions, since their buildings are used by large numbers of students, lecturers, researchers, and guests. The University of the Philippines, Los Banos (UPLB) is located in an earthquake-prone area. An earthquake there could cause structural damage and injuries within the UPLB community. We have conducted earthquake assessments for different magnitudes and times of day to predict the likely ground shaking, building vulnerability, and number of casualties among the UPLB community. The data preparation in this study includes earthquake scenario modeling using Intensity Prediction Equations (IPEs) for shallow crustal shaking attenuation to produce intensity maps at bedrock and at the surface. Earthquake models were generated for segment IV and segment X of the Valley Fault System (VFS). The vulnerability of different building types was calculated using fragility curves for Philippine buildings. Population data for each building at various occupancy times, together with damage ratios and injury ratios, were used to compute the number of casualties. The results reveal that earthquakes on segment IV and segment X of the VFS could generate intensities between 7.6 and 8.1 MMI on the UPLB campus. A 7.7 Mw earthquake on segment IV (scenario I) could damage 32%-51% of buildings, and a 6.5 Mw earthquake on segment X (scenario II) could cause structural damage to 18%-39% of UPLB buildings. If the earthquake occurs at 2 PM (daytime), it could injure 10.2%-18.8% of the UPLB population in scenario I and 7.2%-15.6% in scenario II. A 5 PM event is predicted to injure 5.1%-9.4% in scenario I and 3.6%-7.8% in scenario II. A nighttime event (2 AM) would cause injuries to students and guests staying in dormitories, predicted at 13-66 students and guests in scenario I and 9-47 people in the
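
    The casualty arithmetic described above reduces to multiplying the occupancy at the scenario time by the damage and injury ratios; the numbers below are made up for illustration and are not UPLB figures.

        def expected_injuries(occupants, damage_ratio, injury_ratio):
            """Expected injuries = occupants x P(structural damage)
            x P(injury | damage); all ratios in [0, 1]."""
            return occupants * damage_ratio * injury_ratio

        # Hypothetical 2 PM scenario: 800 occupants, 40% damage, 30% injury given damage.
        print(expected_injuries(800, 0.40, 0.30))   # -> 96.0 expected injuries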

  15. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were:
    * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that the potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist.
    * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults.
    * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake.
    * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it.
    * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  16. Dynamic stress changes during earthquake rupture

    USGS Publications Warehouse

    Day, S.M.; Yu, G.; Wald, D.J.

    1998-01-01

    We assess two competing dynamic interpretations that have been proposed for the short slip durations characteristic of kinematic earthquake models derived by inversion of earthquake waveform and geodetic data. The first interpretation would require a fault constitutive relationship in which rapid dynamic restrengthening of the fault surface occurs after passage of the rupture front, a hypothesized mechanical behavior that has been referred to as "self-healing." The second interpretation would require sufficient spatial heterogeneity of stress drop to permit rapid equilibration of elastic stresses with the residual dynamic friction level, a condition we refer to as "geometrical constraint." These interpretations imply contrasting predictions for the time dependence of the fault-plane shear stresses. We compare these predictions with dynamic shear stress changes for the 1992 Landers (M 7.3), 1994 Northridge (M 6.7), and 1995 Kobe (M 6.9) earthquakes. Stress changes are computed from kinematic slip models of these earthquakes, using a finite-difference method. For each event, static stress drop is highly variable spatially, with high stress-drop patches embedded in a background of low, and largely negative, stress drop. The time histories of stress change show predominantly monotonic stress change after passage of the rupture front, settling to a residual level, without significant evidence for dynamic restrengthening. The stress change at the rupture front is usually gradual rather than abrupt, probably reflecting the limited resolution inherent in the underlying kinematic inversions. On the basis of this analysis, as well as recent similar results obtained independently for the Kobe and Morgan Hill earthquakes, we conclude that, at the present time, the self-healing hypothesis is unnecessary to explain earthquake kinematics.

  17. Testing new methodologies for short-term earthquake forecasting: Multi-parameter precursors

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Tramutoli, Valerio; Lee, Lou; Liu, Tiger; Hattori, Katsumi; Kafatos, Menas

    2014-05-01

    We are conducting real-time tests involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely gas discharge, thermal infrared radiation, ionospheric electron density, and atmospheric temperature and humidity, which we believe are all associated with the earthquake preparation phase. We are testing a methodology capable of producing alerts in advance of major earthquakes (M > 5.5) in different regions of active seismicity and volcanism. During 2012-2013 we established a collaborative framework with the PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) projects for coordinated measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Greece, Turkey, Taiwan (ROC), and Kamchatka and Sakhalin (Russia). The current experiment provided a "stress test" opportunity to validate the physically based earthquake precursor approach over regions of high seismicity. Our initial results are: (1) real-time tests have shown the presence of anomalies in the atmosphere and ionosphere before most of the significant (M > 5.5) earthquakes; (2) false positives exist, and their rates differ by region, ranging from 50% (southern Italy) and 35% (California) down to 25% (Taiwan, Kamchatka, and Japan), with a significant reduction in false positives when at least two geophysical parameters are used together; (3) the main remaining problems relate to the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that real-time testing of physically based pre-earthquake signals provides a short-term predictive power (in all three important parameters, namely location, time and magnitude) for the occurrence of major earthquakes in the tested regions and this result encourages testing to continue with a more detailed analysis of

  18. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.
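
    The abstract does not say which trigger algorithm the system used; the classic approach for this kind of automatic detection is a short-term/long-term average (STA/LTA) ratio on signal energy, sketched below with illustrative window lengths and threshold.

        import numpy as np

        def sta_lta_triggers(x, dt, sta_win=1.0, lta_win=30.0, threshold=4.0):
            """Return the sample indices where the STA/LTA ratio of the squared
            signal first crosses the threshold (window lengths in seconds)."""
            energy = np.asarray(x, dtype=float)**2
            ns, nl = int(sta_win / dt), int(lta_win / dt)
            sta = np.convolve(energy, np.ones(ns) / ns, mode='same')
            lta = np.convolve(energy, np.ones(nl) / nl, mode='same')
            ratio = sta / np.maximum(lta, 1e-20)
            onsets = np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold))
            return onsets + 1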

  19. Safety and survival in an earthquake

    USGS Publications Warehouse

    ,

    1969-01-01

    Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

  20. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and from the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the

  1. Robust Satellite Techniques (RST) for monitoring earthquake prone areas by satellite TIR observations: The case of 1999 Chi-Chi earthquake (Taiwan)

    NASA Astrophysics Data System (ADS)

    Genzano, N.; Filizzola, C.; Paciello, R.; Pergola, N.; Tramutoli, V.

    2015-12-01

    For more than 13 years, a multi-temporal data-analysis method named Robust Satellite Techniques (RST) has been applied to satellite Thermal InfraRed (TIR) monitoring of seismically active regions. It gives a clear definition of a TIR anomaly within a validation/confutation scheme devoted to verifying whether detected anomalies can be associated with the time and location of the occurrence of major earthquakes. In this scheme, the confutation part (i.e. verifying that similar anomalies do not occur in the absence of significant seismic activity) assumes a role even more important than the usual validation component, which is devoted to verifying the presence of anomalous signal transients before (or in association with) specific seismic events. Since 2001, the RST approach has been used to study tens of earthquakes with a wide range of magnitudes (from 4.0 to 7.9) that occurred on different continents and in various geo-tectonic settings. In this paper such long-term experience is exploited in order to give a quantitative definition of a significant sequence of TIR anomalies (SSTA) in terms of the required space-time continuity constraints (persistence), identifying also the different typologies of known spurious sequences of TIR anomalies that have to be excluded from the following validation steps. On the same basis, also taking into account the physical models proposed to justify the existence of a correlation between TIR anomalies and earthquake occurrence, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability - CSEP - Project) have been defined to drive the validation process. In this work, such an approach is applied for the first time to a long-term dataset of night-time GMS-5/VISSR (Geostationary Meteorological Satellite/Visible and Infrared Spin-Scan Radiometer) TIR measurements, comparing SSTAs and earthquakes with M > 4 which occurred in a wide area around Taiwan, in the month of September of
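
    Operationally, an RST-style TIR anomaly is a pixel whose current signal departs from a reference built, pixel by pixel, from many years of homogeneous scenes (same calendar period, same overpass time). A minimal sketch of that normalized index; the flagging threshold is illustrative.

        import numpy as np

        def rst_index(current, reference_stack):
            """Pixel-wise RST-style index: (signal - multi-year mean) / multi-year
            standard deviation. reference_stack: (n_years, ny, nx) co-located TIR
            scenes from the same calendar period; current: (ny, nx) scene under test."""
            mu = np.nanmean(reference_stack, axis=0)
            sigma = np.nanstd(reference_stack, axis=0)
            return (current - mu) / np.where(sigma > 0, sigma, np.nan)

        # Pixels with index above roughly 2-3 would be candidate TIR anomalies,
        # still subject to the space-time persistence constraints discussed above.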

  2. Earthquake precursors: activation or quiescence?

    NASA Astrophysics Data System (ADS)

    Rundle, John B.; Holliday, James R.; Yoder, Mark; Sachs, Michael K.; Donnellan, Andrea; Turcotte, Donald L.; Tiampo, Kristy F.; Klein, William; Kellogg, Louise H.

    2011-10-01

    We discuss the long-standing question of whether the probability for large earthquake occurrence (magnitudes m > 6.0) is highest during time periods of smaller event activation, or highest during time periods of smaller event quiescence. The physics of the activation model is based on an idea from the theory of nucleation: that a small-magnitude earthquake has a finite probability of growing into a large earthquake. The physics of the quiescence model is based on the idea that the occurrence of smaller earthquakes (here considered as magnitudes m > 3.5) may be due to a mechanism such as critical slowing down, in which fluctuations in systems with long-range interactions tend to be suppressed prior to large nucleation events. To illuminate this question, we construct two end-member forecast models illustrating, respectively, activation and quiescence. The activation model assumes only that activation can occur, either via aftershock nucleation or triggering, but expresses no choice as to which mechanism is preferred. Both of these models are in fact a means of filtering the seismicity time series to compute probabilities. Using 25 yr of data from the California-Nevada catalogue of earthquakes, we show that of the two models, activation and quiescence, the latter appears to be the better model, as judged by backtesting (by a slight but not significant margin). We then examine simulation data from a topologically realistic earthquake model for California seismicity, Virtual California. This model includes not only earthquakes produced from increases in stress on the fault system, but also background and off-fault seismicity produced by a BASS-ETAS driving mechanism. Applying the activation and quiescence forecast models to the simulated data, we come to the opposite conclusion: here, the activation forecast model is preferred to the quiescence model, presumably due to the fact that the BASS component of the model is essentially a model for activated seismicity. These

  3. PAGER - Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.
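
    In outline, the PAGER loss estimate is a sum over shaking-intensity bins of the exposed population times an empirically calibrated loss rate for that intensity in that country. The rates below are placeholders, not the calibrated PAGER model.

        # People exposed per Modified Mercalli Intensity bin (illustrative numbers).
        exposure = {6: 2_000_000, 7: 500_000, 8: 60_000, 9: 4_000}
        # Placeholder fatality rates per exposed person at each intensity.
        fatality_rate = {6: 1e-6, 7: 1e-5, 8: 1e-3, 9: 1e-2}

        expected_fatalities = sum(exposure[mmi] * fatality_rate[mmi] for mmi in exposure)
        print(round(expected_fatalities))   # the order of magnitude drives the alert level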

  4. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate earthquake prediction classification, comparison, and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  5. Chilean megathrust earthquake recurrence linked to frictional contrast at depth

    NASA Astrophysics Data System (ADS)

    Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.

    2018-04-01

    Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent, tsunamigenic great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (~60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.

  6. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
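
    The first two tests can be made concrete with a Poisson model: compare the total number of observed earthquakes to the number predicted, and score the forecast by the joint Poisson log-likelihood of the observed counts per space-magnitude bin. A minimal sketch with illustrative bin values:

        import numpy as np
        from math import lgamma

        def poisson_loglik(observed, forecast):
            """Joint log-likelihood of observed counts under independent Poisson
            rates given by the forecast (the likelihood score of test ii)."""
            obs = np.asarray(observed, dtype=float)
            lam = np.asarray(forecast, dtype=float)
            return float(np.sum(obs * np.log(lam) - lam
                                - np.array([lgamma(o + 1.0) for o in obs])))

        forecast = np.array([0.5, 1.2, 0.1, 2.0])   # expected events per bin (illustrative)
        observed = np.array([1, 0, 0, 3])
        print(observed.sum(), forecast.sum())       # test (i): observed vs predicted number
        print(poisson_loglik(observed, forecast))   # tests (ii)/(iii): compare to a null's score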

  7. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  8. Earthquake damage history in Israel and its close surrounding - evaluation of spatial and temporal patterns

    NASA Astrophysics Data System (ADS)

    Zohar, Motti; Salamon, Amos; Rubin, Rehav

    2017-01-01

    Israel was hit by destructive earthquakes many times in the course of history. To properly understand the hazard and support effective preparedness for future earthquakes, we examined the spatial and temporal distribution of the resulting damage. We describe in detail our systematic approach to searching the available literature, collecting the data, and screening the authenticity of that information. We used GIS (Geographic Information System) to map and evaluate the distribution of the damage and to search for recurring patterns. Overall, it is found that 186 localities were hit, 54 of them at least twice. We also found that Israel was affected by 4, 17, 8 and 2 damaging earthquakes that originated, respectively, from the southern, central, central-northern and northern parts of the Dead Sea Transform (DST). The temporal appearance of the northern earthquakes is clustered; the central earthquakes are more regular in time, whereas no damage from the north-central and the central quakes, with the exception of the year 363 earthquake, seems to have occurred south of the Dead Sea region. Analyzing the distribution of the damage, we realized that the number of damage reports reflects only half of the incidents that actually happened, attesting to the incompleteness of the historical catalogue. Jerusalem is the most reported city with 14 entries, followed by Akko (Acre), Tiberias, Nablus and Tyre with 8, 7, 7 and 6 reports, respectively. In general, localities in the Galilee and north of it suffered more severely than localities in central Israel, with the exception of Nablus and the localities along the coastal plain of Israel, most probably due to local site effects. For the sake of hazard management, these observations should be considered in future planning and risk mitigation.

  9. Fault Weakening due to Erosion by Fluids: A Possible Origin of Intraplate Earthquake Swarms

    NASA Astrophysics Data System (ADS)

    Vavrycuk, V.; Hrubcova, P.

    2016-12-01

    The occurrence and specific properties of earthquake swarms in geothermal areas are usually attributed to a highly fractured rock and/or heterogeneous stress within the rock mass, triggered by magmatic or hydrothermal fluid intrusion. The increase of fluid pressure destabilizes fractures and causes their opening and subsequent shear-tensile rupture. The spreading and evolution of the seismic activity is controlled by fluid flow due to diffusion in a permeable rock and/or by the redistribution of Coulomb stress. The 'fluid-injection model', however, is not valid universally. We provide evidence that this model is inconsistent with observations of earthquake swarms in West Bohemia, Czech Republic. Full seismic moment tensors of micro-earthquakes in the 1997 and 2008 swarms in West Bohemia indicate that fracturing at the starting phase of the swarm was not associated with fault openings caused by pressurized fluids but rather with fault compactions. This can physically be explained by a 'fluid-erosion model', in which the essential role in swarm triggering is attributed to chemical and hydrothermal fluid-rock interactions in the focal zone. Since the rock is exposed to circulating hydrothermal, CO2-saturated fluids, the walls of fractures are weakened by dissolving and altering various minerals. If the fault strength lowers to a critical value, seismicity is triggered. The fractures are compacted during failure, the fault strength recovers, and a new cycle begins.

  10. Laboratory constraints on models of earthquake recurrence

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Tullis, Terry; Junger, Jenni; Kilgore, Brian; Goldsby, David

    2014-12-01

    In this study, rock friction "stick-slip" experiments are used to develop constraints on models of earthquake recurrence. Constant rate loading of bare rock surfaces in high-quality experiments produces stick-slip recurrence that is periodic at least to second order. When the loading rate is varied, recurrence is approximately inversely proportional to loading rate. These laboratory events initiate due to a slip-rate-dependent process that also determines the size of the stress drop and, as a consequence, stress drop varies weakly but systematically with loading rate. This is especially evident in experiments where the loading rate is changed by orders of magnitude, as is thought to be the loading condition of naturally occurring, small repeating earthquakes driven by afterslip, or low-frequency earthquakes loaded by episodic slip. The experimentally observed stress drops are well described by a logarithmic dependence on recurrence interval that can be cast as a nonlinear slip predictable model. The fault's rate dependence of strength is the key physical parameter. Additionally, even at constant loading rate the most reproducible laboratory recurrence is not exactly periodic, unlike existing friction recurrence models. We present example laboratory catalogs that document the variance and show that in large catalogs, even at constant loading rate, stress drop and recurrence covary systematically. The origin of this covariance is largely consistent with variability of the dependence of fault strength on slip rate. Laboratory catalogs show aspects of both slip and time predictability, and successive stress drops are strongly correlated indicating a "memory" of prior slip history that extends over at least one recurrence cycle.

  11. Volcano-earthquake interaction at Mauna Loa volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Walter, Thomas R.; Amelung, Falk

    2006-05-01

    The activity at Mauna Loa volcano, Hawaii, is characterized by eruptive fissures that propagate into the Southwest Rift Zone (SWRZ) or into the Northeast Rift Zone (NERZ) and by large earthquakes at the basal decollement fault. In this paper we examine the historic eruption and earthquake catalogues, and we test the hypothesis that the events are interconnected in time and space. Earthquakes in the Kaoiki area occur in sequence with eruptions from the NERZ, and earthquakes in the Kona and Hilea areas occur in sequence with eruptions from the SWRZ. Using three-dimensional numerical models, we demonstrate that elastic stress transfer can explain the observed volcano-earthquake interaction. We examine stress changes due to typical intrusions and earthquakes. We find that intrusions change the Coulomb failure stress along the decollement fault so that NERZ intrusions encourage Kaoiki earthquakes and SWRZ intrusions encourage Kona and Hilea earthquakes. On the other hand, earthquakes decompress the magma chamber and unclamp part of the Mauna Loa rift zone, i.e., Kaoiki earthquakes encourage NERZ intrusions, whereas Kona and Hilea earthquakes encourage SWRZ intrusions. We discuss how changes of the static stress field affect the occurrence of earthquakes as well as the occurrence, location, and volume of dikes and of associated eruptions and also the lava composition and fumarolic activity.

  12. Toggling of seismicity by the 1997 Kagoshima earthquake couplet: A demonstration of time-dependent stress transfer

    USGS Publications Warehouse

    Toda, S.; Stein, R.

    2003-01-01

    Two M ≈ 6 well-recorded strike-slip earthquakes struck just 4 km and 48 days apart in Kagoshima prefecture, Japan, in 1997, providing an opportunity to study earthquake interaction. Aftershocks are abundant where the Coulomb stress is calculated to have been increased by the first event, and they abruptly stop where the stress is dropped by the second event. This ability of the main shocks to toggle seismicity on and off argues that static stress changes play a major role in exciting aftershocks, whereas the dynamic Coulomb stresses, which should only promote seismicity, appear to play a secondary role. If true, the net stress changes from a sequence of earthquakes might be expected to govern the subsequent seismicity distribution. However, adding the stress changes from the two Kagoshima events does not fully capture the ensuing seismicity, such as its rate change, temporal decay, or migration away from the ends of the ruptures. We therefore implement a stress transfer model that incorporates rate/state friction, in which seismicity is treated as a sequence of independent nucleation events that are dependent on the fault slip, slip rate, and elapsed time since the last event. The model reproduces the temporal response of seismicity to successive stress changes, including toggling, decay, and aftershock migration. Nevertheless, the match of observed to predicted seismicity is quite imperfect, due perhaps to inadequate knowledge of several model parameters. However, to demonstrate the potential of this approach, we build a probabilistic forecast of larger earthquakes on the expected rate of small aftershocks, taking advantage of the large statistical sample the small shocks afford. Not surprisingly, such probabilities are highly time- and location-dependent: During the first decade after the main shocks, the seismicity rate and the chance of successive large shocks are about an order of magnitude higher than the background rate and are concentrated exclusively in
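
    The rate/state stress-transfer model used here follows Dieterich (1994), in which a sudden Coulomb stress step changes the seismicity rate relative to background in closed form. A minimal sketch, with illustrative values of the constitutive parameter A-sigma and the aftershock duration t_a, showing the toggling behavior:

        import numpy as np

        def dieterich_rate(t, dtau, a_sigma=0.04, t_a=10.0):
            """Seismicity rate relative to background after a sudden Coulomb
            stress step dtau (MPa), after Dieterich (1994):
                R/r = 1 / ((exp(-dtau/(A*sigma)) - 1) * exp(-t/t_a) + 1)
            with t and t_a in years and A*sigma in MPa (values illustrative).
            Positive dtau raises the rate, negative dtau shuts seismicity off,
            and both relax back toward 1 over roughly t_a."""
            return 1.0 / ((np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

        t = np.linspace(0.01, 30.0, 5)
        print(dieterich_rate(t, +0.1))   # stress increase: rate jumps, then decays
        print(dieterich_rate(t, -0.1))   # stress shadow: rate drops, then recovers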

  13. Tsunami potential assessment based on rupture zones, focal mechanisms and repeat times of strong earthquakes in the major Atlantic-Mediterranean seismic fracture zone

    NASA Astrophysics Data System (ADS)

    Agalos, Apostolos; Papadopoulos, Gerassimos A.; Kijko, Andrzej; Papageorgiou, Antonia; Smit, Ansie; Triantafyllou, Ioanna

    2016-04-01

    In the major Atlantic-Mediterranean seismic fracture zone, extending from the Azores islands in the west to the easternmost Mediterranean Sea in the east, and including the Marmara and Black Seas, 22 tsunamigenic zones have been determined from historical and instrumental tsunami documentation. Although some tsunamis were produced by volcanic activity or landslides, the majority were generated by strong earthquakes. Since the generation of seismic tsunamis depends on several factors, such as the earthquake size, focal depth, and focal mechanism, the study of such parameters is of particular importance for the assessment of the potential for the generation of future tsunamis. However, one may not rule out the possibility of tsunami generation in areas outside of the 22 zones determined so far. For the Atlantic-Mediterranean seismic fracture zone we have compiled a catalogue of strong, potentially tsunamigenic (focal depth less than 100 km) historical earthquakes from various databases and other sources. The lateral extents of the rupture zones of these earthquakes were determined. The rupture zone is the area where the strain after the earthquake has dropped substantially with respect to the strain before the earthquake. Aftershock areas were used to delineate rupture zones for instrumental earthquakes. For historical earthquakes, macroseismic criteria were used, such as spots of higher-degree seismic intensity and of important ground failures. For the period of instrumental seismicity, focal mechanism solutions from the CMT, EMMA, and other databases were selected for strong earthquakes. From the geographical distribution of seismic rupture zones and the corresponding focal mechanisms in the entire Atlantic-Mediterranean seismic fracture zone we determined potentially tsunamigenic zones, regardless of whether they are known to have produced seismic tsunamis in the past. An attempt has been made to calculate in each one of such zones the repeat times of strong

  14. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily and allows daily updates of the models. However, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be produced by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  15. Three-dimensional deformation caused by the Bam, Iran, earthquake and the origin of shallow slip deficit.

    PubMed

    Fialko, Yuri; Sandwell, David; Simons, Mark; Rosen, Paul

    2005-05-19

    Our understanding of the earthquake process requires detailed insights into how the tectonic stresses are accumulated and released on seismogenic faults. We derive the full vector displacement field due to the Bam, Iran, earthquake of moment magnitude 6.5 using radar data from the Envisat satellite of the European Space Agency. Analysis of surface deformation indicates that most of the seismic moment release along the 20-km-long strike-slip rupture occurred at a shallow depth of 4-5 km, yet the rupture did not break the surface. The Bam event may therefore represent an end-member case of the 'shallow slip deficit' model, which postulates that coseismic slip in the uppermost crust is systematically less than that at seismogenic depths (4-10 km). The InSAR-derived surface displacement data from the Bam and other large shallow earthquakes suggest that the uppermost section of the seismogenic crust around young and developing faults may undergo a distributed failure in the interseismic period, thereby accumulating little elastic strain.

  16. Earthquake correlations and networks: A comparative study

    NASA Astrophysics Data System (ADS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog, with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified, employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distributions of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of the recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. The in-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained for the recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to signal the end of the aftershock regime in an objective fashion.
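
    A minimal sketch of this kind of correlation metric follows, assuming the common Baiesi-Paczuski parameterization n_ij = t_ij * r_ij^df * 10^(-b*mi), where a small n_ij marks a strongly correlated pair; the b-value and fractal dimension df below are illustrative defaults, and the paper's variation on the metric may differ.

    ```python
    import math

    def bp_metric(t_i, t_j, x_i, y_i, x_j, y_j, m_i, b=1.0, d_f=1.6):
        """Baiesi-Paczuski metric n_ij for event j relative to an earlier event i.
        Times in seconds, coordinates in km; smaller n_ij = stronger correlation."""
        dt = t_j - t_i
        if dt <= 0:
            return math.inf
        r = math.hypot(x_j - x_i, y_j - y_i)
        return dt * r ** d_f * 10.0 ** (-b * m_i)

    def build_links(catalog):
        """Link each event to its most correlated predecessor (smallest n_ij).
        catalog: list of (t, x, y, m) tuples sorted by time."""
        parents = [None]
        for j in range(1, len(catalog)):
            t_j, x_j, y_j, _ = catalog[j]
            best, best_n = None, math.inf
            for i in range(j):
                t_i, x_i, y_i, m_i = catalog[i]
                n = bp_metric(t_i, t_j, x_i, y_i, x_j, y_j, m_i)
                if n < best_n:
                    best, best_n = i, n
            parents.append(best)
        return parents
    ```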

  17. Real-Time seismic waveforms monitoring with BeiDou Navigation Satellite System (BDS) observations for the 2015 Mw 7.8 Nepal earthquake

    NASA Astrophysics Data System (ADS)

    Geng, T.

    2015-12-01

    Nowadays more and more high-rate Global Navigation Satellite System (GNSS) data become available in real time, which provides more opportunities to monitor seismic waveforms. China's GNSS, the BeiDou Navigation Satellite System (BDS), already satisfies the requirement of stand-alone precise positioning in the Asia-Pacific region with 14 in-orbit satellites, which promisingly suggests that BDS could be applied to high-precision earthquake monitoring as GPS is. In the present paper, real-time monitoring of seismic waveforms using BDS measurements is assessed. We investigate a so-called "variometric" approach to measure real-time seismic waveforms with high-rate BDS observations. This approach is based on a time-differencing technique and standard broadcast products that are routinely available in real time. The 1 Hz BDS data recorded by the BeiDou Experimental Tracking Stations (BETS) during the 2015 Mw 7.8 Nepal earthquake are analyzed. The results indicate that the accuracies of velocity estimation from BDS are 2-3 mm/s in the horizontal components and 8-9 mm/s in the vertical component, consistent with GPS. The seismic velocity waveforms during the earthquake show good agreement between BDS and GPS. Moreover, the displacement waveforms are reconstructed by integration of the velocity time series with trend removal. Displacement waveforms with an accuracy of 1-2 cm are derived, as verified by comparison with post-processed GPS precise point positioning (PPP).
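
    The final step described above, integrating variometric velocities into displacements with trend removal, can be sketched as follows; the linear detrending model is an assumption of this sketch, since the abstract does not specify the trend-removal scheme.

    ```python
    import numpy as np

    def integrate_with_detrend(vel, dt=1.0):
        """Integrate a velocity series (m/s) sampled every dt seconds into
        displacement, then remove a least-squares linear trend to suppress the
        drift that accumulates from small biases in epoch-differenced velocities."""
        disp = np.cumsum(vel) * dt
        t = np.arange(len(disp)) * dt
        A = np.vstack([t, np.ones_like(t)]).T
        coeff, *_ = np.linalg.lstsq(A, disp, rcond=None)
        return disp - A @ coeff

    # Synthetic 1 Hz velocity waveform with a 0.5 mm/s bias
    t = np.arange(0.0, 70.0, 1.0)
    vel = 0.01 * np.sin(2 * np.pi * 0.1 * t) + 5e-4
    disp = integrate_with_detrend(vel)
    ```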

  18. Earthquake Relocation in the Middle East with Geodetically-Calibrated Events

    NASA Astrophysics Data System (ADS)

    Brengman, C.; Barnhart, W. D.

    2017-12-01

    Regional and global earthquake catalogs in tectonically active regions commonly contain mislocated earthquakes that impede efforts to address first order characteristics of seismogenic strain release and to monitor anthropogenic seismic events through the Comprehensive Nuclear-Test-Ban Treaty. Earthquake mislocations are particularly limiting in the plate boundary zone between the Arabia and Eurasia plates of Iran, Pakistan, and Turkey, where earthquakes are commonly mislocated by 20+ kilometers and hypocentral depths are virtually unconstrained. Here, we present preliminary efforts to incorporate calibrated earthquake locations derived from Interferometric Synthetic Aperture Radar (InSAR) observations into a relocated catalog of seismicity in the Middle East. We use InSAR observations of co-seismic deformation to determine the locations, geometries, and slip distributions of small to moderate magnitude (M4.8+) crustal earthquakes. We incorporate this catalog of calibrated event locations, along with other seismologically-calibrated earthquake locations, as "priors" into a fully Bayesian multi-event relocation algorithm that relocates all teleseismically and regionally recorded earthquakes over the time span 1970-2017, including calibrated and uncalibrated events. Our relocations are conducted using cataloged phase picks and BayesLoc. We present a suite of sensitivity tests for the time span 2003-2014 to explore the impacts of our input parameters (i.e., how a point source is defined from a finite fault inversion) on the behavior of the event relocations, potential improvements to depth estimates, the ability of the relocation to recover locations outside of the time span in which there are InSAR observations, and the degree to which our relocations can recover "known" calibrated earthquake locations that are not explicitly included as a priori constraints. Additionally, we present a systematic comparison of earthquake relocations derived from phase picks of two

  19. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis on the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  20. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    PubMed

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  1. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation

    PubMed Central

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-01-01

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897

  2. Thermal Infrared Anomalies of Several Strong Earthquakes

    PubMed Central

    Wei, Congxin; Guo, Xiao; Qin, Manzhong

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitude up to Ms 7.0, using satellite infrared remote sensing information. We used new types of data and a new method to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes for all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of “time-frequency relative power spectrum.” (2) Evident and distinct characteristic periods and magnitudes of anomalous thermal radiation exist for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting. PMID:24222728

  3. Thermal infrared anomalies of several strong earthquakes.

    PubMed

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitude up to Ms 7.0, using satellite infrared remote sensing information. We used new types of data and a new method to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes for all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) Evident and distinct characteristic periods and magnitudes of anomalous thermal radiation exist for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in anomaly duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.

  4. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
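
    The core measurement, a relative arrival time from time-domain cross correlation refined to subsample precision, can be sketched as below; the parabolic peak interpolation is one standard refinement, and the windowing and weighting used in the actual study are more elaborate.

    ```python
    import numpy as np

    def relative_arrival(a, b, dt):
        """Lag of waveform b relative to a (seconds) from normalized time-domain
        cross correlation, refined with a parabolic fit around the peak."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        cc = np.correlate(b, a, mode="full") / len(a)
        k = int(np.argmax(cc))
        if 0 < k < len(cc) - 1:   # parabolic (subsample) interpolation
            y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
            delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        else:
            delta = 0.0
        lag = (k - (len(a) - 1)) + delta
        return lag * dt, cc[k]

    dt = 0.01
    a = np.sin(2 * np.pi * 2.0 * np.arange(0.0, 2.0, dt))
    b = np.roll(a, 13)                   # b is a delayed copy of a
    print(relative_arrival(a, b, dt))    # lag close to 0.13 s, coefficient near 1
    ```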

  5. Napa Earthquake impact on water systems

    NASA Astrophysics Data System (ADS)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24, 2014, at 3 a.m. local time, with a magnitude of 6.0. The earthquake was the largest in the SF Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Winemakers cleaned up and estimated the damage to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can raise water-pollution risks and could cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent groundwater and surface-water pollution from earthquakes could be helpful. This research gives a clear view of the drinking water system in California and pollution of river systems, as well as an estimation of earthquake impacts on water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water export, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major Delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: it could potentially damage the freshwater supply system.

  6. Acceleration spectra for subduction zone earthquakes

    USGS Publications Warehouse

    Boatwright, J.; Choy, G.L.

    1989-01-01

    We estimate the source spectra of shallow earthquakes from digital recordings of teleseismic P wave groups, that is, P+pP+sP, by making frequency dependent corrections for the attenuation and for the interference of the free surface. The correction for the interference of the free surface assumes that the earthquake radiates energy from a range of depths. We apply this spectral analysis to a set of 12 subduction zone earthquakes which range in size from Ms = 6.2 to 8.1, obtaining corrected P wave acceleration spectra on the frequency band from 0.01 to 2.0 Hz. Seismic moment estimates from surface waves and normal modes are used to extend these P wave spectra to the frequency band from 0.001 to 0.01 Hz. The acceleration spectra of large subduction zone earthquakes, that is, earthquakes whose seismic moments are greater than 10^27 dyn cm, exhibit intermediate slopes where ü(ω) ∝ ω^(5/4) for frequencies from 0.005 to 0.05 Hz. For these earthquakes, spectral shape appears to be a discontinuous function of seismic moment. Using reasonable assumptions for the phase characteristics, we transform the spectral shape observed for large earthquakes into the time domain to fit Ekstrom's (1987) moment rate functions for the Ms = 8.1 Michoacan earthquake of September 19, 1985, and the Ms = 7.6 Michoacan aftershock of September 21, 1985. -from Authors
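
    The frequency-dependent attenuation correction mentioned above is commonly applied as multiplication by exp(pi f t*) in the frequency domain; the sketch below assumes that standard operator with an illustrative t* value, which is not taken from the paper.

    ```python
    import numpy as np

    def correct_spectrum(freqs, amp, t_star=1.0):
        """Undo anelastic attenuation on an observed amplitude spectrum by
        multiplying with exp(pi * f * t*); t* ~ 1 s is an assumed value for
        teleseismic P waves."""
        return amp * np.exp(np.pi * np.asarray(freqs) * t_star)

    # Synthetic example: an omega^-1 spectrum attenuated with t* = 1 s
    freqs = np.linspace(0.01, 2.0, 200)
    observed = freqs ** -1.0 * np.exp(-np.pi * freqs * 1.0)
    corrected = correct_spectrum(freqs, observed)   # recovers the omega^-1 shape
    ```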

  7. The behaviour of reinforced concrete structure due to earthquake load using Time History analysis Method

    NASA Astrophysics Data System (ADS)

    Afifuddin, M.; Panjaitan, M. A. R.; Ayuna, D.

    2017-02-01

    Earthquakes are one of the most dangerous, destructive and unpredictable natural hazards, which can leave everything up to a few hundred kilometres away in complete destruction in seconds. Indonesia has a unique position as an earthquake-prone country: it is the place of interaction of three tectonic plates, namely the Indo-Australian, Eurasian and Pacific plates. Banda Aceh is one of the cities located in earthquake-prone areas. Due to the vulnerable condition of Banda Aceh, some efforts have been made to reduce these unfavourable conditions. Many aspects have been addressed, from community awareness up to engineering solutions. One of them is that all buildings built in the city should be designed as earthquake-resistant buildings. The objectives of this research are to observe the response of a reinforced concrete structure to several types of earthquake load, and to assess the performance of the structure after the earthquake loads are applied. Many buildings have been built after the 2004 tsunami; one of them is a hotel building located at Simpang Lima. The hotel is made of reinforced concrete with a height of 34.95 meters and a total building area of 8872.5 m2. So far this building is the tallest building in Banda Aceh.

  8. The USGS Earthquake Notification Service (ENS): Customizable notifications of earthquakes around the globe

    USGS Publications Warehouse

    Wald, Lisa A.; Wald, David J.; Schwarz, Stan; Presgrave, Bruce; Earle, Paul S.; Martinez, Eric; Oppenheimer, David

    2008-01-01

    At the beginning of 2006, the U.S. Geological Survey (USGS) Earthquake Hazards Program (EHP) introduced a new automated Earthquake Notification Service (ENS) to take the place of the National Earthquake Information Center (NEIC) "Bigquake" system and the various other individual EHP e-mail list-servers for separate regions in the United States. These included northern California, southern California, and the central and eastern United States. ENS is a "one-stop shopping" system that allows Internet users to subscribe to flexible and customizable notifications for earthquakes anywhere in the world. The customization capability allows users to define the what (magnitude threshold), the when (day and night thresholds), and the where (specific regions) for their notifications. Customization is achieved by employing a per-user based request profile, allowing the notifications to be tailored for each individual's requirements. Such earthquake-parameter-specific custom delivery was not possible with simple e-mail list-servers. Now that event and user profiles are in a structured query language (SQL) database, additional flexibility is possible. At the time of this writing, ENS had more than 114,000 subscribers, with more than 200,000 separate user profiles. On a typical day, more than 188,000 messages get sent to a variety of widely distributed users for a wide range of earthquake locations and magnitudes. The purpose of this article is to describe how ENS works, highlight the features it offers, and summarize plans for future developments.
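
    The per-user request profile described above reduces to a simple what/when/where filter. A toy sketch follows; the field names and day/night convention are hypothetical, not the actual ENS schema.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Profile:
        """Hypothetical notification profile (illustrative fields only)."""
        day_mag: float         # magnitude threshold during daytime hours
        night_mag: float       # threshold during nighttime hours
        lat_min: float
        lat_max: float
        lon_min: float
        lon_max: float
        night_start: int = 22  # local hour when "night" begins
        night_end: int = 6     # local hour when "night" ends

    def should_notify(p, mag, lat, lon, local_hour):
        """Apply the what (magnitude), when (day/night) and where (region) filters."""
        in_region = p.lat_min <= lat <= p.lat_max and p.lon_min <= lon <= p.lon_max
        is_night = local_hour >= p.night_start or local_hour < p.night_end
        return in_region and mag >= (p.night_mag if is_night else p.day_mag)

    p = Profile(4.0, 5.0, 32.0, 42.0, -125.0, -114.0)
    print(should_notify(p, 4.5, 36.1, -120.3, 14))  # True: daytime, M >= 4.0, in region
    ```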

  9. The plan to coordinate NEHRP post-earthquake investigations

    USGS Publications Warehouse

    Holzer, Thomas L.; Borcherdt, Roger D.; Comartin, Craig D.; Hanson, Robert D.; Scawthorn, Charles R.; Tierney, Kathleen; Youd, T. Leslie

    2003-01-01

    This is the plan to coordinate domestic and foreign post-earthquake investigations supported by the National Earthquake Hazards Reduction Program (NEHRP). The plan addresses coordination of both the NEHRP agencies—Federal Emergency Management Agency (FEMA), National Institute of Standards and Technology (NIST), National Science Foundation (NSF), and U.S. Geological Survey (USGS)—and their partners. The plan is a framework for both coordinating what is going to be done and identifying responsibilities for post-earthquake investigations. It does not specify what will be done. Coordination is addressed in various time frames ranging from hours to years after an earthquake. The plan includes measures for (1) gaining rapid and general agreement on high-priority research opportunities, and (2) conducting the data gathering and field studies in a coordinated manner. It deals with identification, collection, processing, documentation, archiving, and dissemination of the results of post-earthquake work in a timely manner and easily accessible format.

  10. The Role of Deep Creep in the Timing of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Sammis, C. G.; Smith, S. W.

    2012-12-01

    The observed temporal clustering of the world's largest earthquakes has been largely discounted for two reasons: (a) it is consistent with Poisson clustering, and (b) no physical mechanism leading to such clustering has been proposed. This lack of a mechanism arises primarily because the static stress transfer mechanism, commonly used to explain aftershocks and the clustering of large events on localized fault networks, does not work at global distances. However, there is recent observational evidence that the surface waves from large earthquakes trigger non-volcanic tremor at the base of fault zones at global distances. Based on these observations, we develop a simple non-linear coupled oscillator model that shows how the triggering of such tremor can lead to the synchronization of large earthquakes on a global scale. A basic assumption of the model is that induced tremor is a proxy for deep creep that advances the seismic cycle of the fault. We support this hypothesis by demonstrating that the 2010 Maule, Chile and the 2011 Fukushima, Japan earthquakes, which have been shown to induce tremor on the Parkfield segment of the San Andreas Fault, also produced changes in off-fault seismicity that are spatially and temporally consistent with episodes of deep creep on the fault. The observed spatial pattern can be simulated using an Okada dislocation model for deep creep (below 20 km) on the fault plane in which the slip rate decreases from north to south, consistent with surface creep measurements, and deepens south of the "Parkfield asperity" as indicated by recent tremor locations. The model predicts that the off-fault events should have reverse mechanisms consistent with the observed topography.

  11. USGS approach to real-time estimation of earthquake-triggered ground failure - Results of 2015 workshop

    USGS Publications Warehouse

    Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.

    2016-03-30

    The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.

  12. Populating the Advanced National Seismic System Comprehensive Earthquake Catalog

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Perry, M. R.; Andrews, J. R.; Withers, M. M.; Hellweg, M.; Kim, W. Y.; Shiro, B.; West, M. E.; Storchak, D. A.; Pankow, K. L.; Huerfano Moreno, V. A.; Gee, L. S.; Wolfe, C. J.

    2016-12-01

    The U.S. Geological Survey maintains a repository of earthquake information produced by networks in the Advanced National Seismic System with additional data from the ISC-GEM catalog and many non-U.S. networks through their contributions to the National Earthquake Information Center PDE bulletin. This Comprehensive Catalog (ComCat) provides a unified earthquake product while preserving attribution and contributor information. ComCat contains hypocenter and magnitude information with supporting phase arrival-time and amplitude measurements (when available). Higher-level products such as focal mechanisms, earthquake slip models, "Did You Feel It?" reports, ShakeMaps, PAGER impact estimates, earthquake summary posters, and tectonic summaries are also included. ComCat is updated as new events are processed and the catalog can be accessed at http://earthquake.usgs.gov/earthquakes/search/. Throughout the past few years, a concentrated effort has been underway to expand ComCat by integrating global and regional historic catalogs. The number of earthquakes in ComCat has more than doubled in the past year and it presently contains over 1.6 million earthquake hypocenters. We will provide an overview of catalog contents and a detailed description of numerous tools and semi-automated quality-control procedures developed to uncover errors including systematic magnitude biases, missing time periods, duplicate postings for the same events, and incorrectly associated events.

  13. Differential energy radiation from two earthquakes in Japan with identical Mw: The Kyushu 1996 and Tottori 2000 earthquakes

    USGS Publications Warehouse

    Choy, G.L.; Boatwright, J.

    2009-01-01

    We examine two closely located earthquakes in Japan that had identical moment magnitudes Mw but significantly different energy magnitudes Me. We use teleseismic data from the Global Seismograph Network and strong-motion data from the National Research Institute for Earth Science and Disaster Prevention's K-Net to analyze the 19 October 1996 Kyushu earthquake (Mw 6.7, Me 6.6) and the 6 October 2000 Tottori earthquake (Mw 6.7, Me 7.4). To obtain regional estimates of radiated energy ES we apply a spectral technique to regional (<200 km) waveforms that are dominated by S and Lg waves. For the thrust-fault Kyushu earthquake, we estimate an average regional attenuation Q(f) = 230 f^0.65. For the strike-slip Tottori earthquake, the average regional attenuation is Q(f) = 180 f^0.6. These attenuation functions are similar to those derived from studies of both California and Japan earthquakes. The regional estimate of ES for the Kyushu earthquake, 3.8 × 10^14 J, is significantly smaller than that for the Tottori earthquake, ES = 1.3 × 10^15 J. These estimates correspond well with the teleseismic estimates of 3.9 × 10^14 J and 1.8 × 10^15 J, respectively. The apparent stress (τa = μES/M0, with μ equal to rigidity) for the Kyushu earthquake is 4 times smaller than the apparent stress for the Tottori earthquake. In terms of the fault maturity model, the significantly greater release of energy by the strike-slip Tottori earthquake can be related to strong deformation in an immature intraplate setting. The relatively lower energy release of the thrust-fault Kyushu earthquake can be related to rupture on mature faults in a subduction environment. The consistency between teleseismic and regional estimates of ES is particularly significant, as teleseismic data for computing ES are routinely available for all large earthquakes, whereas near-field data often are not.
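
    The apparent-stress comparison follows directly from the abstract's energy estimates. In the arithmetic sketch below, the moment for Mw 6.7 is taken from the standard relation M0 = 10^(1.5 Mw + 9.1) N m and the rigidity mu = 3e10 Pa is a typical crustal value; both are assumptions of this sketch rather than values quoted in the paper.

    ```python
    MU = 3.0e10                   # rigidity (Pa); standard crustal value, assumed
    M0 = 10 ** (1.5 * 6.7 + 9.1)  # seismic moment (N m) for Mw 6.7, about 1.4e19

    for name, Es in [("Kyushu 1996", 3.8e14), ("Tottori 2000", 1.3e15)]:
        tau_a = MU * Es / M0      # apparent stress tau_a = mu * E_S / M0 (Pa)
        print(f"{name}: tau_a ~ {tau_a / 1e6:.2f} MPa")
    # The ratio is roughly 3.4, consistent with the abstract's factor-of-4 contrast.
    ```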

  14. In search of earthquake-related hydrologic and chemical changes along Hayward Fault

    USGS Publications Warehouse

    King, C.-Y.; Basler, D.; Presser, T.S.; Evans, William C.; White, L.D.; Minissale, A.

    1994-01-01

    Flow and chemical measurements have been made about once a month, and more frequently when required, since 1976 at two springs in Alum Rock Park in eastern San Jose, California, and since 1980 at two shallow wells in eastern Oakland in search of earthquake-related changes. All sites are on or near the Hayward Fault and are about 55 km apart. Temperature, electric conductivity, and water level or flow rate were measured in situ with portable instruments. Water samples were collected for later chemical and isotopic analyses in the laboratory. The measured flow rate at one of the springs showed a long-term decrease of about 40% since 1987, when a multi-year drought began in California. It also showed several increases that lasted a few days to a few months with amplitudes of 2.4 to 8.6 times the standard deviations above the background rate. Five of these increases were recorded shortly after nearby earthquakes of magnitude 5.0 or larger, and may have resulted from unclogging of the flow path and increase of permeability caused by strong seismic shaking. Two other flow increases were possibly induced by exceptionally heavy rainfalls. The water in both wells showed seasonal temperature and chemical variations, largely in response to rainfall. In 1980 the water also showed some clear chemical changes unrelated to rainfall that lasted a few months; these changes were followed by a magnitude 4 earthquake 37 km away. The chemical composition at one of the wells and at the springs also showed some longer-term variations that were not correlated with rainfall but possibly correlated with the five earthquakes mentioned above. These correlations suggest a common tectonic origin for the earthquakes and the anomalies. The last variation at the affected well occurred abruptly in 1989, shortly before a magnitude 5.0 earthquake 54 km away. © 1993.

  15. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
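
    The consistency test sketched above scores a forecast by the joint Poisson likelihood of the observed bin counts. A minimal version follows; the full RELM procedure adds simulation-based significance levels and pairwise comparisons.

    ```python
    import math

    def poisson_log_likelihood(rates, counts):
        """Joint log-likelihood of observed earthquake counts under a forecast of
        independent Poisson rates, one per space-time-magnitude bin."""
        total = 0.0
        for lam, n in zip(rates, counts):
            # log Poisson pmf: -lam + n*log(lam) - log(n!)
            total += -lam + n * math.log(lam) - math.lgamma(n + 1)
        return total

    # A four-bin forecast scored against an observed catalog
    print(poisson_log_likelihood([0.1, 0.5, 2.0, 0.01], [0, 1, 2, 0]))
    ```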

  16. Geological evidence for Holocene earthquakes and tsunamis along the Nankai-Suruga Trough, Japan

    NASA Astrophysics Data System (ADS)

    Garrett, Ed; Fujiwara, Osamu; Garrett, Philip; Heyvaert, Vanessa M. A.; Shishikura, Masanobu; Yokoyama, Yusuke; Hubert-Ferrari, Aurélia; Brückner, Helmut; Nakamura, Atsunori; De Batist, Marc

    2016-04-01

    The Nankai-Suruga Trough, lying immediately south of Japan's densely populated and highly industrialised southern coastline, generates devastating great earthquakes (magnitude > 8). Intense shaking, crustal deformation and tsunami generation accompany these ruptures. Forecasting the hazards associated with future earthquakes along this >700 km long fault requires a comprehensive understanding of past fault behaviour. While the region benefits from a long and detailed historical record, palaeoseismology has the potential to provide a longer-term perspective and additional insights. Here, we summarise the current state of knowledge regarding geological evidence for past earthquakes and tsunamis, incorporating literature originally published in both Japanese and English. This evidence comes from a wide variety of sources, including uplifted marine terraces and biota, marine and lacustrine turbidites, liquefaction features, subsided marshes and tsunami deposits in coastal lakes and lowlands. We enhance available results with new age modelling approaches. While publications describe proposed evidence from >70 sites, only a limited number provide compelling, well-dated evidence. The best available records allow us to map the most likely rupture zones of eleven earthquakes occurring during the historical period. Our spatiotemporal compilation suggests the AD 1707 earthquake ruptured almost the full length of the subduction zone and that earthquakes in AD 1361 and 684 were predecessors of similar magnitude. Intervening earthquakes were of lesser magnitude, highlighting variability in rupture mode. Recurrence intervals for ruptures of a single seismic segment range from less than 100 to more than 450 years during the historical period. Over longer timescales, palaeoseismic evidence suggests intervals ranging from 100 to 700 years. However, these figures reflect thresholds of evidence creation and preservation as well as genuine recurrence intervals. At present, we have

  17. Brady's Geothermal Field Nodal Seismometer Earthquake Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Feigl

    90-second records of data from 238 three-component nodal seismometers deployed at Brady's geothermal field. The time window catches an earthquake arrival. Earthquake data from the USGS online catalog: Magnitude: 4.3 ml +/- 0.4; Location: 38.479 deg N, 118.366 deg W +/- 0.7 km; Depth: 9.9 km +/- 0.7 km; Date and Time: 2016-03-21 07:37:10.535 UTC.

  18. The Effect of Earthquakes on Episodic Tremor and Slip Events on the Southern Cascadia Subduction Zone

    NASA Astrophysics Data System (ADS)

    Sainvil, A. K.; Schmidt, D. A.; Nuyen, C.

    2017-12-01

    The goal of this study is to explore how slow slip events on the southern Cascadia Subduction Zone respond to nearby, offshore earthquakes by examining GPS and tremor data. At intermediate depths on the plate interface (~40 km), transient fault slip is observed in the form of Episodic Tremor and Slip (ETS) events. These ETS events occur regularly (every 10 months) and have a longer duration than normal earthquakes. Researchers have been documenting slow slip events through data obtained by continuously running GPS stations in the Pacific Northwest. Some studies have proposed that pore fluid may play a role in these ETS events by lowering the effective stress on the fault. The interaction of earthquakes and ETS can provide constraints on the strength of the fault and the level of stress needed to alter ETS behavior. Earthquakes can trigger ETS events, but the connection between these events and earthquake activity is less understood. We originally hypothesized that ETS events would be affected by earthquakes in southern Cascadia, which could result in a shift in the recurrence interval of ETS events. ETS events were cataloged using GPS time series provided by PANGA, in conjunction with tremor positions, in southern Cascadia for stations YBHB and DDSN from 1997 to 2017. We looked for evidence of change from three offshore earthquakes that occurred near the Mendocino Triple Junction, with moment magnitudes of 7.2 in 2005, 6.5 in 2010, and 6.8 in 2014. Our results showed that the recurrence interval of ETS for stations YBHB and DDSN was not altered by the three earthquake events. Future work is needed to explore whether this lack of interaction is explained by the non-optimal orientation of the receiver fault for the earthquake focal mechanisms.

  19. Investigation of Backprojection Uncertainties With M6 Earthquakes

    NASA Astrophysics Data System (ADS)

    Fan, Wenyuan; Shearer, Peter M.

    2017-10-01

    We investigate possible biasing effects of inaccurate timing corrections on teleseismic P wave backprojection imaging of large earthquake ruptures. These errors occur because empirically estimated time shifts based on aligning P wave first arrivals are exact only at the hypocenter and provide approximate corrections for other parts of the rupture. Using the Japan subduction zone as a test region, we analyze 46 M6-M7 earthquakes over a 10 year period, including many aftershocks of the 2011 M9 Tohoku earthquake, performing waveform cross correlation of their initial P wave arrivals to obtain hypocenter timing corrections to global seismic stations. We then compare backprojection images for each earthquake using its own timing corrections with those obtained using the time corrections from other earthquakes. This provides a measure of how well subevents can be resolved with backprojection of a large rupture as a function of distance from the hypocenter. Our results show that backprojection is generally very robust and that the median subevent location error is about 25 km across the entire study region (˜700 km). The backprojection coherence loss and location errors do not noticeably converge to zero even when the event pairs are very close (<20 km). This indicates that most of the timing differences are due to 3-D structure close to each of the hypocenter regions, which limits the effectiveness of attempts to refine backprojection images using aftershock calibration, at least in this region.
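
    A bare-bones beam-power backprojection of the kind being evaluated can be sketched as follows, applying per-station hypocenter timing corrections before stacking; edge wrap-around from np.roll and amplitude weighting are ignored for brevity, and all inputs are synthetic assumptions.

    ```python
    import numpy as np

    def backproject(waveforms, travel_times, corrections, dt):
        """Beam power per trial source node: shift each station trace by its
        predicted P travel time plus the empirical hypocenter correction,
        stack, and sum the squared beam.
        waveforms: (n_sta, n_samp); travel_times: (n_grid, n_sta) in s;
        corrections: (n_sta,) in s; dt: sample interval in s."""
        n_grid, n_sta = travel_times.shape
        power = np.zeros(n_grid)
        for g in range(n_grid):
            beam = np.zeros(waveforms.shape[1])
            for s in range(n_sta):
                shift = int(round((travel_times[g, s] + corrections[s]) / dt))
                beam += np.roll(waveforms[s], -shift)  # predicted arrival to t=0
            power[g] = float(np.sum(beam ** 2))
        return power
    ```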

  20. A Comparison of Geodetic and Geologic Rates Prior to Large Strike-Slip Earthquakes: A Diversity of Earthquake-Cycle Behaviors?

    NASA Astrophysics Data System (ADS)

    Dolan, James F.; Meade, Brendan J.

    2017-12-01

    Comparison of preevent geodetic and geologic rates in three large-magnitude (Mw = 7.6-7.9) strike-slip earthquakes reveals a wide range of behaviors. Specifically, geodetic rates of 26-28 mm/yr for the North Anatolian fault along the 1999 MW = 7.6 Izmit rupture are ˜40% faster than Holocene geologic rates. In contrast, geodetic rates of ˜6-8 mm/yr along the Denali fault prior to the 2002 MW = 7.9 Denali earthquake are only approximately half as fast as the latest Pleistocene-Holocene geologic rate of ˜12 mm/yr. In the third example where a sufficiently long pre-earthquake geodetic time series exists, the geodetic and geologic rates along the 2001 MW = 7.8 Kokoxili rupture on the Kunlun fault are approximately equal at ˜11 mm/yr. These results are not readily explicable with extant earthquake-cycle modeling, suggesting that they may instead be due to some combination of regional kinematic fault interactions, temporal variations in the strength of lithospheric-scale shear zones, and/or variations in local relative plate motion rate. Whatever the exact causes of these variable behaviors, these observations indicate that either the ratio of geodetic to geologic rates before an earthquake may not be diagnostic of the time to the next earthquake, as predicted by many rheologically based geodynamic models of earthquake-cycle behavior, or different behaviors characterize different fault systems in a manner that is not yet understood or predictable.

  1. Seismogeodesy for rapid earthquake and tsunami characterization

    NASA Astrophysics Data System (ADS)

    Bock, Y.

    2016-12-01

    Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimation in the near-source region that is not affected by the instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of

  2. Scaling of seismic memory with earthquake size

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel; Podobnik, Boris; Tamura, Yoshiyasu; Stanley, H. Eugene

    2012-07-01

    It has been observed that discrete earthquake events possess memory, i.e., that events occurring in a particular location are dependent on the history of that location. We conduct an analysis to see whether continuous real-time data also display a similar memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 “Great East Japan Earthquake,” one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show power-law anticorrelations while the interval series show power-law correlations. We find size dependence in earthquake autocorrelations: as the earthquake size increases, both of these correlation behaviors strengthen. We also find that the DFA scaling exponent α has no dependence on the earthquake hypocenter depth or epicentral distance.
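
    For reference, a compact implementation of the first-order DFA used above to estimate the scaling exponent α; the window sizes and the white-noise test signal are illustrative choices.

    ```python
    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: fluctuation F(n) per window size n.
        The scaling exponent alpha is the slope of log F(n) versus log n."""
        y = np.cumsum(x - np.mean(x))          # integrated profile
        F = []
        for n in scales:
            n_win = len(y) // n
            sq = []
            for w in range(n_win):
                seg = y[w * n:(w + 1) * n]
                t = np.arange(n)
                p = np.polyfit(t, seg, 1)      # local linear detrend
                sq.append(np.mean((seg - np.polyval(p, t)) ** 2))
            F.append(np.sqrt(np.mean(sq)))
        return np.array(F)

    x = np.random.randn(10000)                 # white noise: expect alpha ~ 0.5
    scales = np.array([16, 32, 64, 128, 256])
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(round(alpha, 2))
    ```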

  3. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

    An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow focus (depth < 70 km) tsunamigenic earthquakes at a regional distance is proposed in this paper. Promptness here refers to decision making as fast as 5 min after the arrival of the LR phase in the seismogram. The root mean square amplitudes of seismic phases recorded by a single 3-component station have been considered as inputs, besides location and magnitude. The trained ANN has been found to categorize 100% of the new earthquakes successfully as tsunamigenic or non-tsunamigenic. The proposed method has been corroborated by an alternate mapping technique of earthquake category estimation. The second method involves computation of focal parameters, estimation of the water volume displaced at the source, and eventually deciding the category of the earthquake. This method has been found to identify 95% of the new earthquakes successfully. Both methods have been tested using three-component broadband seismic data recorded at the PALK (Pallekele, Sri Lanka) station provided by IRIS for earthquakes originating from the Sumatra region of magnitude 6 and above. The fair agreement between the methods ensures that a prompt alert system could be developed based on the proposed method. The method would prove to be extremely useful for regions that are not adequately instrumented for azimuthal coverage.

  4. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset date of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (+-1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all >8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3, i.e. +-1 day of the target date) for >6.5R events. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with
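
    The date generation and hit counting that the FDL evaluation implies can be sketched schematically; here the candidate dates are assumed to be the seed date plus Fibonacci and Lucas numbers of days, and the "dual" series and the MFDL planetary-trigger seeding are not reproduced.

    ```python
    from datetime import date, timedelta

    def fdl_numbers(n_terms=12):
        """Fibonacci and Lucas numbers (the method's 'dual' series is omitted,
        an assumption of this sketch)."""
        fib, luc = [1, 2], [1, 3]
        while len(fib) < n_terms:
            fib.append(fib[-1] + fib[-2])
            luc.append(luc[-1] + luc[-2])
        return sorted(set(fib + luc))

    def fdl_dates(seed, n_terms=12):
        """Candidate dates: the seed date plus each FDL number of days."""
        return [seed + timedelta(days=k) for k in fdl_numbers(n_terms)]

    def hit_rate(candidates, observed, window_days=1):
        """Fraction of observed event dates within +/- window_days of a candidate."""
        hits = sum(any(abs((o - c).days) <= window_days for c in candidates)
                   for o in observed)
        return hits / len(observed) if observed else 0.0

    print(fdl_dates(date(1906, 4, 18))[:5])
    ```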

  5. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.

  6. Precursory slow-slip loaded the 2009 L'Aquila earthquake sequence

    NASA Astrophysics Data System (ADS)

    Borghi, A.; Aoudia, A.; Javed, F.; Barzaghi, R.

    2016-05-01

    Slow-slip events (SSEs) are common on subduction zone faults where large mega-earthquakes occur. We report here that one of the best-recorded moderate-size continental earthquakes, the 2009 April 6 moment magnitude (Mw) 6.3 L'Aquila (Italy) earthquake, was preceded by a Mw 5.9 SSE that originated from the decollement beneath the reactivated normal faulting system. The SSE is identified from a rigorous analysis of continuous GPS stations; it occurred on 12 February and lasted for almost two weeks. It coincided with a burst in foreshock activity, with small repeating earthquakes migrating towards the main-shock hypocentre, as well as with a change in the elastic properties of rocks in the fault region. The SSE caused substantial stress loading at the seismogenic depths where the magnitude 4.0 foreshock and the Mw 6.3 main shock nucleated. This stress loading is also spatially correlated with the lateral extent of the aftershock sequence.

  7. Precise relative locations for earthquakes in the northeast Pacific region

    DOE PAGES

    Cleveland, K. Michael; VanDeMark, Thomas F.; Ammon, Charles J.

    2015-10-09

    We report that double-difference methods applied to cross-correlation measured Rayleigh wave time shifts are an effective tool to improve epicentroid locations and relative origin time shifts in remote regions. We apply these methods to seismicity offshore of southwestern Canada and the U.S. Pacific Northwest, occurring along the boundaries of the Pacific and Juan de Fuca (including the Explorer Plate and Gorda Block) Plates. The Blanco, Mendocino, Revere-Dellwood, Nootka, and Sovanco fracture zones host the majority of this seismicity, largely consisting of strike-slip earthquakes. The Explorer, Juan de Fuca, and Gorda spreading ridges join these fracture zones and host normal faulting earthquakes. Our results show that at least the moderate-magnitude activity clusters along fault strike, supporting suggestions of large variations in seismic coupling along oceanic transform faults. Our improved relative locations corroborate earlier interpretations of the internal deformation in the Explorer and Gorda Plates. North of the Explorer Plate, improved locations support models that propose northern extension of the Revere-Dellwood fault. Relocations also support interpretations that favor multiple parallel active faults along the Blanco Transform Fault Zone. Seismicity of the western half of the Blanco appears more scattered and less collinear than the eastern half, possibly related to fault maturity. We use azimuthal variations in the Rayleigh wave cross-correlation amplitude to detect and model rupture directivity for a moderate size earthquake along the eastern Blanco Fault. Lastly, the observations constrain the seismogenic zone geometry and suggest a relatively narrow seismogenic zone width of 2 to 4 km.

  8. An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.

    2017-12-01

    The Central Weather Bureau of Taiwan (CWB) operates the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). The system has been used to report EEW messages to the general public since 2016 through text messages to mobile phones and through television programs. For inland earthquakes the system is able to provide accurate and fast warnings: the average epicenter error is about 5 km and the processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the manually determined epicenter. The processing time is defined as the time difference between the earthquake origin time and the time the system issues the warning. The CWB seismic network consists of about 200 seismic stations. In some areas of Taiwan the distance between neighboring seismic stations is about 10 km. This means that when an earthquake occurs, the seismic P wave can reach 6 stations, the minimum number required by the EEW system, within 20 km. If the latency of data transmission is about 1 sec, the P-wave velocity is about 6 km per sec, and a 3-sec time window is taken to estimate the earthquake magnitude, then the processing time should be around 8 sec. In practice, however, the average processing time is larger than this figure. Because outliers among the P-wave onset picks may exist at the beginning of an earthquake, the Geiger's method used in the EEW system for earthquake location is not stable, and it usually takes more time to wait for a sufficient number of good picks. In this study we used a grid search method to improve the earthquake location estimates. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by simulating historical earthquakes that occurred in Taiwan. The results show that the processing time can be reduced and that the location accuracy is acceptable for EEW purposes.
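
    A simplified grid-search locator of the sort contrasted here with Geiger's method: scan trial epicenters, solve for the origin time in closed form at each node, and keep the minimum-misfit node. This is a uniform-velocity toy on a flat plane, not the MAXEL algorithm itself.

    ```python
    import numpy as np

    def grid_search_locate(stations, picks, v_p=6.0, step=0.05, extent=6.0):
        """Grid search for a 2-D epicenter and origin time that minimize the L2
        misfit between observed P picks (s) and predicted travel times, using a
        uniform velocity v_p (km/s). stations: (n, 2) coordinates in km."""
        stations = np.asarray(stations, dtype=float)
        picks = np.asarray(picks, dtype=float)
        x0, y0 = stations.mean(axis=0)
        best, best_misfit = None, np.inf
        for x in np.arange(x0 - extent, x0 + extent, step):
            for y in np.arange(y0 - extent, y0 + extent, step):
                tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v_p
                t0 = np.median(picks - tt)          # robust origin-time estimate
                misfit = np.sum((picks - (t0 + tt)) ** 2)
                if misfit < best_misfit:
                    best, best_misfit = (x, y, t0), misfit
        return best, best_misfit

    stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 12.0), (8.0, 9.0)]
    picks = [np.hypot(4.0 - x, 5.0 - y) / 6.0 for x, y in stations]  # source at (4, 5)
    print(grid_search_locate(stations, picks))      # epicenter near (4, 5), t0 near 0
    ```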

  9. The California Earthquake Advisory Plan: A history

    USGS Publications Warehouse

    Roeloffs, Evelyn A.; Goltz, James D.

    2017-01-01

    Since 1985, the California Office of Emergency Services (Cal OES) has issued advisory statements to local jurisdictions and the public following seismic activity that scientists on the California Earthquake Prediction Evaluation Council view as indicating elevated probability of a larger earthquake in the same area during the next several days. These advisory statements are motivated by statistical studies showing that about 5% of moderate earthquakes in California are followed by larger events within a 10-km, five-day space-time window (Jones, 1985; Agnew and Jones, 1991; Reasenberg and Jones, 1994). Cal OES issued four earthquake advisories from 1985 to 1989. In October, 1990, the California Earthquake Advisory Plan formalized this practice, and six Cal OES Advisories have been issued since then. This article describes that protocol’s scientific basis and evolution.

  10. PAGER - Rapid Assessment of an Earthquake's Impact

    USGS Publications Warehouse

    Earle, Paul S.; Wald, David J.

    2007-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system to rapidly assess the number of people and regions exposed to severe shaking by an earthquake, and to inform emergency responders, government agencies, and the media of the scope of the potential disaster. PAGER monitors the U.S. Geological Survey's near real-time U.S. and global earthquake detections and automatically identifies events that are of societal importance, well in advance of ground-truth or news accounts.

  11. Moment-tensor solutions for the 24 November 1987 Superstition Hills, California, earthquakes

    USGS Publications Warehouse

    Sipkin, S.A.

    1989-01-01

    The teleseismic long-period waveforms recorded by the Global Digital Seismograph Network from the two largest Superstition Hills earthquakes are inverted using an algorithm based on optimal filter theory. These solutions differ slightly from those published in the Preliminary Determination of Epicenters Monthly Listing because a somewhat different, improved data set was used in the inversions and a time-dependent moment-tensor algorithm was used to investigate the complexity of the main shock. The foreshock (origin time 01:54:14.5, mb 5.7, Ms 6.2) had a scalar moment of 2.3 × 10^25 dyne-cm, a depth of 8 km, and a mechanism of strike 217°, dip 79°, rake 4°. The main shock (origin time 13:15:56.4, mb 6.0, Ms 6.6) was a complex event, consisting of at least two subevents, with a combined scalar moment of 1.0 × 10^26 dyne-cm, a depth of 10 km, and a mechanism of strike 303°, dip 89°, rake -180°.
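
    To relate the scalar moments above to magnitude, the standard Hanks-Kanamori relation can be applied; this sketch simply encodes that textbook formula (the abstract itself does not state a moment-magnitude convention, so this is offered only for orientation):

    ```python
    import math

    def moment_magnitude(m0_dyne_cm):
        """Hanks-Kanamori relation: Mw = (2/3) * log10(M0) - 10.7, M0 in dyne-cm."""
        return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

    print(round(moment_magnitude(2.3e25), 1))  # ~6.2 (foreshock)
    print(round(moment_magnitude(1.0e26), 1))  # ~6.6 (main shock)
    ```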

  12. Flash-sourcing or the rapid detection and characterisation of earthquake effects through clickstream data analysis

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Mazet-Roux, G.; Roussel, F.; Frobert, L.

    2011-12-01

    Rapid characterisation of earthquake effects is essential for a timely and appropriate response in favour of victims and/or eyewitnesses. In the case of damaging earthquakes, any field observations that can fill the information gap characterising their immediate aftermath can contribute to more efficient rescue operations. This paper presents the latest developments of a method called "flash-sourcing" that addresses these issues. It relies on eyewitnesses, the first informed and the first concerned by an earthquake occurrence. More precisely, their use of the EMSC earthquake information website (www.emsc-csem.org) is analysed in real time to map the area where the earthquake was felt and to identify, at least under certain circumstances, zones of widespread damage. The approach is based on the natural and immediate convergence of eyewitnesses on the website, who rush to the Internet to investigate the cause of the shaking they just felt, causing traffic to surge. The area where an earthquake was felt is mapped simply by locating the Internet Protocol (IP) addresses active during these traffic surges. In addition, the presence of eyewitnesses browsing the website within minutes of an earthquake occurrence excludes the possibility of widespread damage in the localities they originate from: in case of severe damage, the networks would be down. The validity of the information derived from this clickstream analysis is confirmed by comparisons with EMS98 macroseismic maps obtained from online questionnaires. The name of this approach, "flash-sourcing", is a combination of "flash crowd" and "crowdsourcing", intended to reflect the rapidity of data collation from the public. For computer scientists, a flash crowd denotes a traffic surge on a website. Crowdsourcing means work being done by a "crowd" of people; it also characterises Internet and mobile applications collecting information from the public, such as online macroseismic questionnaires. Like crowdsourcing techniques, flash-sourcing is a
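
    A minimal sketch of the surge-detection idea, assuming web hits have already been geolocated by IP address; the function, city names, baselines, and threshold below are illustrative inventions, not the EMSC implementation:

    ```python
    from collections import Counter

    def felt_map_from_hits(hits, baseline_per_city, surge_factor=5.0):
        """Flag cities whose hit counts exceed a multiple of their usual baseline.

        `hits` is a list of (city, ip) pairs geolocated from web-server logs;
        all names and thresholds here are hypothetical."""
        counts = Counter(city for city, _ip in hits)
        return {city: n for city, n in counts.items()
                if n >= surge_factor * baseline_per_city.get(city, 1.0)}

    surge = felt_map_from_hits([("Lyon", "1.2.3.4"), ("Lyon", "1.2.3.5"),
                                ("Nice", "5.6.7.8")], {"Lyon": 0.2, "Nice": 1.0})
    print(surge)  # {'Lyon': 2} -> Lyon plausibly lies inside the felt area
    ```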

  13. Imaging the Fine-Scale Structure of the San Andreas Fault in the Northern Gabilan Range with Explosion and Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Xin, H.; Thurber, C. H.; Zhang, H.; Wang, F.

    2014-12-01

    A number of geophysical studies have been carried out along the San Andreas Fault (SAF) in the Northern Gabilan Range (NGR) with the purpose of characterizing the fault zone structure in detail. Previous seismic research has revealed the complex structure of the crustal volume in the NGR region in two dimensions (Thurber et al., 1996, 1997), and there has been some work on the three-dimensional (3D) structure at a coarser scale (Lin and Roecker, 1997). In our study we use earthquake body-wave arrival times and differential times (P and S) and explosion arrival times (P only) to image the 3D P- and S-wave velocity structure of the upper crust along the SAF in the NGR using double-difference (DD) tomography. The earthquake and explosion data types have complementary strengths: the earthquake data have good resolution at depth and resolve both Vp and Vs structure, although only where there are sufficient seismic rays between hypocenters and stations, whereas the explosions contribute very good near-surface resolution, but for P waves only. The original dataset analyzed by Thurber et al. (1996, 1997) included data from 77 local earthquakes and 8 explosions. We enlarge the dataset with 114 more earthquakes that occurred in the study area, obtain improved S-wave picks using an automated picker, and include absolute and cross-correlation differential times. The inversion code we use is the algorithm tomoDD (Zhang and Thurber, 2003). We assess how the P and S velocity models and earthquake locations vary as we alter the inversion parameters and the inversion grid. The new inversion results clearly show the fine-scale structure of the SAF at depth in 3D, sharpening the image of the velocity contrast from the southwest side to the northeast side.
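
    At the core of any double-difference method is the differential travel-time residual between a pair of events observed at a common station; the following sketch (names ours) shows the quantity that, in the standard DD formulation, inversions such as tomoDD drive toward zero:

    ```python
    def dd_residual(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
        """Double-difference residual for events i, j at one station:
        dr = (t_i^obs - t_j^obs) - (t_i^calc - t_j^calc).
        Observed differences may come from picks or waveform cross-correlation."""
        return (t_obs_i - t_obs_j) - (t_calc_i - t_calc_j)

    # Example: a 0.06 s observed difference against a 0.10 s predicted one
    print(dd_residual(12.34, 12.28, 12.40, 12.30))  # -0.04 s
    ```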

  14. Earthquakes for Kids

    MedlinePlus

    ... across a fault to learn about past earthquakes. Science Fair Projects A GPS instrument measures slow movements of the ground. Become an Earthquake Scientist Cool Earthquake Facts Today in Earthquake History A scientist stands in ...

  15. Earthquake location in island arcs

    USGS Publications Warehouse

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  16. Leadership in a Time of Adversity: A Story from the New Zealand Earthquake

    ERIC Educational Resources Information Center

    Hayes, Juliette

    2011-01-01

    At 12.51 p.m. on Tuesday, 22 February, Christchurch and the Canterbury region were hit by a magnitude 6.4 earthquake. While not as strong as the 7.1 magnitude earthquake experienced in September, this one was much more violent in its intensity and also occurred in the middle of a busy working and school day. A hundred and eighty people died in…

  17. Scale-invariant structure of energy fluctuations in real earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Chang, Zhe; Wang, Huanyu; Lu, Hong

    2017-11-01

    Earthquakes are complex phenomena associated with complicated spatiotemporal correlations, and they are generally characterized by two power laws: the Gutenberg-Richter (GR) and the Omori-Utsu laws. However, an important challenge has been to explain two apparently contrasting features: the GR and Omori-Utsu laws are scale-invariant and unaffected by energy or time scales, whereas earthquakes occasionally exhibit a characteristic energy or time scale, such as with asperity events. In this paper, three high-quality earthquake datasets were used to calculate the earthquake energy fluctuations at various spatiotemporal scales, and the results reveal correlations between seismic events regardless of their critical or characteristic features. The probability density functions (PDFs) of the fluctuations exhibit evidence of another scaling, behaving as a q-Gaussian rather than a simple random (Gaussian) process. The scaling behaviors are observed for scales spanning three orders of magnitude. Considering the spatial heterogeneities in a real earthquake fault, we propose an inhomogeneous Olami-Feder-Christensen (OFC) model to describe the statistical properties of real earthquakes. The numerical simulations show that the inhomogeneous OFC model shares the same statistical properties as real earthquakes.
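
    For reference, a q-Gaussian generalizes the Gaussian through the Tsallis q-exponential; the sketch below (parameter values illustrative, not the paper's fits) shows the functional form referred to above, which recovers a Gaussian as q -> 1 and develops power-law tails for q > 1:

    ```python
    import numpy as np

    def q_gaussian(x, q=1.5, beta=1.0):
        """Unnormalized q-Gaussian: [1 + (q-1)*beta*x^2]^(1/(1-q));
        the positive-part clip matters only for q < 1."""
        base = np.maximum(1.0 - (1.0 - q) * beta * x**2, 0.0)
        return base ** (1.0 / (1.0 - q))

    x = np.linspace(-5.0, 5.0, 11)
    print(q_gaussian(x, q=1.5))  # heavier tails than exp(-beta*x^2)
    ```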

  18. Measuring the effectiveness of earthquake forecasting in insurance strategies

    NASA Astrophysics Data System (ADS)

    Mignan, A.; Muir-Wood, R.

    2009-04-01

    Given the difficulty of judging whether the skill of a particular methodology of earthquake forecasts is offset by the inevitable false alarms and missed predictions, it is important to find a means to weigh the successes and failures according to a common currency. Rather than judge subjectively the relative costs and benefits of predictions, we develop a simple method to determine if the use of earthquake forecasts can increase the profitability of active financial risk management strategies employed in standard insurance procedures. Three types of risk management transactions are employed: (1) insurance underwriting, (2) reinsurance purchasing and (3) investment in CAT bonds. For each case premiums are collected based on modelled technical risk costs and losses are modelled for the portfolio in force at the time of the earthquake. A set of predetermined actions follow from the announcement of any change in earthquake hazard, so that, for each earthquake forecaster, the financial performance of an active risk management strategy can be compared with the equivalent passive strategy in which no notice is taken of earthquake forecasts. Overall performance can be tracked through time to determine which strategy gives the best long term financial performance. This will be determined by whether the skill in forecasting the location and timing of a significant earthquake (where loss is avoided) is outweighed by false predictions (when no premium is collected). This methodology is to be tested in California, where catastrophe modeling is reasonably mature and where a number of researchers issue earthquake forecasts.
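
    The active-versus-passive comparison can be made concrete with a toy ledger; everything below (the event tuples, premiums, and losses) is invented for illustration and is not the authors' model:

    ```python
    def cumulative_profit(events, active=True):
        """Toy version of the comparison described above. Each event is
        (premium, loss_if_quake, forecast_alarm, quake_occurred); the active
        strategy stops underwriting whenever a forecast alarm is in effect."""
        profit = 0.0
        for premium, loss, alarm, quake in events:
            writing = not (active and alarm)
            if writing:
                profit += premium - (loss if quake else 0.0)
        return profit

    events = [(1.0, 0.0, False, False),
              (1.0, 20.0, True, True),    # successful forecast: loss avoided
              (1.0, 0.0, True, False)]    # false alarm: premium forgone
    print(cumulative_profit(events, active=True),    # 1.0
          cumulative_profit(events, active=False))   # -17.0
    ```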

  19. 88 hours: the U.S. Geological Survey National Earthquake Information Center response to the March 11, 2011 Mw 9.0 Tohoku earthquake

    USGS Publications Warehouse

    Wald, David J.; Hayes, Gavin P.; Benz, Harley M.; Earle, Paul S.; Briggs, Richard W.

    2011-01-01

    The M 9.0 11 March 2011 Tohoku, Japan, earthquake and associated tsunami near the east coast of the island of Honshu caused tens of thousands of deaths and potentially over one trillion dollars in damage, resulting in one of the worst natural disasters ever recorded. The U.S. Geological Survey National Earthquake Information Center (USGS NEIC), through its responsibility to respond to all significant global earthquakes as part of the National Earthquake Hazards Reduction Program, quickly produced and distributed a suite of earthquake information products to inform emergency responders, the public, the media, and the academic community of the earthquake's potential impact and to provide scientific background for the interpretation of the event's tectonic context and potential for future hazard. Here we present a timeline of the NEIC response to this devastating earthquake in the context of rapidly evolving information emanating from the global earthquake-response community. The timeline includes both internal and publicly distributed products, the relative timing of which highlights the inherent tradeoffs between the requirement to provide timely alerts and the necessity for accurate, authoritative information. The timeline also documents the iterative and evolutionary nature of the standard products produced by the NEIC and includes a behind-the-scenes look at the decisions, data, and analysis tools that drive our rapid product distribution.

  20. Eyewitness account of the 1931 great earthquake at Hawke’s Bay, New Zealand

    USGS Publications Warehouse

    Spall, H.

    1984-01-01

    No part of New Zealand is far from a known earthquake origin. The magnitude 7.9 earthquake at Hawke's Bay, North Island, on February 3, 1931, was the most serious event recorded in New Zealand history. It was responsible for 256 deaths. The Modified Mercalli intensity reached XI in Napier, a city of 35,000 people, and hence caused considerable destruction to property.

  1. The limits of earthquake early warning: Timeliness of ground motion estimates

    USGS Publications Warehouse

    Minson, Sarah E.; Meier, Men-Andrin; Baltay, Annemarie S.; Hanks, Thomas C.; Cochran, Elizabeth S.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions around the world, with the goal of providing enough warning of incoming ground shaking to allow people and automated systems to take protective actions to mitigate losses. However, the question of how much warning time is physically possible for specified levels of ground motion has not been addressed. We consider a zero-latency EEW system to determine possible warning times a user could receive in an ideal case. In this case, the only limitation on warning time is the time required for the earthquake to evolve and the time for strong ground motion to arrive at a user’s location. We find that users who wish to be alerted at lower ground motion thresholds will receive more robust warnings with longer average warning times than users who receive warnings for higher ground motion thresholds. EEW systems have the greatest potential benefit for users willing to take action at relatively low ground motion thresholds, whereas users who set relatively high thresholds for taking action are less likely to receive timely and actionable information.
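
    A minimal sketch of the zero-latency bound discussed above, assuming the warning time is simply the S-wave arrival time at the user minus the time the rupture needs to grow large enough to justify an alert; the S-wave speed and growth time here are illustrative values, not the paper's:

    ```python
    def warning_time_s(user_distance_km, rupture_growth_s, vs_km_s=3.5):
        """Zero-latency upper bound on warning time: S-wave arrival at the
        user minus the time for the earthquake to evolve to alert level."""
        return user_distance_km / vs_km_s - rupture_growth_s

    print(round(warning_time_s(100.0, rupture_growth_s=10.0), 1))  # ~18.6 s
    ```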

  2. The limits of earthquake early warning: Timeliness of ground motion estimates

    PubMed Central

    Hanks, Thomas C.

    2018-01-01

    The basic physics of earthquakes is such that strong ground motion cannot be expected from an earthquake unless the earthquake itself is very close or has grown to be very large. We use simple seismological relationships to calculate the minimum time that must elapse before such ground motion can be expected at a distance from the earthquake, assuming that the earthquake magnitude is not predictable. Earthquake early warning (EEW) systems are in operation or development for many regions around the world, with the goal of providing enough warning of incoming ground shaking to allow people and automated systems to take protective actions to mitigate losses. However, the question of how much warning time is physically possible for specified levels of ground motion has not been addressed. We consider a zero-latency EEW system to determine possible warning times a user could receive in an ideal case. In this case, the only limitation on warning time is the time required for the earthquake to evolve and the time for strong ground motion to arrive at a user’s location. We find that users who wish to be alerted at lower ground motion thresholds will receive more robust warnings with longer average warning times than users who receive warnings for higher ground motion thresholds. EEW systems have the greatest potential benefit for users willing to take action at relatively low ground motion thresholds, whereas users who set relatively high thresholds for taking action are less likely to receive timely and actionable information. PMID:29750190

  3. Izmit, Turkey 1999 Earthquake Interferogram

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image is an interferogram that was created using pairs of images taken by Synthetic Aperture Radar (SAR). The images, acquired at two different times, have been combined to measure surface deformation or changes that may have occurred during the time between data acquisition. The images were collected by the European Space Agency's Remote Sensing satellite (ERS-2) on 13 August 1999 and 17 September 1999 and were combined to produce these image maps of the apparent surface deformation, or changes, during and after the 17 August 1999 Izmit, Turkey earthquake. This magnitude 7.6 earthquake was the largest in 60 years in Turkey and caused extensive damage and loss of life. Each of the color contours of the interferogram represents 28 mm (1.1 inches) of motion towards the satellite, or about 70 mm (2.8 inches) of horizontal motion. White areas are outside the SAR image or are water (seas and lakes). The North Anatolian Fault that broke during the Izmit earthquake moved more than 2.5 meters (8.1 feet) to produce the pattern measured by the interferogram. Thin red lines show the locations of fault breaks mapped on the surface. The SAR interferogram shows that the deformation and fault slip extended west of the surface faults, underneath the Gulf of Izmit. Thick black lines mark the fault rupture inferred from the SAR data. Scientists are using the SAR interferometry along with other data collected on the ground to estimate the pattern of slip that occurred during the Izmit earthquake. This is then used to improve computer models that predict how this deformation transferred stress to other faults and to the continuation of the North Anatolian Fault, which extends to the west past the large city of Istanbul. These models show that the Izmit earthquake further increased the already high probability of a major earthquake near Istanbul.
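
    The 28 mm contour interval follows from the ERS C-band radar wavelength (~56.6 mm): one fringe corresponds to half a wavelength of line-of-sight motion. A sketch of the conversion, with the ~23 degree ERS incidence angle taken as an assumed value:

    ```python
    import math

    WAVELENGTH_MM = 56.6  # ERS C-band radar wavelength

    def los_displacement_mm(n_fringes):
        """Line-of-sight motion: half a wavelength per interferometric fringe."""
        return n_fringes * WAVELENGTH_MM / 2.0

    def horizontal_equivalent_mm(los_mm, incidence_deg=23.0):
        """Purely horizontal motion projected onto the line of sight."""
        return los_mm / math.sin(math.radians(incidence_deg))

    los = los_displacement_mm(1)
    print(round(los), round(horizontal_equivalent_mm(los)))  # ~28 mm LOS, ~72 mm horizontal
    ```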

  4. Brady's Geothermal Field DAS Earthquake Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Feigl

    The submitted data correspond to the vibration caused by a M 4.3 earthquake, captured by the DAS horizontal and vertical arrays during the PoroTomo Experiment. Earthquake information: M 4.3, 23 km ESE of Hawthorne, Nevada; time: 2016-03-21 07:37:10 (UTC); location: 38.479 N, 118.366 W; depth: 9.9 km.

  5. Development of regional earthquake early warning and structural health monitoring system and real-time ground motion forecasting using front-site waveform data (Invited)

    NASA Astrophysics Data System (ADS)

    Motosaka, M.

    2009-12-01

    This paper presents, first, the development of an integrated regional earthquake early warning (EEW) system with an on-line structural health monitoring (SHM) function in Miyagi prefecture, Japan. The system makes it possible to provide more accurate, reliable, and immediate earthquake information to society by combining the national (JMA/NIED) EEW system with advanced real-time communication technology. The author has planned to install the EEW/SHM system in public buildings around Sendai, a city of one million people in north-eastern Japan. The system has so far been implemented in two buildings: one in Sendai, and the other in Oshika, a front site on the Pacific Ocean coast for the anticipated Miyagi-ken Oki earthquake. The data from the front site and the on-site buildings are processed by the analysis system installed at the analysis center of the Disaster Control Research Center, Tohoku University. The real-time earthquake information from JMA is also received at the analysis center. The utilization of the integrated EEW/SHM system is addressed together with future perspectives. Examples of the obtained data are also described, including the amplitude-dependent dynamic characteristics of the building in Sendai before, during, and after the 2008/6/14 Iwate-Miyagi Nairiku Earthquake, together with the historical change of its dynamic characteristics over 40 years. Second, this paper presents an advanced methodology based on Artificial Neural Networks (ANN) for forward forecasting of ground motion parameters, not only PGA and PGV but also spectral information, before S-wave arrival, using the initial part of the P-waveform at a front site. The estimated ground motion information can be used as a warning alarm for earthquake damage reduction. The Fourier Amplitude Spectra (FAS) estimated with high accuracy before strong shaking can be used for advanced engineering applications, e.g., feed-forward structural control of a building of interest. The validity and applicability of the method

  6. Normal Fault Type Earthquakes Off Fukushima Region - Comparison of the 1938 Events and Recent Earthquakes -

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Satake, K.

    2017-12-01

    Off the Fukushima region, Mjma 7.4 (event A) and 6.9 (event B) events occurred on November 6, 1938, following thrust-fault-type earthquakes of Mjma 7.5 and 7.3 on the previous day. These earthquakes were estimated to be normal-fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to have occurred off Fukushima since 1938. Except for the two 1938 events, normal-fault earthquakes did not occur there until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal-fault earthquakes occurring off the Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014, and 2016 events because there were only a few observations at the same stations. The teleseismic body-wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 × 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, and it is compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A estimated by Abe (1977) and 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B are different and that more detailed examination is needed. The normal fault type earthquake seems to occur following the

  7. The Collapse of Ancient Societies by Great Earthquakes

    NASA Astrophysics Data System (ADS)

    Nur, A. M.

    2001-12-01

    Although earthquakes have often been associated with inexplicable past societal disasters, their impact has been thought to be only secondary, for two reasons: inconclusive archaeological interpretation of excavated destruction, and misconceptions about patterns of seismicity. However, new and revised archaeological evidence and a better understanding of the irregularities of the time-space patterns of large earthquakes together suggest that earthquakes (and associated tsunamis) have probably been responsible for some of the great and enigmatic catastrophes of ancient times. The most relevant aspect of seismicity is the episodic time-space clustering of earthquakes, as during the eastern Mediterranean seismic crisis in the second half of the 4th century AD and the seismicity of the north Anatolian fault during our century. During these earthquake clusters, a plate boundary ruptures in a series of large earthquakes that occur over a period of only 50 to 100 years or so, followed by hundreds or even thousands of years of relative inactivity. The extent of the destruction by such rare but powerful earthquake clusters must have been far greater than that of similar modern events, owing to poorer construction and the lack of any earthquake preparedness in ancient times. The destruction by very big earthquakes also made ancient societies especially vulnerable because so much of the wealth and power was concentrated in, and protected by, so few. Thus the breaching by an earthquake of the elite's fortified cities must often have led to attacks by (1) external enemies during ongoing wars (e.g., Joshua and Jericho, the Arab attack on Herod's Jerusalem in 31 BCE); (2) neighbors during ongoing conflicts (e.g., Mycenae's fall ca. 1200 BCE, Saul's battle at Michmash ca. 1020 BCE); and (3) uprisings of poor and often enslaved indigenous populations (e.g., Sparta and the Helots ca. 465 BCE, Hattusas ca. 1200 BCE?, Teotihuacan ca. 700 AD). When the devastation was by a local earthquake, during a modest conflict, damage was

  8. Global observation of Omori-law decay in the rate of triggered earthquakes

    NASA Astrophysics Data System (ADS)

    Parsons, T.

    2001-12-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
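
    For reference, the modified Omori law invoked above has the form n(t) = K / (t + c)^p; the sketch below evaluates it with illustrative parameter values (not fits from this study) to show the decade-scale decay:

    ```python
    def omori_rate(t_days, K=100.0, c=0.05, p=1.0):
        """Modified Omori law: rate of triggered events at time t (days)
        after a main shock; K, c, p here are illustrative only."""
        return K / (t_days + c) ** p

    for t in (1, 10, 100, 3000):        # 3000 days is ~8 yr, inside the 7-11 yr window
        print(t, round(omori_rate(t), 3))
    ```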

  9. Distributed Fiber Optic Sensors for Earthquake Detection and Early Warning

    NASA Astrophysics Data System (ADS)

    Karrenbach, M. H.; Cole, S.

    2016-12-01

    Fiber optic cables placed along pipelines, roads, or other infrastructure provide dense sampling of passing seismic wavefields. Laser interrogation units illuminate the fiber over its entire length, and strain at desired points along the fiber can be determined from the reflected signal. Single-mode optical fibers up to 50 km in length can provide a distributed acoustic sensing (DAS) system in which the acoustic bandwidth of each channel is limited only by the round-trip travel time over the length of the cable (0.0005 s for a 50 km cable). Using a 10 m spatial resolution results in 4000 channels sampled at 2.5 kHz spanning a 40 km-long fiber deployed along a pipeline. The inline strain field is averaged along the fiber over a 10 m section of the cable at each desired spatial sample, creating a virtual sensor location. Typically, a dynamic strain sensitivity of sub-nanometers within each gauge along the entire length of the fiber can be achieved. This sensitivity corresponds to a particle-motion figure of approximately -90 dB (m s^-2 Hz^-1/2). Such a fiber optic sensor is not as sensitive as the long-period seismometers used in earthquake networks, but given the large number of channels, small- to medium-sized earthquakes can be detected, depending on distance from the array, and can be located precisely through arrival-time inversions. We show several examples of earthquake recordings using distributed fiber optic arrays that were originally deployed for other purposes. A 480 km long section of a pipeline in Turkey was actively monitored with a DAS fiber optic system for activities in the immediate vicinity of the pipeline. The densely spaced sensor array along the pipeline detected earthquakes in the magnitude 3.6-7.2 range, centered near Van, Turkey. Second, a fiber optic system located along a rail line near the Salton Sea in California was used to create a smaller-scale fiber optic sensor array, on which earthquakes with magnitudes 2.2-2.7 were recorded from epicenters
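
    The channel count and sampling rate quoted above follow from simple fiber arithmetic: the interrogator must wait one round trip between laser pulses. A sketch, assuming a light speed in fiber of about 2e5 km/s (roughly two-thirds of c):

    ```python
    C_FIBER_KM_S = 2.0e5  # approximate light speed in fiber (assumed)

    def max_pulse_rate_hz(fiber_length_km):
        """Per-channel sampling rate is bounded by 1 / (round-trip time)."""
        return 1.0 / (2.0 * fiber_length_km / C_FIBER_KM_S)

    def n_channels(fiber_length_km, gauge_m=10.0):
        """Number of virtual sensors at a given spatial sampling interval."""
        return int(fiber_length_km * 1000.0 / gauge_m)

    print(max_pulse_rate_hz(40.0), n_channels(40.0))  # 2500.0 Hz, 4000 channels
    ```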

  10. Adaptive vibration control of structures under earthquakes

    NASA Astrophysics Data System (ADS)

    Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung

    2017-04-01

    techniques for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
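
    Generalized predictive control pairs an online parameter estimator with a receding-horizon controller. The sketch below shows only the identification half: a generic recursive-least-squares update applied to a toy second-order structural model. This is a textbook update under invented numbers, not the paper's formulation:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.98):
        """One recursive-least-squares step with forgetting factor lam."""
        phi = phi.reshape(-1, 1)
        err = y - float(phi.T @ theta)                 # one-step prediction error
        k = (P @ phi) / (lam + float(phi.T @ P @ phi)) # gain vector
        theta = theta + k.ravel() * err
        P = (P - k @ phi.T @ P) / lam
        return theta, P

    # Identify y[t] = 1.5*y[t-1] - 0.7*y[t-2] + u[t-1] online (toy structure model)
    rng = np.random.default_rng(0)
    theta, P = np.zeros(3), np.eye(3) * 100.0
    y, u = np.zeros(200), rng.standard_normal(200)
    for t in range(2, 200):
        y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + u[t-1]
        theta, P = rls_update(theta, P, np.array([y[t-1], y[t-2], u[t-1]]), y[t])
    print(np.round(theta, 2))  # -> approximately [1.5, -0.7, 1.0]
    ```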

  11. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps with observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
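
    A minimal sketch in the spirit of (though not identical to) the paper's second method: each simulated event's weight is spread over the whole grid with a power-law decay in epicentral distance. The kernel shape and the parameters d0 and q are our illustrative choices:

    ```python
    import numpy as np

    def smoothed_rate_map(event_xy, grid_xy, d0=5.0, q=1.5):
        """Distribute each event's seismicity over the grid with a power-law
        decay in epicentral distance, then normalize to a probability map."""
        rates = np.zeros(len(grid_xy))
        for ex, ey in event_xy:
            d = np.hypot(grid_xy[:, 0] - ex, grid_xy[:, 1] - ey)
            rates += (1.0 + d / d0) ** (-q)
        return rates / rates.sum()

    grid = np.array([[x, y] for x in range(0, 100, 10)
                            for y in range(0, 100, 10)], dtype=float)
    print(smoothed_rate_map([(50.0, 50.0)], grid).max())  # peak near the event
    ```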

  12. Urban Earthquakes - Reducing Building Collapse Through Education

    NASA Astrophysics Data System (ADS)

    Bilham, R.

    2004-12-01

    Fatalities from earthquakes rose from roughly 6,000 to 9,000 per year in the past decade, yet the ratio of earthquake fatalities to instantaneous population continues to fall. Since 1950 the ratio has declined worldwide by a factor of three, but in some countries it has changed little. In Iran, for example, 1 in 3,000 people can expect to die in an earthquake, a proportion that has not changed significantly since 1890. Fatalities from earthquakes remain high in those countries that have traditionally suffered frequent large earthquakes (Turkey, Iran, Japan, and China), suggesting that the exposure time of recently increased urban populations in other countries may be too short to have interacted with earthquakes with long recurrence intervals. This, in turn, suggests that disasters of unprecedented size (more than 1 million fatalities) will occur when future large earthquakes strike close to megacities. However, population growth is most rapid in cities of fewer than 1 million people in developing nations, where the financial ability to implement earthquake-resistant construction methods is limited. Given that structural collapse can often be traced to ignorance of the forces at work in an earthquake, the future collapse of buildings presently under construction could be much reduced were contractors, builders, and occupants educated in the principles of earthquake-resistant assembly. Education of builders who are tempted to cut assembly costs is likely to be more cost-effective than material aid.

  13. Geodetic slip rate for the eastern California shear zone and the recurrence time of Mojave desert earthquakes

    USGS Publications Warehouse

    Sauber, J.; Thatcher, W.; Solomon, S.C.; Lisowski, M.

    1994-01-01

    Where the San Andreas fault passes along the southwestern margin of the Mojave desert, it exhibits a large change in trend, and the deformation associated with the Pacific/North American plate boundary is distributed broadly over a complex shear zone. The importance of understanding the partitioning of strain across this region, especially to the east of the Mojave segment of the San Andreas in a region known as the eastern California shear zone (ECSZ), was highlighted by the occurrence (on 28 June 1992) of the magnitude 7.3 Landers earthquake in this zone. Here we use geodetic observations in the central Mojave desert to obtain new estimates for the rate and distribution of strain across a segment of the ECSZ, and to determine a coseismic strain drop of ~770 μrad for the Landers earthquake. From these results we infer a strain energy recharge time of 3,500-5,000 yr for a Landers-type earthquake and a slip rate of ~12 mm yr^-1 across the faults of the central Mojave. The latter estimate implies that a greater fraction of plate motion than heretofore inferred from geodetic data is accommodated across the ECSZ.
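
    The recharge-time inference is essentially a one-line division: coseismic strain drop over interseismic strain rate. In the sketch below the strain rate is approximated as the slip rate spread across an assumed shear-zone width; that width is our illustrative choice, picked only to land inside the published range:

    ```python
    def recharge_time_yr(strain_drop=770e-6, slip_rate_mm_yr=12.0,
                         zone_width_km=66.0):
        """Recharge time = coseismic strain drop / interseismic strain rate,
        with strain rate ~ slip rate / shear-zone width (width assumed)."""
        strain_rate_per_yr = (slip_rate_mm_yr / 1.0e6) / zone_width_km  # mm -> km
        return strain_drop / strain_rate_per_yr

    print(round(recharge_time_yr()))  # ~4,200 yr, within the 3,500-5,000 yr range
    ```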

  14. Sediment gravity flows triggered by remotely generated earthquake waves

    NASA Astrophysics Data System (ADS)

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan L.; Salmi, Marie S.

    2017-06-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011-2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  15. Sediment gravity flows triggered by remotely generated earthquake waves

    USGS Publications Warehouse

    Johnson, H. Paul; Gomberg, Joan S.; Hautala, Susan; Salmi, Marie

    2017-01-01

    Recent great earthquakes and tsunamis around the world have heightened awareness of the inevitability of similar events occurring within the Cascadia Subduction Zone of the Pacific Northwest. We analyzed seafloor temperature, pressure, and seismic signals, and video stills of sediment-enveloped instruments recorded during the 2011–2015 Cascadia Initiative experiment, and seafloor morphology. Our results led us to suggest that thick accretionary prism sediments amplified and extended seismic wave durations from the 11 April 2012 Mw8.6 Indian Ocean earthquake, located more than 13,500 km away. These waves triggered a sequence of small slope failures on the Cascadia margin that led to sediment gravity flows culminating in turbidity currents. Previous studies have related the triggering of sediment-laden gravity flows and turbidite deposition to local earthquakes, but this is the first study in which the originating seismic event is extremely distant (> 10,000 km). The possibility of remotely triggered slope failures that generate sediment-laden gravity flows should be considered in inferences of recurrence intervals of past great Cascadia earthquakes from turbidite sequences. Future similar studies may provide new understanding of submarine slope failures and turbidity currents and the hazards they pose to seafloor infrastructure and tsunami generation in regions both with and without local earthquakes.

  16. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    USGS Publications Warehouse

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey’s ShakeAlert EEW system.

  17. Limits on great earthquake size at subduction zones

    NASA Astrophysics Data System (ADS)

    McCaffrey, R.

    2012-12-01

    Subduction zones are where the world's greatest earthquakes occur, owing to the large fault area available to slip. Yet some subduction zones are thought to be immune from these massive events, with quake size limited by some physical process or property. Accordingly, the size of the 2011 Tohoku-oki Mw 9.0 earthquake caught some in the earthquake research community by surprise. Expectations of these massive quakes have in the past been driven by reliance on our short, incomplete history of earthquakes and causal relationships derived from it. The logic applied is that if a great earthquake has not happened in the past, so far as we know, one cannot happen in the future. Using the ~100-year global seismological record, in some cases extended with geologic observations, relationships between maximum earthquake size and other properties of subduction zones have been suggested, leading to the notion that some subduction zones, like the Japan Trench, would never produce a magnitude ~9 event. Empirical correlations of earthquake behavior with other subduction parameters can give false-positive results when the data are incomplete or incorrect, when sample sizes are small, and when numerous attributes are examined. Given the multi-century return times of the greatest earthquakes, our ignorance of those return times, and our relatively limited temporal observation span (in most places), I suggest that we cannot yet rule out great earthquakes at any subduction zone. Alternatively, using the length of a subduction zone that is available for slip as the predominant factor determining maximum earthquake size, we cannot rule out that any subduction zone a few hundred kilometers or more in length may be capable of producing a magnitude 9 or larger earthquake. Based on this method, the expected maximum size for the Japan Trench was 9.0 (McCaffrey, Geology, p. 263, 2008). The same approach indicates that a M > 9 off Java, with twice the population density of Honshu and much lower
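
    The length-based bound can be illustrated with a standard moment-scaling argument; this is a toy version under our own assumptions (rigidity, seismogenic width, and slip-to-length ratio), not McCaffrey's published formula:

    ```python
    import math

    MU_PA = 4.0e10   # crustal rigidity (assumed)
    ALPHA = 1.0e-5   # average slip / rupture length scaling (assumed)

    def mw_max_from_length(length_km, width_km=100.0):
        """Let the whole zone rupture with slip D = ALPHA * L, then convert
        seismic moment M0 = mu * L * W * D (N m) to moment magnitude."""
        L, W = length_km * 1e3, width_km * 1e3
        m0 = MU_PA * L * W * (ALPHA * L)
        return (2.0 / 3.0) * (math.log10(m0) - 9.1)

    print(round(mw_max_from_length(800.0), 1))  # ~8.9 for an ~800-km-long zone
    ```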

  18. An interdisciplinary approach for earthquake modelling and forecasting

    NASA Astrophysics Data System (ADS)

    Han, P.; Zhuang, J.; Hattori, K.; Ogata, Y.

    2016-12-01

    Earthquakes are among the most serious disasters and may cause heavy casualties and economic losses. In the past two decades especially, huge earthquakes have hit many countries, so effective earthquake forecasting (of time, location, and magnitude) has become extremely important and urgent. To date, various heuristically derived algorithms have been developed for forecasting earthquakes. Generally, they can be classified into two types: catalog-based approaches and non-catalog-based approaches. Thanks to the rapid development of statistical seismology over the past 30 years, we are now able to evaluate the performance of these earthquake forecast approaches quantitatively. Although a certain amount of precursory information is available in both earthquake catalogs and non-catalog observations, earthquake forecasting is still far from satisfactory. In most cases, precursory phenomena have been studied individually. An earthquake model that combines self-exciting and mutually exciting elements was developed by Ogata and Utsu from the Hawkes process. The core idea of this combined model is that the status of the process at present is controlled by the events themselves (self-exciting) and by all external factors (mutually exciting) in the past. In essence, the conditional intensity function is a time-varying Poisson rate λ(t) composed of the background rate, a self-exciting term (information from past seismic events), and an external excitation term (information from past non-seismic observations). This model shows a way to integrate catalog-based and non-catalog-based forecasts. Against this background, we are trying to develop a new earthquake forecast model that combines catalog-based and non-catalog-based approaches.
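
    A minimal sketch of the combined conditional intensity described above: a background rate, plus an Omori-type self-exciting sum over past earthquakes, plus an exponentially decaying external term over past non-seismic observations. The kernel shapes and all parameter values are our illustrative assumptions, not the authors' model:

    ```python
    import math

    def conditional_intensity(t, quake_times, obs_times, mu=0.05,
                              a=0.8, c=0.01, p=1.1, b=0.3, tau=5.0):
        """lambda(t) = background + self-excitation (Omori-type kernel over
        past earthquakes) + external excitation (exponential kernel over
        past non-seismic observations)."""
        self_term = sum(a / (t - ti + c) ** p for ti in quake_times if ti < t)
        ext_term = sum(b * math.exp(-(t - tj) / tau) for tj in obs_times if tj < t)
        return mu + self_term + ext_term

    print(conditional_intensity(10.0, quake_times=[2.0, 9.5], obs_times=[8.0]))
    ```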

  19. The enigmatic Bala earthquake of 1974

    NASA Astrophysics Data System (ADS)

    Musson, R. M. W.

    2006-10-01

    The earthquake that shook most of North Wales on the night of 23 January 1974 appears unremarkable from its entry in the UK earthquake catalogue. With a magnitude of 3.5 ML it represents the size of earthquake to be expected in the UK with a return period of about one year. However, the prominent atmospheric lights observed at the time of the shock led to speculation that an aircraft had crashed, and search-and-rescue teams were deployed. Since nothing was discovered, it was concluded that a meteorite was responsible; more imaginative members of the public decided (and still believe) that a UFO had crashed. In this paper the record of events is set out, and the nature of the earthquake is discussed with reference to its geological setting.

  20. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes bounded by latitudes 29° S and 35.5° S and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, the time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for the probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place in the largest bursts of activity, such as major earthquakes, as opposed to smaller activations, which contribute less significantly even though they occur more often. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.