Sample records for earthquake DBE parameters

  1. 49 CFR 26.55 - How is DBE participation counted toward goals?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... supplies and materials obtained by the DBE for the work of the contract, including supplies purchased or equipment leased by the DBE (except supplies and equipment the DBE subcontractor purchases or leases from... performance of the work, and other relevant factors. (2) A DBE does not perform a commercially useful function...

  2. 49 CFR 26.55 - How is DBE participation counted toward goals?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... by paragraph (a)(2) of this section) that is performed by the DBE's own forces. Include the cost of...-assisted contract, toward DBE goals, provided you determine the fee to be reasonable and not excessive as... work of the contract that the DBE performs with its own forces toward DBE goals. (c) Count expenditures...

  3. 49 CFR 26.55 - How is DBE participation counted toward goals?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... by paragraph (a)(2) of this section) that is performed by the DBE's own forces. Include the cost of...-assisted contract, toward DBE goals, provided you determine the fee to be reasonable and not excessive as... work of the contract that the DBE performs with its own forces toward DBE goals. (c) Count expenditures...

  4. 49 CFR 26.55 - How is DBE participation counted toward goals?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... by paragraph (a)(2) of this section) that is performed by the DBE's own forces. Include the cost of...-assisted contract, toward DBE goals, provided you determine the fee to be reasonable and not excessive as... work of the contract that the DBE performs with its own forces toward DBE goals. (c) Count expenditures...

  5. 49 CFR 26.55 - How is DBE participation counted toward goals?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... by paragraph (a)(2) of this section) that is performed by the DBE's own forces. Include the cost of...-assisted contract, toward DBE goals, provided you determine the fee to be reasonable and not excessive as... work of the contract that the DBE performs with its own forces toward DBE goals. (c) Count expenditures...

  6. Local tsunamis and earthquake source parameters

    USGS Publications Warehouse

    Geist, Eric L.; Dmowska, Renata; Saltzman, Barry

    1999-01-01

    This chapter establishes the relationship between earthquake source parameters and the generation, propagation, and run-up of local tsunamis. In general terms, displacement of the seafloor during the earthquake rupture is modeled using elastic dislocation theory, for which the displacement field depends on the slip distribution, fault geometry, and the elastic response and properties of the medium. Nonlinear long-wave theory governs the propagation and run-up of tsunamis. A parametric study is devised to examine the relative importance of individual earthquake source parameters on local tsunamis, because the physics that describes tsunamis from generation through run-up is complex. Analyses of the source parameters of various tsunamigenic earthquakes have indicated that the details of the earthquake source, namely the nonuniform distribution of slip along the fault plane, have a significant effect on local tsunami run-up. Numerical methods have been developed to address realistic bathymetric and shoreline conditions. The accuracy of determining the run-up on shore is directly dependent on the source parameters of the earthquake, which provide the initial conditions used for the hydrodynamic models.
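
    As a point of reference for the "nonlinear long-wave theory" mentioned above, a common 1-D, non-dispersive form of the shallow-water (long-wave) equations, without bottom friction, is shown below; this generic form is an assumption, and the chapter's exact formulation may include additional terms:

    \[
    \frac{\partial \eta}{\partial t} + \frac{\partial}{\partial x}\bigl[(h+\eta)\,u\bigr] = 0, \qquad
    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \eta}{\partial x} = 0,
    \]

    where η is the free-surface elevation, h the still-water depth, u the depth-averaged horizontal velocity, and g the gravitational acceleration; the seafloor displacement from the dislocation model supplies the initial condition on η.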

  7. Integrating data from biological experiments into metabolic networks with the DBE information system.

    PubMed

    Borisjuk, Ljudmilla; Hajirezaei, Mohammad-Reza; Klukas, Christian; Rolletschek, Hardy; Schreiber, Falk

    2005-01-01

    Modern 'omics'-technologies result in huge amounts of data about life processes. For analysis and data-mining purposes, these data have to be considered in the context of the underlying biological networks. This work presents an approach for integrating data from biological experiments into metabolic networks by mapping the data onto network elements and visualising the data-enriched networks automatically. This methodology is implemented in DBE, an information system that supports the analysis and visualisation of experimental data in the context of metabolic networks. It consists of five parts: (1) the DBE-Database for consistent data storage, (2) the Excel-Importer application for the data import, (3) the DBE-Website as the interface for the system, (4) the DBE-Pictures application for the up- and download of binary (e.g., image) files, and (5) DBE-Gravisto, a network analysis and graph visualisation system. The usability of this approach is demonstrated in two examples.

  8. Examining the disadvantaged business enterprise (DBE) program.

    DOT National Transportation Integrated Search

    2012-09-01

    U.S. Department of Transportation (US DOT) Disadvantaged Business Enterprise (DBE) program regulations require recipients of US DOT financial assistance, namely, state and local transportation agencies, to establish goals for the participation of...

  9. Earthquake damage orientation to infer seismic parameters in archaeological sites and historical earthquakes

    NASA Astrophysics Data System (ADS)

    Martín-González, Fidel

    2018-01-01

    Studies to provide information concerning seismic parameters and seismic sources of historical and archaeological seismic events are used to better evaluate the seismic hazard of a region. This is of special interest when no surface rupture is recorded or the seismogenic fault cannot be identified. The orientation pattern of the earthquake damage (ED) (e.g., fallen columns, dropped keystones) that affected architectonic elements of cities after earthquakes has traditionally been used in historical and archaeoseismological studies to infer seismic parameters. However, depending on the authors, the parameters that can be obtained from the literature are contradictory (the epicenter location, the orientation of the P-waves, the orientation of the compressional strain, and the fault kinematics have all been proposed), and some authors even question these relations with the earthquake damage. The earthquakes of Lorca in 2011, Christchurch in 2011 and Emilia Romagna in 2012 present an opportunity to measure systematically a large number and wide variety of earthquake damage in historical buildings (the same structures that are used in historical and archaeological studies). The damage pattern orientation has been compared with modern instrumental data, which is not possible in historical and archaeoseismological studies. From measurements and quantification of the orientation patterns in the studied earthquakes, it is observed that there is a systematic pattern of the earthquake damage orientation (EDO) in the proximity of the seismic source (fault trace) (<10 km). The EDO in these earthquakes is normal to the fault trend (±15°). This orientation can be generated by a pulse of motion that, in the near-fault region, has a distinguishable acceleration normal to the fault due to the polarization of the S-waves. Therefore, the earthquake damage orientation could be used to estimate the seismogenic fault trend in studies of historical earthquakes where no instrumental data are available.

  10. Relating stick-slip friction experiments to earthquake source parameters

    USGS Publications Warehouse

    McGarr, Arthur F.

    2012-01-01

    Analytical results for parameters, such as static stress drop, for stick-slip friction experiments, with arbitrary input parameters, can be determined by solving an energy-balance equation. These results can then be related to a given earthquake based on its seismic moment and the maximum slip within its rupture zone, assuming that the rupture process entails the same physics as stick-slip friction. This analysis yields overshoots and ratios of apparent stress to static stress drop of about 0.25. The inferred earthquake source parameters (static stress drop, apparent stress, slip rate, and radiated energy) are robust inasmuch as they are largely independent of the experimental parameters used in their estimation. Instead, these earthquake parameters depend on C, the ratio of maximum slip to the cube root of the seismic moment. C is controlled by the normal stress applied to the rupture plane and the difference between the static and dynamic coefficients of friction. Estimating yield stress and seismic efficiency using the same procedure is only possible when the actual static and dynamic coefficients of friction are known within the earthquake rupture zone.
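
    For reference, the parameter C described above can be written as (notation assumed; the paper's exact symbols may differ):

    \[
    C = \frac{D_{\max}}{M_0^{1/3}},
    \]

    where D_max is the maximum slip within the rupture zone and M_0 is the seismic moment; per the abstract, C is set by the normal stress on the rupture plane and the difference between the static and dynamic coefficients of friction.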

  11. Associating an ionospheric parameter with major earthquake occurrence throughout the world

    NASA Astrophysics Data System (ADS)

    Ghosh, D.; Midya, S. K.

    2014-02-01

    With time, ionospheric variation analysis is gaining ground over lithospheric monitoring in providing precursors for earthquake forecasting. The current paper highlights the association of major (Ms ≥ 6.0) and medium (4.0 ≤ Ms < 6.0) earthquake occurrences throughout the world with different ranges of the Ionospheric Earthquake Parameter (IEP), where Ms is the earthquake magnitude on the Richter scale. From statistical and graphical analyses, it is concluded that the probability of earthquake occurrence is maximum when the defined parameter lies within the range of 0-75 (lower range). In the higher ranges, earthquake occurrence probability gradually decreases. A probable explanation is also suggested.

  12. 49 CFR 26.35 - What role do business development and mentor-protégé programs have in the DBE program?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false What role do business development and mentor-protégé... What role do business development and mentor-protégé programs have in the DBE program? (a) You may or... BDP or separately, you may establish a “mentor-protégé” program, in which another DBE or non-DBE firm...

  13. Testing new methodologies for short -term earthquake forecasting: Multi-parameters precursors

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Tramutoli, Valerio; Lee, Lou; Liu, Tiger; Hattori, Katsumi; Kafatos, Menas

    2014-05-01

    We are conducting real-time tests involving multi-parameter observations over different seismo-tectonic regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters, namely: gas discharge; thermal infrared radiation; ionospheric electron density; and atmospheric temperature and humidity, which we believe are all associated with the earthquake preparation phase. We are testing a methodology capable of producing alerts in advance of major earthquakes (M > 5.5) in different regions of active seismicity and volcanism. During 2012-2013 we established a collaborative framework with the PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) projects for coordinated measurements and prospective validation over seven testing regions: Southern California (USA), Eastern Honshu (Japan), Italy, Greece, Turkey, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a "stress test" opportunity to validate the physically based earthquake precursor approach over regions of high seismicity. Our initial results are: (1) Real-time tests have shown the presence of anomalies in the atmosphere and ionosphere before most of the significant (M>5.5) earthquakes; (2) False positives exist and their ratios differ for each region, varying from 50% (Southern Italy) and 35% (California) down to 25% (Taiwan, Kamchatka and Japan), with a significant reduction of false positives when at least two geophysical parameters are used concurrently; (3) The main problems remain related to the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that real-time testing of physically based pre-earthquake signals provides a short-term predictive power (in all three important parameters, namely location, time and magnitude) for the occurrence of major earthquakes in the tested regions and this result encourages testing to continue with a more detailed analysis of

  14. 49 CFR Appendix B to Part 26 - Uniform Report of DBE Awards or Commitments and Payments Form

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false Uniform Report of DBE Awards or Commitments and Payments Form B Appendix B to Part 26 Transportation Office of the Secretary of Transportation... PROGRAMS Pt. 26, App. B Appendix B to Part 26—Uniform Report of DBE Awards or Commitments and Payments Form...

  15. Methodology to determine the parameters of historical earthquakes in China

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Lin, Guoliang; Zhang, Zhe

    2017-12-01

    China is one of the countries with the longest cultural traditions. It has also suffered very heavy earthquake disasters, so there are abundant earthquake recordings. In this paper, we try to sketch out historical earthquake sources and research achievements in China. We introduce some basic information about the collections of historical earthquake sources, the establishment of an intensity scale, and the editions of historical earthquake catalogues. Spatial-temporal and magnitude distributions of historical earthquakes are analyzed briefly. Besides traditional methods, we also illustrate a new approach to amend the parameters of historical earthquakes or even identify candidate zones for large historical or palaeo-earthquakes. In the new method, a relationship between instrumentally recorded small earthquakes and strong historical earthquakes is built up. Abundant historical earthquake sources and the achievements of historical earthquake research in China are a valuable cultural heritage for the world.

  16. Effective utilization of disadvantaged business enterprises (DBE) in alternative delivery projects: strategies and resources to support the achievement of DBE goals : Georgia DOT research project 14-42 : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    This synthesis is a comprehensive review of the best knowledge and practices for stimulating effective utilization of Disadvantaged Business Enterprises (DBE) in the procurement and execution of transportation projects, using design-build and oth...

  17. Data mining of atmospheric parameters associated with coastal earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, Guido

    Earthquakes are natural hazards that pose a serious threat to society and the environment. A single earthquake can claim thousands of lives, cause damages for billions of dollars, destroy natural landmarks and render large territories uninhabitable. Studying earthquakes and the processes that govern their occurrence is of fundamental importance for protecting lives, property and the environment. Recent studies have shown that anomalous changes in land, ocean and atmospheric parameters occur prior to earthquakes. The present dissertation introduces an innovative methodology and its implementation to identify anomalous changes in atmospheric parameters associated with large coastal earthquakes. Possible geophysical mechanisms are discussed in view of the close interaction between the lithosphere, the hydrosphere and the atmosphere. The proposed methodology is a multi-strategy data-mining approach which combines wavelet transformations, evolutionary algorithms, and statistical analysis of atmospheric data to analyze possible precursory signals. One-dimensional wavelet transformations and statistical tests are employed to identify significant singularities in the data, which may correspond to anomalous peaks due to the earthquake preparatory processes. Evolutionary algorithms and other localized search strategies are used to analyze the spatial and temporal continuity of the anomalies detected over a large area (about 2000 km²), to discriminate signals that are most likely associated with earthquakes from those due to other, mostly atmospheric, phenomena. Only statistically significant singularities occurring within a very short time of each other, and which track a rigorous geometrical path related to the geological properties of the epicentral area, are considered to be associated with a seismic event. A program called CQuake was developed to implement and validate the proposed methodology. CQuake is a fully automated, real-time, semi-operational system, developed to

  18. Earthquake Source Parameters Inferred from T-Wave Observations

    NASA Astrophysics Data System (ADS)

    Perrot, J.; Dziak, R.; Lau, T. A.; Matsumoto, H.; Goslin, J.

    2004-12-01

    The seismicity of the North Atlantic Ocean has been recorded by two networks of autonomous hydrophones moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). In February 1999, a consortium of U.S. investigators (NSF and NOAA) deployed a 6-element hydrophone array for long-term monitoring of MAR seismicity between 15°-35°N south of the Azores. In May 2002, an international collaboration of French, Portuguese, and U.S. researchers deployed a 6-element hydrophone array north of the Azores Plateau from 40°-50°N. The northern network (referred to as SIRENA) was recovered in September 2003. The low-attenuation properties of the SOFAR channel for earthquake T-wave propagation result in a detection threshold reduction from a magnitude completeness level (Mc) of ~4.7 for MAR events recorded by the land-based seismic networks to Mc = 3.0 using hydrophone arrays. Detailed focal depth and mechanism information, however, remains elusive due to the complexities of seismo-acoustic propagation paths. Nonetheless, recent analyses (Dziak, 2001; Park and Odom, 2001) indicate fault parameter information is contained within the T-wave signal packet. We investigate this relationship further by comparing an earthquake's T-wave duration and acoustic energy to seismic magnitude (NEIC) and radiation pattern (for events M>5) from the Harvard moment-tensor catalog. First results show that earthquake energy is well represented by the acoustic energy of the T-waves; however, T-wave codas are significantly influenced by acoustic propagation effects and do not allow a direct determination of the seismic magnitude of the earthquakes. Second, there appears to be a correlation between T-wave acoustic energy, azimuth from earthquake source to the hydrophone, and the radiation pattern of the earthquake's SH waves. These preliminary results indicate there is a relationship between the T-wave observations and earthquake source parameters, allowing for additional insights into T

  19. The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.

    2011-12-01

    Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~80 mm/yr in a ~N118E direction. The plate motion is so active that earthquakes are very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, understanding the activity and hazard of active faults is an important subject. The active faults in Taiwan are mainly located in the Western Foothills and the eastern Longitudinal Valley. The active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults on the island of Taiwan, some of which are related to earthquakes. Many researchers have investigated these active faults and continuously update new data and results, but few have integrated them for time-dependent earthquake hazard assessment. In this study, we gather previous research and field-work results and integrate these data into an active fault parameter table for time-dependent earthquake hazard assessment. We gather the seismic profiles or earthquake relocations of a fault, combine them with the fault trace on land, and establish a 3D fault geometry model in a GIS system. We collect the research on fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. For the other parameters, we collect previous studies or historical references and complete our parameter table of active faults in Taiwan. WG08 carried out the time-dependent earthquake hazard assessment of active faults in California: they established fault models, deformation models, earthquake rate models, and probability models, and then computed the probabilities for faults in California. Following these steps, we have the preliminary evaluated probability of earthquake-related hazards in certain
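
    A sketch of the time-dependent probability calculation implied above: for a renewal model with recurrence-time cumulative distribution F(t) (e.g., Brownian passage time or lognormal), the conditional probability of an earthquake in the next Δt years, given that t years have elapsed since the last event, is commonly written as (a standard form assumed here; the abstract does not state the exact model used):

    \[
    P(t < T \le t + \Delta t \mid T > t) = \frac{F(t + \Delta t) - F(t)}{1 - F(t)}.
    \]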

  20. 49 CFR 26.27 - What efforts must recipients make concerning DBE financial institutions?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... financial institutions? 26.27 Section 26.27 Transportation Office of the Secretary of Transportation... efforts must recipients make concerning DBE financial institutions? You must thoroughly investigate the full extent of services offered by financial institutions owned and controlled by socially and...

  1. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Source parameter determination is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, strike, dip and rake of the fault planes) is significant for both rupture dynamics and ground motion prediction or simulation. The rupture process study, especially for moderate and large earthquakes, is essential, as detailed kinematic study has become routine work for seismologists. However, among these events, some behave very specially and intrigue seismologists. These earthquakes usually consist of two similar-size sub-events that occurred with a very small time interval, such as the mb 4.5 event of Dec. 9, 2003 in Virginia. Studying these special events, including the source parameter determination of each sub-event, will be helpful to the understanding of earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which complicates the inversion. For common events, the Cut and Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights. CAP resolves fault orientation and focal depth using a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct events. However, the simultaneous inversion of both sub-events makes the computation very time consuming, so we developed a hybrid GPU and CPU version of CAP (HYBRID_CAP) to improve the computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the GPU-CPU combined architecture, and the speedup factors can be as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events, and the inversion results are very consistent with the

  2. Multi-parameter Observations and Validation of Pre-earthquake Atmospheric Signals

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Mogi, T.; Kafatos, M.

    2014-12-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. We are exploring the potential of pre-seismic atmospheric and ionospheric signals to alert for large earthquakes. To achieve this, we validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (OLR), electron concentration in the ionosphere (GPS/TEC), VHF-band radio waves, radon/ion activities, air temperature, and seismicity patterns) that have been found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) A continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous OLR behavior before all of these events with no false negatives; the false alarm ratio for false positives is less than 25%. (2) Prospective testing using multiple parameters with potential for M5.5+ events. The initial testing shows a systematic appearance of atmospheric anomalies in advance (days) of the M5.5+ events for Taiwan and Japan (Honshu and Hokkaido areas). Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies one to several days prior to the largest earthquakes. That feature could be further studied and tested for advancing the multi-sensor detection of pre-earthquake atmospheric signals.

  3. Assessment of precast beam-column using capacity demand response spectrum subject to design basis earthquake and maximum considered earthquake

    NASA Astrophysics Data System (ADS)

    Ghani, Kay Dora Abd.; Tukiar, Mohd Azuan; Hamid, Nor Hayati Abdul

    2017-08-01

    Malaysia is surrounded by the tectonic features of the Sumatera area, which consists of two seismically active inter-plate boundaries, namely the Indo-Australian and Eurasian Plates on the west and the Philippine Plate on the east. Hence, Malaysia experiences tremors from distant earthquakes occurring in Banda Aceh, Nias Island, Padang and other parts of Sumatera, Indonesia. In order to predict the safety of precast buildings in Malaysia under near-field ground motion, response spectrum analysis can be used to deal with future earthquakes whose specific nature is unknown. This paper aims to develop capacity-demand response spectra subject to the Design Basis Earthquake (DBE) and Maximum Considered Earthquake (MCE) in order to assess the performance of precast beam-column joints. From the capacity-demand response spectrum analysis, it can be concluded that the precast beam-column joints would not survive when subjected to earthquake excitation with a surface-wave magnitude, Mw, of more than 5.5 on the Richter scale (Type 1 spectra). This means that the beam-column joint, which was designed using the current code of practice (BS8110), would be severely damaged when subjected to high earthquake excitation. The capacity-demand response spectrum analysis also shows that the precast beam-column joints in the prototype studied would be severely damaged when subjected to the Maximum Considered Earthquake (MCE) with PGA = 0.22g and a surface-wave magnitude of more than 5.5 on the Richter scale, or Type 1 spectra.

  4. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of #3 microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to the displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) and Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
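
    For context, a common parameterization of the omega-square spectral model with whole-path attenuation, of the kind fitted here, is (an assumed generic form; the study's exact parameterization may differ):

    \[
    \Omega(f) = \frac{\Omega_0\, e^{-\pi f t / Q}}{1 + (f/f_c)^2},
    \]

    where Ω0 is the low-frequency spectral level, f_c the corner frequency, and t the travel time. Source radius and stress drop then typically follow from the Brune (1970) relations r = 2.34β/(2π f_c) and Δσ = 7M_0/(16r³); these standard relations are stated here as background, not quoted from the paper.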

  5. Simulating and analyzing engineering parameters of Kyushu Earthquake, Japan, 1997, by empirical Green function method

    NASA Astrophysics Data System (ADS)

    Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei

    2017-03-01

    Earthquake engineering parameters are very important in the engineering field, especially in anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA 6.5) occurred in Kyushu, Japan, in 1997. Horizontal ground motion is separated into fault-parallel and fault-normal components in order to assess the characteristics of these two direction components. The broadband frequency range of the ground motion simulation is from 0.1 to 20 Hz. By comparing observed parameters with synthetic parameters, we analyzed the distribution characteristics of the earthquake engineering parameters. From the comparison, the simulated waveform has high similarity with the observed waveform. We found the following. (1) Near-field PGA attenuates rapidly in all directions, with strip-like radiation patterns in the fault-parallel component, while the radiation pattern of the fault-normal component is circular; PGV shows good similarity between observed and synthetic records, but has different distribution characteristics in the different components. (2) Rupture direction and terrain have a large influence on the 90% significant duration. (3) Arias Intensity attenuates with increasing epicentral distance; observed values have high similarity with synthetic values. (4) The predominant period is very different in parts of Kyushu in the fault-normal component; it is affected greatly by site conditions. (5) Most parameters have good reference values where the hypocentral distance is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45, which means a good result according to Olsen's classification criterion. Not all parameters fit well. Given these synthetic ground motion parameters, seismic hazard analysis can be performed and earthquake disaster analysis can be conducted in future urban planning.

  6. Anomalous Variation in GPS TEC, Land and Ocean Parameters Prior to 3 Earthquakes

    NASA Astrophysics Data System (ADS)

    Yadav, Kunvar; Karia, Sheetal P.; Pathak, Kamlesh N.

    2016-02-01

    The present study reports the analysis of GPS TEC prior to 3 earthquakes (M > 6.0). The earthquakes are: (1) Loyalty Island (22°36'S, 170°54'E) on 19 January 2009 (M = 6.6), (2) Samoa Island (15°29'S, 172°5'W) on 30 August 2009 (M = 6.6), and (3) Tohoku (38°19'N, 142°22'E) on 11 March 2011 (M = 9.0). In an effort to search for a precursory signature, we analysed the land and ocean parameters prior to the earthquakes, namely SLHF (land) and SST (ocean). The GPS TEC data indicate anomalous behaviour from 1 to 13 days prior to the earthquakes. The main purpose of this study was to explore and demonstrate the possibility of any changes in TEC, SST, and SLHF before, during and after the earthquakes, which occurred near or beneath an ocean. This study may lead to a better understanding of the response of land, ocean, and ionospheric parameters prior to seismic activities.

  7. Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Chen, W.; Ni, S.; Wang, Z.

    2011-12-01

    In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithms have proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on relatively dense seismic networks. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded by only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and explore the efficiency with bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike can be much improved.

  8. Fault parameters and macroseismic observations of the May 10, 1997 Ardekul-Ghaen earthquake

    NASA Astrophysics Data System (ADS)

    Amini, H.; Zare, M.; Ansari, A.

    2018-01-01

    The Ardekul (Zirkuh) earthquake (May 10, 1997) is the largest recent earthquake that occurred in the Ardekul-Ghaen region of eastern Iran. The greatest destruction was concentrated around Ardekul, Haji-Abad, Esfargh, Pishbar, Bashiran, Abiz-Qadim, and Fakhr-Abad (completely destroyed). The total surface fault rupture was about 125 km, with the longest uninterrupted segment in the south of the region. The maximum horizontal and vertical displacements, about 210 and 70 cm respectively, were reported in Korizan and Bohn-Abad; other building damage and environmental effects were also reported for this earthquake. In this study, an intensity value of XI on the European Macroseismic Scale (EMS) and the Environmental Seismic Intensity (ESI) scale was selected for this earthquake according to the maximum effects on the macroseismic data points it affected. Then, from the macroseismic data points of this earthquake and the Boxer code, macroseismic parameters including magnitude, location, source dimensions, and orientation were estimated at 7.3, 33.52° N-59.99° E, 75 km long and 21 km wide, and 152°, respectively. As the estimated macroseismic parameters are consistent with the instrumental ones (the Global Centroid Moment Tensor (GCMT) location and magnitude are 33.58° N-60.02° E and 7.2, respectively), this method and dataset are suggested not only for other instrumental earthquakes, but also for historical events.

  9. Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.

    PubMed

    Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya

    2013-08-20

    It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted for the normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprising the number of events that would occur in a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-year period in the Japanese area. Among the minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima, namely precursory and nonprecursory, to large earthquakes.
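
    For reference, in the natural time literature the quantities referred to above are usually defined as follows (standard definitions assumed here; see Varotsos et al.): for N events, the k-th event is assigned natural time χ_k = k/N and weight p_k = E_k / Σ_n E_n, where E_k is its radiated energy, and the order parameter of seismicity is the weighted variance

    \[
    \kappa_1 = \sum_{k=1}^{N} p_k \chi_k^2 - \left(\sum_{k=1}^{N} p_k \chi_k\right)^{2}.
    \]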

  10. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  11. Updated determination of stress parameters for nine well-recorded earthquakes in eastern North America

    USGS Publications Warehouse

    Boore, David M.

    2012-01-01

    Stress parameters (Δσ) are determined for nine relatively well-recorded earthquakes in eastern North America for ten attenuation models. This is an update of a previous study by Boore et al. (2010). New to this paper are observations from the 2010 Val des Bois earthquake, additional observations for the 1988 Saguenay and 2005 Riviere du Loup earthquakes, and consideration of six attenuation models in addition to the four used in the previous study. As in that study, it is clear that Δσ depends strongly on the rate of geometrical spreading (as well as other model parameters). The observations necessary to determine conclusively which attenuation model best fits the data are still lacking. At this time, a simple 1/R model seems to give as good an overall fit to the data as more complex models.

  12. Source parameters and tectonic interpretation of recent earthquakes (1995 1997) in the Pannonian basin

    NASA Astrophysics Data System (ADS)

    Badawy, Ahmed; Horváth, Frank; Tóth, László

    2001-01-01

    From January 1995 to December 1997, about 74 earthquakes were located in the Pannonian basin and digitally recorded by a recently established network of seismological stations in Hungary. On reviewing the notable events, about 12 earthquakes were reported as felt, with maximum intensity varying between 4 and 6 MSK. The dynamic source parameters of these earthquakes have been derived from P-wave displacement spectra. The displacement source spectra obtained are characterised by relatively small values of corner frequency (f0), ranging between 2.5 and 10 Hz. The seismic moments range from 1.48×10²⁰ to 1.3×10²³ dyne cm, stress drops from 0.25 to 76.75 bar, fault lengths from 0.42 to 1.7 km and relative displacements from 0.05 to 15.35 cm. The estimated source parameters are in good agreement with the scaling law for small earthquakes. The small values of stress drop in the studied earthquakes can be attributed to the low strength of crustal materials in the Pannonian basin. However, the values of stress drop are not different for earthquakes with thrust or normal-faulting focal mechanism solutions. It can be speculated that an increase of the seismic activity in the Pannonian basin can be predicted in the long run because extensional development has ceased and structural inversion is in progress. Seismic hazard assessment is a delicate job due to the inadequate knowledge of the seismo-active faults, particularly in the interior part of the Pannonian basin.

  13. Extension of the energy-to-moment parameter Θ to intermediate and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Saloor, Nooshin; Okal, Emile A.

    2018-01-01

    We extend to intermediate and deep earthquakes the slowness parameter Θ originally introduced by Newman and Okal (1998). Because of the increasing time lag with depth between the phases P, pP and sP, and of variations in the anelastic attenuation parameter t*, we define four depth bins featuring slightly different algorithms for the computation of Θ. We apply this methodology to a global dataset of 598 intermediate and deep earthquakes with moments greater than 10²⁵ dyn·cm. We find a slight increase with depth in the average values of Θ (from -4.81 between 80 and 135 km to -4.48 between 450 and 700 km), which, however, all have intersecting one-σ bands. With widths ranging from 0.26 to 0.31 logarithmic units, these are narrower than their counterpart for a reference dataset of 146 shallow earthquakes (σ = 0.55). Similarly, we find no correlation between values of Θ and focal geometry. These results point to stress conditions within the seismogenic zones inside the Wadati-Benioff slabs that are more homogeneous than those prevailing at the shallow contacts between tectonic plates.
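
    For reference, the slowness parameter extended here is defined by Newman and Okal (1998) as

    \[
    \Theta = \log_{10}\!\left(\frac{E^{E}}{M_0}\right),
    \]

    where E^E is the estimated radiated energy and M_0 the seismic moment, so the average values quoted above (about -4.8 to -4.5) describe the radiated energy per unit moment in each depth bin.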

  14. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then we explore the joint a posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to
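
    The workflow described above can be illustrated with a minimal sketch (not the authors' code): fit a generalized Brune-type spectrum, with the three source parameters named in the abstract, to a displacement amplitude spectrum using basin-hopping. The frequency band, Q, travel time and the synthetic "observed" spectrum below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    freq = np.logspace(-1, 1.5, 200)      # 0.1-30 Hz band (assumed)
    Q, travel_time = 300.0, 10.0          # assumed attenuation factor and travel time (s)

    def model(params):
        """Generalized Brune-type displacement spectrum with path attenuation."""
        log_omega0, fc, gamma = params    # low-freq level (log10), corner freq, HF decay
        source = 10.0**log_omega0 / (1.0 + (freq / fc)**gamma)
        return source * np.exp(-np.pi * freq * travel_time / Q)

    # synthetic "observed" spectrum, used only for this demonstration
    rng = np.random.default_rng(0)
    observed = model([2.0, 1.5, 2.0]) * rng.lognormal(0.0, 0.1, freq.size)

    def misfit(params):
        """L2-norm misfit in log amplitude."""
        return np.sum((np.log(observed) - np.log(model(params)))**2)

    # global exploration: random jumps combined with deterministic local minimization
    minimizer_kwargs = dict(method="L-BFGS-B",
                            bounds=[(-2.0, 6.0), (0.05, 20.0), (1.0, 4.0)])
    result = basinhopping(misfit, x0=[1.0, 1.0, 2.0],
                          minimizer_kwargs=minimizer_kwargs, niter=50, seed=0)
    print("best-fit (log10 Omega0, fc [Hz], gamma):", result.x)
    ```

    In the study itself, the misfit around this minimum is further mapped onto a joint a posteriori probability density to obtain variances and the parameter correlation matrix; the sketch stops at the point estimate.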

  15. Earthquake Parameters Inferred from the Hoping River Pseudotachylyte, Taiwan

    NASA Astrophysics Data System (ADS)

    Korren, C.; Ferre, E. C.; Yeh, E. C.; Chou, Y. M.

    2014-12-01

    Taiwan, one of the most seismically active areas in the world, repeatedly experiences violent earthquakes, such as the 1999 Mw 7.6 Chi-Chi earthquake, in highly populated areas. The main island of Taiwan lies in the convergent tectonic region between the Eurasian Plate and the Philippine Sea Plate. Fault pseudotachylytes form by frictional melting along the fault plane during large seismic slip events and therefore constitute earthquake fossils. The width of a pseudotachylyte generation vein is a crude proxy for earthquake magnitude. The attitude of oblique injection veins primarily reflects slip kinematics. Additional constraints on the seismic slip direction and slip sense can be obtained (1) from the principal axes of the magnetic fabric of generation veins and (2) from 3D tomographic analysis of vein geometry. A new pseudotachylyte locality discovered along the Hoping River offers an unparalleled opportunity to learn more about the Plio-Pleistocene paleoseismology and seismic kinematics of northeastern Taiwan. Field work measuring the orientations and relations of structural features yields a complex geometry of generation and injection veins. Pseudotachylytes were sampled for tomographic, magnetic fabric and scanning electron microscope analyses. An oriented block of pseudotachylyte was sliced and then stitched into a 3-D tomographic model using the Image-J image-stack plug-in. Tomographic analysis shows that the pseudotachylyte veins originate from a single slip event at the sample scale. Average vein thickness ranges from 1 mm, proximal to areas with abundant injection veins, to 2 mm. The displacement calculated using Sibson's (1975) method, in which displacement equals the square of the vein thickness multiplied by 436, ranges from 4.36 cm to 17.44 cm. The pseudotachylyte displacements typify earthquakes of less than magnitude 5. However, this crude estimate of displacement requires further discussion. Comparison of the calculated displacements by different methodology may further
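
    The displacement range quoted above follows directly from the stated relation if the vein thickness a is expressed in centimetres (a units assumption consistent with the numbers in the abstract):

    \[
    d = 436\,a^{2}, \qquad d(0.1\ \mathrm{cm}) = 4.36\ \mathrm{cm}, \qquad d(0.2\ \mathrm{cm}) = 17.44\ \mathrm{cm}.
    \]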

  16. 49 CFR Appendix B to Part 26 - Uniform Report of DBE Awards or Commitments and Payments Form

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false Uniform Report of DBE Awards or Commitments and Payments Form B Appendix B to Part 26 Transportation Office of the Secretary of Transportation PARTICIPATION BY DISADVANTAGED BUSINESS ENTERPRISES IN DEPARTMENT OF TRANSPORTATION FINANCIAL ASSISTANCE PROGRAMS Pt. 26, App. B Appendix B to Part 26...

  17. 49 CFR Appendix B to Part 26 - Uniform Report of DBE Awards or Commitments and Payments Form

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Uniform Report of DBE Awards or Commitments and Payments Form B Appendix B to Part 26 Transportation Office of the Secretary of Transportation PARTICIPATION BY DISADVANTAGED BUSINESS ENTERPRISES IN DEPARTMENT OF TRANSPORTATION FINANCIAL ASSISTANCE PROGRAMS Pt. 26, App. B Appendix B to Part 26...

  18. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.

  19. Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations

    NASA Astrophysics Data System (ADS)

    Weng, H.; Yang, H.

    2017-12-01

    Dynamic rupture models can provide detailed insights into rupture physics that are useful for assessing future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-offs between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw 7.8 Nepal earthquake by combining multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust solution for the average slip-weakening distance, 0.6 m, without a substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first time the slip-weakening distance on a seismogenic fault has been robustly determined from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimates of the dynamic parameters in a global perspective that can better reveal the intrinsic physics of earthquakes.
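
    For context, the slip-weakening distance D_c discussed above usually enters dynamic rupture models through a linear slip-weakening friction law of the form below (a widely used law, e.g. Ida/Andrews; the abstract does not state which friction law the authors adopted):

    \[
    \tau(\delta) =
    \begin{cases}
    \tau_s - (\tau_s - \tau_d)\,\dfrac{\delta}{D_c}, & \delta \le D_c,\\[4pt]
    \tau_d, & \delta > D_c,
    \end{cases}
    \]

    where δ is slip, τ_s the static (yield) strength and τ_d the dynamic strength; the trade-off mentioned in the abstract is between D_c and the strength drop τ_s − τ_d.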

  20. Source Parameters and Rupture Directivities of Earthquakes Within the Mendocino Triple Junction

    NASA Astrophysics Data System (ADS)

    Allen, A. A.; Chen, X.

    2017-12-01

    The Mendocino Triple Junction (MTJ), a region in the Cascadia subduction zone, produces a sizable number of earthquakes each year. Direct observations of the rupture properties are difficult to achieve due to the small magnitudes of most of these earthquakes and the lack of offshore observations. The Cascadia Initiative (CI) project provides opportunities to look at these earthquakes in detail. Here we look at the transform plate boundary fault located in the MTJ, and measure source parameters of Mw≥4 earthquakes from both time-domain deconvolution and spectral analysis using the empirical Green's function (EGF) method. The second-moment method is used to infer rupture length, width, and rupture velocity from apparent source durations measured at different stations. Brune's source model is used to infer corner frequency and spectral complexity for the stacked spectral ratio. EGFs are selected based on their location relative to the mainshock, as well as their magnitude difference from the mainshock. For the transform fault, we first look at the largest earthquake recorded during the Year 4 CI array, a Mw 5.72 event that occurred in January of 2015, and select two EGFs, a Mw 1.75 and a Mw 1.73 event located within 5 km of the mainshock. This earthquake is characterized by at least two sub-events, with a total duration of about 0.3 s and a rupture length of about 2.78 km. The earthquake ruptured towards the west along the transform fault, and both source durations and corner frequencies show strong azimuthal variations, with anti-correlation between duration and corner frequency. The stacked spectral ratio from multiple stations with the Mw 1.73 EGF event shows deviation from a pure Brune source model following the definition of Uchide and Imanishi [2016], likely due to near-field recordings with rupture complexity. We will further analyze this earthquake using more EGF events to test the reliability and stability of the results, and further analyze three other Mw≥4 earthquakes

  1. Constraints on recent earthquake source parameters, fault geometry and aftershock characteristics in Oklahoma

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Benz, H.; Herrmann, R. B.; Bergman, E. A.; McMahon, N. D.; Aster, R. C.

    2014-12-01

    In late 2009, the seismicity of Oklahoma increased dramatically. The largest of these earthquakes was a series of three damaging events (Mw 4.8, 5.6, 4.8) that occurred over a span of four days in November 2011 near the town of Prague in central Oklahoma. Studies suggest that these earthquakes were induced by reactivation of the Wilzetta fault due to the disposal of waste water from hydraulic fracturing ("fracking") and other oil and gas activities. The Wilzetta fault is a northeast-trending vertical strike-slip fault that is a well-known structural trap for oil and gas. Since the November 2011 Prague sequence, thousands of small to moderate (M2-M4) earthquakes have occurred throughout central Oklahoma. The most active regions are located near the towns of Stillwater and Medford in north-central Oklahoma, and Guthrie, Langston and Jones near Oklahoma City. The USGS, in collaboration with the Oklahoma Geological Survey and the University of Oklahoma, has responded by deploying numerous temporary seismic stations in the region in order to record the vigorous aftershock sequences. In this study we use data from the temporary seismic stations to re-locate all Oklahoma earthquakes in the USGS National Earthquake Information Center catalog using a multiple-event approach known as hypocentroidal decomposition, which locates earthquakes with decreased uncertainty relative to one another. Modeling from this study allows us to constrain the detailed geometry of the reactivated faults, as well as source parameters (focal mechanisms, stress drop, rupture length) for the larger earthquakes. Preliminary results from the November 2011 Prague sequence suggest that the subsurface rupture lengths of the largest earthquakes are anomalously long, with very low stress drop. We also observe very high Q (~1000 at 1 Hz), which explains the large felt areas, and we find a relatively low b-value and a rapid decay of aftershocks.

  2. Predicting the spatial extent of liquefaction from geospatial and earthquake specific parameters

    USGS Publications Warehouse

    Zhu, Jing; Baise, Laurie G.; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Deodatis, George; Ellingwood, Bruce R.; Frangopol, Dan M.

    2014-01-01

    The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquake events is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high-resolution, remotely sensed, and spatially continuous data as proxies for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to the spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
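
    A minimal sketch of the kind of logistic model described above, with hypothetical coefficients and an assumed feature set (PGA, a DEM-derived compound topographic index, and Vs30); the published model's variables and coefficients are not reproduced here.

    ```python
    import numpy as np

    def liquefaction_probability(pga_g, cti, vs30_mps, coef=(4.0, 2.0, 0.3, -1.0)):
        """Logistic regression sketch: P = 1 / (1 + exp(-z)),
        with z = b0 + b1*ln(PGA) + b2*CTI + b3*ln(Vs30).
        Coefficients are hypothetical placeholders, not the published values."""
        b0, b1, b2, b3 = coef
        z = b0 + b1 * np.log(pga_g) + b2 * cti + b3 * np.log(vs30_mps)
        return 1.0 / (1.0 + np.exp(-z))

    # Example: PGA = 0.3 g, compound topographic index = 8, Vs30 = 250 m/s
    print(liquefaction_probability(0.3, 8.0, 250.0))
    ```

    Mapping such probabilities over a grid of geospatial cells for a given shaking field is what yields the spatial-extent estimates compared against observed liquefaction in the study.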

  3. Seismic swarm associated with the 2008 eruption of Kasatochi Volcano, Alaska: earthquake locations and source parameters

    USGS Publications Warehouse

    Ruppert, Natalia G.; Prejean, Stephanie G.; Hansen, Roger A.

    2011-01-01

    An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August of 2008. In retrospect, the first earthquakes in the swarm were detected about 1 month prior to the eruption onset. Activity in the swarm quickly intensified less than 48 h prior to the first large explosion and subsequently subsided with the decline of eruptive activity. The largest earthquake had a moment magnitude of 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear-failure earthquake features were a b value of 0.9, impulsive P and S arrivals and higher-frequency content for most earthquakes, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and increasing magnitudes with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the NS direction either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust. Although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field.

  4. Seismic swarm associated with the 2008 eruption of Kasatochi Volcano, Alaska: Earthquake locations and source parameters

    USGS Publications Warehouse

    Ruppert, N.A.; Prejean, S.; Hansen, R.A.

    2011-01-01

    An energetic seismic swarm accompanied an eruption of Kasatochi Volcano in the central Aleutian volcanic arc in August of 2008. In retrospect, the first earthquakes in the swarm were detected about 1 month prior to the eruption onset. Activity in the swarm quickly intensified less than 48 h prior to the first large explosion and subsequently subsided with the decline of eruptive activity. The largest earthquake had a moment magnitude of 5.8, and a dozen additional earthquakes were larger than magnitude 4. The swarm exhibited both tectonic and volcanic characteristics. Its shear-failure earthquake features were a b value of 0.9, impulsive P and S arrivals and higher-frequency content for most earthquakes, and earthquake faulting parameters consistent with regional tectonic stresses. Its volcanic or fluid-influenced seismicity features were volcanic tremor, large CLVD components in moment tensor solutions, and increasing magnitudes with time. Earthquake location tests suggest that the earthquakes occurred in a distributed volume elongated in the NS direction either directly under the volcano or within 5-10 km south of it. Following the MW 5.8 event, earthquakes occurred in a new crustal volume slightly east and north of the previous earthquakes. The central Aleutian Arc is a tectonically active region with seismicity occurring in the crusts of the Pacific and North American plates in addition to interplate events. We postulate that the Kasatochi seismic swarm was a manifestation of the complex interaction of tectonic and magmatic processes in the Earth's crust. Although magmatic intrusion triggered the earthquakes in the swarm, the earthquakes failed in the context of the regional stress field. Copyright © 2011 by the American Geophysical Union.

  5. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more flexible distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To test the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
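
    The three-parameter generalized (exponentiated) exponential distribution has CDF F(x) = (1 - exp(-(x - mu)/sigma))^alpha for x > mu, which makes the conditional probability of the next event easy to evaluate once the parameters are fit. The sketch below shows a maximum-likelihood fit on purely hypothetical inter-event times; it is not the catalogue or the estimator settings of the study.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, x):
            # Negative log-likelihood of the generalized (exponentiated) exponential
            # density f(x) = (alpha/sigma) * (1 - exp(-z))**(alpha - 1) * exp(-z),
            # with z = (x - mu)/sigma, defined for x > mu.
            mu, sigma, alpha = params
            if sigma <= 0 or alpha <= 0 or np.any(x <= mu):
                return np.inf
            z = (x - mu) / sigma
            return -np.sum(np.log(alpha / sigma) + (alpha - 1) * np.log1p(-np.exp(-z)) - z)

        # Hypothetical inter-event times in years (placeholders, not the catalogue).
        times = np.array([4.2, 6.8, 9.1, 3.5, 12.0, 7.7, 5.3, 10.4, 8.8, 6.1])
        fit = minimize(neg_log_lik, x0=[0.0, 5.0, 1.5], args=(times,), method="Nelder-Mead")
        mu, sigma, alpha = fit.x

        # Conditional probability of an event within the next t years, given that
        # te years have already elapsed since the last one.
        F = lambda x: (1.0 - np.exp(-(x - mu) / sigma)) ** alpha
        te, t = 17.0, 10.0
        print((F(te + t) - F(te)) / (1.0 - F(te)))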

  6. Source parameters of microearthquakes on an interplate asperity off Kamaishi, NE Japan over two earthquake cycles

    USGS Publications Warehouse

    Uchida, Naoki; Matsuzawa, Toru; Ellsworth, William L.; Imanishi, Kazutoshi; Shimamura, Kouhei; Hasegawa, Akira

    2012-01-01

    We have estimated the source parameters of interplate earthquakes in an earthquake cluster off Kamaishi, NE Japan over two cycles of M~4.9 repeating earthquakes. The M~4.9 earthquake sequence is composed of nine events that have occurred since 1957, which show a strong periodicity (5.5 ± 0.7 yr) and constant size (M4.9 ± 0.2), probably due to stable sliding around the source area (asperity). Using P- and S-wave traveltime differentials estimated from waveform cross-spectra, three M~4.9 main shocks and 50 accompanying microearthquakes (M1.5–3.6) from 1995 to 2008 were precisely relocated. The source sizes, stress drops and slip amounts for earthquakes of M2.4 or larger were also estimated from corner frequencies and seismic moments using simultaneous inversion of stacked spectral ratios. Relocation using the double-difference method shows that the slip area of the 2008 M~4.9 main shock is co-located with those of the 1995 and 2001 M~4.9 main shocks. Four groups of microearthquake clusters are located in and around the mainshock slip areas. Of these, two clusters are located at the deeper and shallower edges of the slip areas, and most of these microearthquakes occurred repeatedly in the interseismic period. Two other clusters located near the centre of the mainshock source areas are not as active as the clusters near the edges. The occurrence of these earthquakes is limited to the latter half of the earthquake cycles of the M~4.9 main shocks. Similar spatial and temporal features of microearthquake occurrence were seen for two other cycles before the 1995 M5.0 and 1990 M5.0 main shocks, based on group identification by waveform similarities. Stress drops of microearthquakes are 3–11 MPa and are relatively constant within each group during the two earthquake cycles. The 2001 and 2008 M~4.9 earthquakes have larger stress drops of 41 and 27 MPa, respectively. These results show that the stress drop is probably determined by the fault properties and does not change
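
    Stress drops such as those quoted above are commonly derived from seismic moment and corner frequency through the Brune (1970) circular-crack relations. The sketch below uses those standard relations with illustrative input values; it is not the spectral-ratio inversion applied in the study, and the shear-wave speed and event values are assumptions.

        import numpy as np

        def brune_source(m0, fc, beta=3900.0, k=0.372):
            # Brune (1970) circular-crack relations:
            #   source radius r = k * beta / fc   (k ~ 2.34 / (2*pi))
            #   stress drop   = (7/16) * M0 / r**3
            # m0 in N*m, fc in Hz, beta (shear-wave speed) in m/s.
            r = k * beta / fc
            stress_drop = (7.0 / 16.0) * m0 / r ** 3
            return r, stress_drop / 1.0e6          # radius in m, stress drop in MPa

        # Illustrative values of the order of an M~4.9 event (assumed, not measured).
        print(brune_source(m0=2.8e16, fc=1.5))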

  7. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest-neighbor searching in a large database of observations can lead to reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases in order to reduce the processing time of nearest-neighbor searches for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward and the results will reduce the overall time of warning delivery for EEW.
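
    A KD tree trades a one-time build cost for fast nearest-neighbor queries at prediction time. The sketch below uses SciPy's cKDTree on a synthetic feature database; the feature dimensionality, the number of neighbors and the stored target (log10 PGA) are placeholders rather than the Gutenberg Algorithm's actual configuration.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical database: each row is a feature vector (e.g. filter-bank
        # amplitudes) of a past record; y holds the associated log10(PGA).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100000, 9))
        y = rng.normal(size=100000)

        tree = cKDTree(X)                       # built once, offline

        def predict_log_pga(features, k=30):
            # Average the stored log10(PGA) of the k nearest feature vectors.
            _, idx = tree.query(features, k=k)
            return y[idx].mean()

        print(predict_log_pga(rng.normal(size=9)))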

  8. Earthquake source parameters along the Hellenic subduction zone and numerical simulations of historical tsunamis in the Eastern Mediterranean

    NASA Astrophysics Data System (ADS)

    Yolsal-Çevikbilen, Seda; Taymaz, Tuncay

    2012-04-01

    We studied the source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone by using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated, and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of tsunami waves, their propagation and the effects of coastal topography. The analogy of current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid at global or local scales, were used to assign plausible source parameters, which constitute the initial and boundary conditions in the simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone can be classified into three major categories: [1] focal mechanisms of earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] focal mechanisms of earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike-slip components were observed at the eastern part of the Hellenic subduction zone, and we suggest that they are probably associated with the overriding Aegean plate. However, earthquakes involved in the convergence between the Aegean and Eastern Mediterranean lithospheres indicated thrust faulting mechanisms with strike-slip components, and they had shallow focal depths (h < 45 km). Deeper earthquakes mainly occurred in the subducting African plate, and they exhibited predominantly strike-slip faulting mechanisms. Slip distributions on fault planes showed both complex and simple rupture propagation with respect to the variation of source mechanism and faulting geometry. We calculated low stress drop

  9. Characterization of the Virginia earthquake effects and source parameters from website traffic analysis

    NASA Astrophysics Data System (ADS)

    Bossu, R.; Lefebvre, S.; Mazet-Roux, G.; Roussel, F.

    2012-12-01

    of inhabitants than localities having experienced weak ground motion. In other words, we observe a higher proportion of visitors from localities where the earthquake was widely felt than from localities where it was scarcely felt. This opens the way to automatically mapping the relative level of shaking within minutes of an earthquake's occurrence. In conclusion, the study of the Virginia earthquake shows that eyewitnesses' visits to our website follow the arrival of the P waves at their location. This further demonstrates the public's real-time desire for information after felt earthquakes, a factor which should be integrated into the design of earthquake information services. It also reveals additional capabilities of the flashsourcing method. Earthquakes felt at large distances, i.e. where the propagation time to the most distant eyewitnesses exceeds a couple of minutes, can be located and their magnitude estimated in a time frame comparable to that of automatic seismic locations by real-time seismic networks. The method also provides a very rapid indication of the effects of the earthquakes, by mapping the felt area, detecting the localities affected by network disruption and mapping the relative level of shaking. Such information is essential to improve situational awareness, constrain real-time scenarios and, in turn, contribute to improved earthquake response.

  10. 3-D Simulation of Earthquakes on the Cascadia Megathrust: Key Parameters and Constraints from Offshore Structure and Seismicity

    NASA Astrophysics Data System (ADS)

    Wirth, E. A.; Frankel, A. D.; Vidale, J. E.; Stone, I.; Nasser, M.; Stephenson, W. J.

    2017-12-01

    The Cascadia subduction zone has a long history of M8 to M9 earthquakes, inferred from coastal subsidence, tsunami records, and submarine landslides. These megathrust earthquakes occur mostly offshore, and an improved characterization of the megathrust is critical for accurate seismic hazard assessment in the Pacific Northwest. We run numerical simulations of 50 magnitude 9 earthquake rupture scenarios on the Cascadia megathrust, using a 3-D velocity model based on geologic constraints and regional seismicity, as well as active and passive source seismic studies. We identify key parameters that control the intensity of ground shaking and resulting seismic hazard. Variations in the down-dip limit of rupture (e.g., extending rupture to the top of the non-volcanic tremor zone, compared to a completely offshore rupture) result in a 2-3x difference in peak ground acceleration (PGA) for the inland city of Seattle, Washington. Comparisons of our simulations to paleoseismic data suggest that rupture extending to the 1 cm/yr locking contour (i.e., mostly offshore) provides the best fit to estimates of coastal subsidence during previous Cascadia earthquakes, but further constraints on the down-dip limit from microseismicity, offshore geodetics, and paleoseismic evidence are needed. Similarly, our simulations demonstrate that coastal communities experience a four-fold increase in PGA depending upon their proximity to strong-motion-generating areas (i.e., high strength asperities) on the deeper portions of the megathrust. An improved understanding of the structure and rheology of the plate interface and accretionary wedge, and better detection of offshore seismicity, may allow us to forecast locations of these asperities during a future Cascadia earthquake. In addition to these parameters, the seismic velocity and attenuation structure offshore also strongly affects the resulting ground shaking. This work outlines the range of plausible ground motions from an M9 Cascadia

  11. Landscape scale prediction of earthquake-induced landsliding based on seismological and geomorphological parameters.

    NASA Astrophysics Data System (ADS)

    Marc, O.; Hovius, N.; Meunier, P.; Rault, C.

    2017-12-01

    In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale, analytical predictions of bulk coseismic landsliding, that is, the total landslide area and volume (Marc et al., 2016a) as well as the regional area within which most landslides must be distributed (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and therefore could be implemented in landscape evolution models aiming to engage with erosion dynamics at the scale of the seismic cycle. To assess the model we have compiled and normalized estimates of total landslide volume, total landslide area and regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict the landslide areas and associated volumes within a factor of 2 for about 70% of the cases in our databases. The prediction of the regional area affected does not require a calibration for landscape steepness and is within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock mass strength in the epicentral area, or to shaking duration and other seismic source complexities ignored by the model. Applications include predictions of the mass balance of earthquakes and

  12. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, W. K.; Earthquake Seismology

    2010-12-01

    In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for Northeastern India, using digital data from ten local earthquakes recorded between April 2001 and November 2002. Earthquakes with magnitudes ranging from 3.8 to 4.9 have been taken into account. The time-domain coda decay method of a single back-scattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters such as seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo) and strain drop are estimated from the displacement amplitude spectrum of body waves using Brune's model. Qc has been estimated at six central frequencies: 1.5 Hz, 3.0 Hz, 6.0 Hz, 9.0 Hz, 12.0 Hz, and 18.0 Hz. In the present work, the Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters and tectonic activity of the region. Based on criteria of homogeneity in the geological characteristics and the constraints imposed by the distribution of available events, the study region has been classified into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Q0 (f/f0)^n, where Q0 is the quality factor at the reference frequency f0 (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc reveal a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. The average frequency-dependent relationship for Qc obtained for Northeastern India is Qc = 198 f^1.035, while this relationship varies from region to region, for example Tibetan Plateau Zone (TPZ): Qc = 226 f^1.11, Bengal Alluvium
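
    The Q0 and n of the power law Qc = Q0 (f/f0)^n are usually recovered by a straight-line fit in log-log space. The sketch below illustrates that step; only the end-member Qc values (292.9 at 1.5 Hz and 4880.1 at 18 Hz) come from the abstract, and the intermediate values are invented placeholders.

        import numpy as np

        # Central frequencies (Hz) and mean Qc values; only the first and last Qc
        # are taken from the abstract, the intermediate values are placeholders.
        freq = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])
        qc = np.array([292.9, 610.0, 1270.0, 1930.0, 2600.0, 4880.1])

        # Straight-line fit in log-log space: log10(Qc) = log10(Q0) + n*log10(f).
        n, log_q0 = np.polyfit(np.log10(freq), np.log10(qc), 1)
        print("Qc = %.0f f^%.2f" % (10 ** log_q0, n))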

  13. Empirical Scaling Relations of Source Parameters For The Earthquake Swarm 2000 At Novy Kostel (vogtland/nw-bohemia)

    NASA Astrophysics Data System (ADS)

    Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.

    Investigations of the interdependence of different source parameters are an important task for gaining more insight into the mechanics and dynamics of earthquake rupture, for modelling source processes and for making predictions of ground motion at the surface. The interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also observations in the near field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine from the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters and statistical interpretation of the results will be presented by way of example. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.

  14. Influence of Earthquake Parameters on Tsunami Wave Height and Inundation

    NASA Astrophysics Data System (ADS)

    Kulangara Madham Subrahmanian, D.; Sri Ganesh, J.; Venkata Ramana Murthy, M.; V, R. M.

    2014-12-01

    After the Indian Ocean Tsunami (IOT) of 26 December 2004, attempts are being made to assess the threat of tsunamis originating from different sources for different parts of India. The Andaman-Sumatra trench is segmented by transcurrent faults and by differences in the rate of subduction, which is low in the north and increases southward. Therefore, a keyboard model with initial deformation calculated using different strike directions and slip rates is used. This results in uncertainties in the earthquake parameters. This study is made to identify the location of origin of the most destructive tsunami for the southeast coast of India and to infer the influence of the earthquake parameters on tsunami wave height and travel time in the deep ocean as well as on the shelf, and on inundation at the coast. Five tsunamigenic sources were considered in the Andaman-Sumatra trench, taking into consideration the tectonic character of the trench described by various authors, and the modeling was carried out using the TUNAMI N2 code. The model results were validated using the travel time and runup in the coastal areas and by comparing the water elevation along Jason-1's satellite track. The inundation results are compared with field data. The assessment of the tsunami threat for the area south of Chennai, the metropolitan city of South India, shows that a tsunami originating in the Car Nicobar segment of the Andaman-Sumatra subduction zone can generate the most destructive tsunami. Sensitivity analysis in the modelling indicates that fault length influences the results most significantly, with the tsunami arriving earlier and with higher amplitude. Strike angle also modifies the tsunami, followed by the amount of slip.

  15. New insights on active fault geometries in the Mentawai region of Sumatra, Indonesia, from broadband waveform modeling of earthquake source parameters

    NASA Astrophysics Data System (ADS)

    WANG, X.; Wei, S.; Bradley, K. E.

    2017-12-01

    Global earthquake catalogs provide important first-order constraints on the geometries of active faults. However, the accuracies of both locations and focal mechanisms in these catalogs are typically insufficient to resolve detailed fault geometries. This issue is particularly critical in subduction zones, where most great earthquakes occur. The Slab 1.0 model (Hayes et al. 2012), which was derived from global earthquake catalogs, has smooth fault geometries and cannot adequately address local structural complexities that are critical for understanding earthquake rupture patterns, coseismic slip distributions, and geodetically monitored interseismic coupling. In this study, we conduct careful relocation and waveform modeling of earthquake source parameters to reveal fault geometries in greater detail. We take advantage of global data and conduct broadband waveform modeling for medium-sized earthquakes (M>4.5) to refine their source parameters, which include locations and fault plane solutions. The refined source parameters can greatly improve the imaging of fault geometry (e.g., Wang et al., 2017). We apply these approaches to earthquakes recorded since 1990 in the Mentawai region offshore of central Sumatra. Our results indicate that the uncertainties of the horizontal location, depth and dip angle estimates are as small as 5 km, 2 km and 5 degrees, respectively. The refined catalog shows that the 2005 and 2009 "back-thrust" sequences in the Mentawai region actually occurred on a steeply landward-dipping fault, contradicting previous studies that inferred a seaward-dipping backthrust. We interpret these earthquakes as 'unsticking' of the Sumatran accretionary wedge along a backstop fault that separates accreted material of the wedge from the strong Sunda lithosphere, or as reactivation of an old normal fault buried beneath the forearc basin. We also find that the seismicity on the Sunda megathrust deviates in location from Slab 1.0 by up to 7 km, with along strike

  16. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  17. The source parameters of 2013 Mw6.6 Lushan earthquake constrained with the restored local clipped seismic waveforms

    NASA Astrophysics Data System (ADS)

    Hao, J.; Zhang, J. H.; Yao, Z. X.

    2017-12-01

    We developed a method to restore clipped seismic waveforms near the epicenter using the projection onto convex sets method (Zhang et al., 2016). This method was applied to recover the locally clipped waveforms of the 2013 Mw 6.6 Lushan earthquake. We restored 88 out of 93 clipped waveforms from 38 broadband seismic stations of the China Earthquake Networks (CEN). The epicentral distance of the nearest station whose records we can faithfully restore is only about 32 km. In order to investigate whether the source parameters of an earthquake can be determined accurately with the restored data, the restored waveforms are used to obtain the mechanism of the Lushan earthquake. We apply the generalized reflection-transmission coefficient matrix method to calculate the synthetic seismic records and use the simulated annealing method in the inversion (Yao and Harkrider, 1983; Hao et al., 2012). We select five CEN stations with epicentral distances of about 200 km whose records are not clipped, and their three-component velocity records are used. The result shows that the strike, dip and rake angles of the Lushan earthquake are 200°, 51° and 87°, respectively, hereinafter the "standard result". Then the clipped and restored seismic waveforms are applied, respectively. The strike, dip and rake angles from the clipped seismic waveforms are 184°, 53° and 72°, respectively. The largest angular misfit is 16°. In contrast, the strike, dip and rake angles from the restored seismic waveforms are 198°, 51° and 87°, respectively, which is very close to the "standard result". We also study the rupture history of the Lushan earthquake constrained with the restored local broadband and teleseismic waves based on a finite fault method (Hao et al., 2013). The result is consistent with that constrained with the strong motion and teleseismic waves (Hao et al., 2013), especially the location of the patch with larger slip. In real-time seismology, determining the source parameters as soon as possible is important. This method will help us to determine the mechanism of an earthquake

  18. Determination of earthquake source parameters from waveform data for studies of global and regional seismicity

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Chou, T.-A.; Woodhouse, J. H.

    1981-04-01

    It is possible to use waveform data not only to derive the source mechanism of an earthquake but also to establish the hypocentral coordinates of the 'best point source' (the centroid of the stress glut density) at a given frequency. Thus two classical problems of seismology are combined into a single procedure. Given an estimate of the origin time, epicentral coordinates and depth, an initial moment tensor is derived using one of the variations of the method described in detail by Gilbert and Dziewonski (1975). This set of parameters represents the starting values for an iterative procedure in which perturbations to the elements of the moment tensor are found simultaneously with changes in the hypocentral parameters. In general, the method is stable and convergence is rapid. Although the approach is a general one, we present it here in the context of the analysis of long-period body wave data recorded by the instruments of the SRO and ASRO digital network. It appears that the upper magnitude limit of earthquakes that can be processed using this particular approach is between 7.5 and 8.0; the lower limit is, at this time, approximately 5.5, but it could be extended by broadening the passband of the analysis to include energy with periods shorter than 45 s. As there are hundreds of earthquakes each year with magnitudes exceeding 5.5, the seismic source mechanism can now be studied in detail not only for major events but also, for example, for aftershock series. We have investigated the foreshock and several aftershocks of the Sumba earthquake of August 19, 1977; the results show temporal variation of the stress regime in the fault area of the main shock. An area some 150 km to the northwest of the epicenter of the main event became seismically active 49 days later. The sense of the strike-slip mechanism of these events is consistent with the relaxation of the compressive stress in the plate north of the Java trench. Another geophysically interesting result of our

  19. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes the development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movement of faults throughout the state. These values are intended to provide a starting point for development of more sophisticated deformation models which include known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movement of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps, and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports electronic access (e.g., for on-the-fly access). A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database, as well as any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards), are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing the fault parameters, with extensive input from USGS and SCEC scientists.

  20. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    USGS Publications Warehouse

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure on the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  1. Estimation of Source Parameters of Historical Major Earthquakes from 1900 to 1970 around Asia and Analysis of Their Uncertainties

    NASA Astrophysics Data System (ADS)

    Han, J.; Zhou, S.

    2017-12-01

    Asia, located in the conjoined areas of the Eurasian, Pacific and Indo-Australian plates, is the continent with the highest seismicity. An earthquake catalogue based on modern seismic network recordings has been available in Asia only since around 1970, and the catalogue before 1970 is much less accurate because of the small number of stations. With a history of less than 50 years of modern earthquake catalogues, research in seismology is quite limited. With the appearance of improved Earth velocity structure models, modified locating methods and high-accuracy Optical Character Recognition techniques, travel-time data of earthquakes from 1900 to 1970 can now be included in research and more accurate locations can be determined for historical earthquakes. Hence, parameters of these historical earthquakes can be obtained more precisely and research methods such as the ETAS model can be used on a much longer time scale. This work focuses on the following three aspects: (1) Relocating more than 300 historical major earthquakes (M≥7.0) in Asia based on the Shide Circulars, International Seismological Summary and EHB Bulletin instrumental records between 1900 and 1970. (2) Calculating the focal mechanisms of more than 50 events from first-motion records of P waves in the ISS. (3) Inferring focal mechanisms of historical major earthquakes based on geological data, the tectonic stress field and the relocation results.

  2. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  3. Land-Ocean-Atmospheric Coupling Associated with Earthquakes

    NASA Astrophysics Data System (ADS)

    Prasad, A. K.; Singh, R. P.; Kumar, S.; Cervone, G.; Kafatos, M.; Zlotnicki, J.

    2007-12-01

    Earthquakes are well known to occur along plate boundaries and also on stable shields. Recent studies have shown the existence of strong coupling between land, ocean and atmospheric parameters associated with earthquakes. We have carried out a detailed analysis of multi-sensor data (optical and microwave remote sensing) to show the existence of strong coupling between land, ocean and atmospheric parameters associated with earthquakes with focal depths up to 30 km and magnitudes greater than 5.5. The complementary nature of various land, ocean and atmospheric parameters in providing early warning information about an impending earthquake will be demonstrated.

  4. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially needed to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and the kurtosis increase for smaller magnitude thresholds and increase even more strongly for small temporal subdivisions of the catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
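
    A minimal way to see the overdispersion argument is to fit both models to a set of interval counts by the method of moments and compare the implied skewness with the empirical value. The yearly counts below are invented placeholders, not values from either catalogue.

        import numpy as np
        from scipy import stats

        # Invented yearly earthquake counts for illustration only.
        counts = np.array([58, 73, 41, 96, 62, 55, 110, 47, 68, 83, 39, 71])
        mean, var = counts.mean(), counts.var(ddof=1)

        # Poisson has a single parameter (lambda = mean) and implies var == mean;
        # the negative binomial absorbs overdispersion (var > mean).
        lam = mean
        p = mean / var                      # NBD by the method of moments
        n = mean * p / (1.0 - p)

        print("observed skewness:", stats.skew(counts))
        print("Poisson skewness: ", 1.0 / np.sqrt(lam))
        print("NBD skewness:     ", float(stats.nbinom(n, p).stats(moments="s")))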

  5. Source parameters of the 2014 Ms6.5 Ludian earthquake sequence and their implications on the seismogenic structure

    NASA Astrophysics Data System (ADS)

    Zheng, Y.

    2015-12-01

    On August 3, 2014, an Ms6.5 earthquake struck Ludian county, Zhaotong city, in Yunnan province, China. Although this earthquake was not very big, it caused abnormally severe damage. Thus, studying the causes of the serious damage from this moderately strong earthquake may help us to evaluate seismic hazards for similar earthquakes. Besides the factors which directly relate to the damage, such as site effects and the quality of buildings, the seismogenic structures and the characteristics of the mainshock and the aftershocks may also be responsible for the seismic hazards. Since the focal mechanism solution (FMS) and centroid depth provide key information on earthquake source properties and the tectonic stress field, and the focal depth is one of the most important parameters controlling the damage of earthquakes, obtaining precise FMSs and focal depths of the Ludian earthquake sequence may help us to determine the detailed geometric features of the rupture fault and the seismogenic environment. In this work we obtained the FMSs and centroid depths of the Ludian earthquake and its Ms>3.0 aftershocks by the revised CAP method, and further verified some focal depths using the depth-phase method. Combining the FMSs of the mainshock and the strong aftershocks, their spatial distributions, and the seismogenic environment of the source region, we infer the following characteristics of the Ludian earthquake sequence and its seismogenic structure: (1) The Ludian earthquake is a left-lateral strike-slip earthquake, with a magnitude of about Mw6.1. The FMS of nodal plane I is 75°/56°/180° for strike, dip and rake angles, and 165°/90°/34° for the other nodal plane. (2) The Ludian earthquake is very shallow, with an optimum centroid depth of ~3 km, which is consistent with the strong ground shaking and the surface rupture observed by field survey and strengthens the damage of the Ludian earthquake. (3) The Ludian earthquake should have occurred on the NNW-trending BXF. Because two later aftershocks

  6. Simulation of strong ground motion parameters of the 1 June 2013 Gulf of Suez earthquake, Egypt

    NASA Astrophysics Data System (ADS)

    Toni, Mostafa

    2017-06-01

    This article aims to simulate the ground motion parameters of the moderate-magnitude (ML 5.1) June 1, 2013 Gulf of Suez earthquake, which represents the largest instrumental earthquake recorded in the middle part of the Gulf of Suez up to now. This event was felt in all cities located on both sides of the Gulf of Suez, with minor damage to property near the epicenter; however, no casualties were observed. The stochastic technique with a site-dependent spectral model is used to simulate the strong ground motion parameters of this earthquake in the cities located on the western side of the Gulf of Suez and the northern Red Sea, namely Suez, Ain Sokhna, Zafarana, Ras Gharib, and Hurghada. The presence of many tourist resorts and the increase in land use planning in the considered cities represent the motivation for the current study. The simulated parameters comprise the Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), and Peak Ground Displacement (PGD), in addition to the Pseudo Spectral Acceleration (PSA). The model developed for the ground motion simulation is validated using the recordings of three accelerographs installed around the epicenter of the investigated earthquake. Depending on the site effects determined in the investigated areas using geotechnical data (e.g., shear-wave velocities and microtremor recordings), the investigated areas are classified into two zones (A and B). Zone A is characterized by higher site amplification than Zone B. The ground motion parameters are simulated for each zone in the considered areas. The results reveal that the highest values of PGA, PGV, and PGD are observed at Ras Gharib city (epicentral distance ∼ 11 km) as 67 cm/s2, 2.53 cm/s, and 0.45 cm respectively for Zone A, and as 26.5 cm/s2, 1.0 cm/s, and 0.2 cm respectively for Zone B, while the lowest values of PGA, PGV, and PGD are observed at Suez city (epicentral distance ∼ 190 km) as 3.0 cm/s2, 0.2 cm/s, and 0.05 cm respectively for Zone A

  7. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of its parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are on a linear scale in the [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
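
    The transformation itself is simple to sketch: each parameter is replaced by its estimated cumulative distribution value, after which plain Euclidean distance applies. The example below uses a Gaussian kernel density estimate on an invented toy catalogue of times, magnitudes and depths; it only illustrates the idea, not the kernel settings of the paper.

        import numpy as np
        from scipy.stats import gaussian_kde

        def to_equivalent_dimension(values):
            # Replace a parameter by its kernel-estimated cumulative distribution,
            # so every transformed parameter lives on a common [0, 1] scale.
            kde = gaussian_kde(values)
            grid = np.linspace(values.min(), values.max(), 2000)
            cdf = np.cumsum(kde(grid))
            cdf /= cdf[-1]
            return np.interp(values, grid, cdf)

        # Toy catalogue: occurrence times (days), magnitudes and depths (km).
        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0.0, 365.0, 200))
        m = 1.0 + rng.exponential(0.5, 200)
        z = rng.uniform(2.0, 15.0, 200)

        ed = np.column_stack([to_equivalent_dimension(x) for x in (t, m, z)])

        # Euclidean distance between two events in the equivalent-dimension space.
        print(np.linalg.norm(ed[10] - ed[11]))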

  8. The Mw 5.8 Mineral, Virginia, earthquake of August 2011 and aftershock sequence: constraints on earthquake source parameters and fault geometry

    USGS Publications Warehouse

    McNamara, Daniel E.; Benz, H.M.; Herrmann, Robert B.; Bergman, Eric A.; Earle, Paul; Meltzer, Anne; Withers, Mitch; Chapman, Martin

    2014-01-01

    The Mw 5.8 earthquake of 23 August 2011 (17:51:04 UTC) (moment M0 = 5.7×10^17 N·m) occurred near Mineral, Virginia, within the central Virginia seismic zone and was felt by more people than any other earthquake in United States history. The U.S. Geological Survey (USGS) received 148,638 felt reports from 31 states and 4 Canadian provinces. The USGS PAGER system estimates as many as 120,000 people were exposed to shaking intensity levels of IV and greater, with approximately 10,000 exposed to shaking as high as intensity VIII. Both regional and teleseismic moment tensor solutions characterize the earthquake as a northeast-striking reverse fault that nucleated at a depth of approximately 7±2 km. The distribution of reported macroseismic intensities is roughly ten times the area of a similarly sized earthquake in the western United States (Horton and Williams, 2012). Near-source and far-field damage reports, which extend as far away as Washington, D.C., (135 km away) and Baltimore, Maryland, (200 km away) are consistent with an earthquake of this size and depth in the eastern United States (EUS). Within the first few days following the earthquake, several government and academic institutions installed 36 portable seismograph stations in the epicentral region, making this among the best-recorded aftershock sequences in the EUS. Based on modeling of these data, we provide a detailed description of the source parameters of the mainshock and analysis of the subsequent aftershock sequence for defining the fault geometry, area of rupture, and observations of the aftershock sequence magnitude-frequency and temporal distribution. The observed slope of the magnitude-frequency curve or b-value for the aftershock sequence is consistent with previous EUS studies (b=0.75), suggesting that most of the accumulated strain was released by the mainshock. The aftershocks define a rupture that extends between approximately 2–8 km in depth and 8–10 km along
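
    The temporal distribution of an aftershock sequence like this one is typically summarized with the modified Omori law, n(t) = K/(t + c)^p. The sketch below fits that relation to invented daily aftershock counts; it is illustrative only and does not use the Mineral sequence data.

        import numpy as np
        from scipy.optimize import curve_fit

        def omori(t, K, c, p):
            # Modified Omori law: aftershock rate n(t) = K / (t + c)**p.
            return K / (t + c) ** p

        # Invented daily aftershock counts for days 1-30 after a mainshock.
        days = np.arange(1, 31)
        counts = np.array([42, 25, 19, 14, 12, 10, 9, 8, 7, 7,
                           6, 6, 5, 5, 5, 4, 4, 4, 4, 3,
                           3, 3, 3, 3, 3, 2, 2, 2, 2, 2])

        (K, c, p), _ = curve_fit(omori, days, counts, p0=[50.0, 0.5, 1.0],
                                 bounds=(0.0, np.inf))
        print("K = %.1f, c = %.2f, p = %.2f" % (K, c, p))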

  9. Probabilistic Appraisal of Earthquake Hazard Parameters Deduced from a Bayesian Approach in the Northwest Frontier of the Himalayas

    NASA Astrophysics Data System (ADS)

    Yadav, R. B. S.; Tsapanos, T. M.; Bayrak, Yusuf; Koravos, G. Ch.

    2013-03-01

    A straightforward Bayesian statistic is applied in five broad seismogenic source zones of the northwest frontier of the Himalayas to estimate the earthquake hazard parameters (maximum regional magnitude Mmax, β value of the G-R relationship and seismic activity rate or intensity λ). For this purpose, a reliable earthquake catalogue which is homogeneous for MW ≥ 5.0 and complete for the period 1900 to 2010 is compiled. The Hindukush-Pamir Himalaya zone has been further divided into two seismic zones of shallow (h ≤ 70 km) and intermediate depth (h > 70 km) according to the variation of seismicity with depth in the subduction zone. The earthquake hazard parameters estimated by the Bayesian approach are more stable and reliable, with lower standard deviations, than those from other approaches, but the technique is more time consuming. In this study, quantiles of functions of distributions of true and apparent magnitudes for future time intervals of 5, 10, 20, 50 and 100 years are calculated with confidence limits for probability levels of 50, 70 and 90% in all seismogenic source zones. The zones with estimated Mmax greater than 8.0 are related to the Sulaiman-Kirthar ranges, the Hindukush-Pamir Himalaya and the Himalayan Frontal Thrusts belt, suggesting that these are the more seismically hazardous regions in the examined area. The lowest value of Mmax (6.44) has been calculated for the Northern-Pakistan and Hazara syntaxis zone, which also has the lowest estimated activity rate (0.0023 events/day) compared to the other zones. The Himalayan Frontal Thrusts belt exhibits a higher earthquake magnitude (8.01) in the next 100 years at the 90% probability level compared to other zones, which reveals that this zone is more vulnerable to the occurrence of a great earthquake. The results obtained in this study are directly useful for probabilistic seismic hazard assessment in the examined region of the Himalaya.
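
    Quantiles of the largest magnitude expected in a future interval follow directly from the activity rate and the magnitude distribution: under Poisson occurrence, P(max magnitude in T years ≤ m) = exp(-λT(1 - F(m))). The sketch below evaluates such a quantile for a doubly truncated Gutenberg-Richter model with roughly representative but assumed parameter values; it is not the Bayesian procedure of the study.

        import numpy as np
        from scipy.optimize import brentq

        def quantile_of_max_magnitude(q, t_years, lam, beta, m_min, m_max):
            # Quantile m_q with P(largest magnitude in t_years <= m_q) = q, assuming
            # Poisson occurrence (rate lam above m_min) and a doubly truncated
            # Gutenberg-Richter (exponential) magnitude distribution.
            def cdf(m):
                return (1.0 - np.exp(-beta * (m - m_min))) / (1.0 - np.exp(-beta * (m_max - m_min)))
            def prob_max_le(m):
                return np.exp(-lam * t_years * (1.0 - cdf(m)))
            return brentq(lambda m: prob_max_le(m) - q, m_min + 1e-6, m_max - 1e-6)

        # Roughly representative (assumed) values: beta, m_min and m_max are guesses.
        lam = 0.0023 * 365.0      # events/year, from the rate quoted in the abstract
        print(quantile_of_max_magnitude(q=0.90, t_years=100.0, lam=lam,
                                        beta=2.0, m_min=5.0, m_max=8.3))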

  10. Disruption of thyroxine and sex hormones by 1,2-dibromo-4-(1,2-dibromoethyl)cyclohexane (DBE-DBCH) in American kestrels (Falco sparverius) and associations with reproductive and behavioral changes.

    PubMed

    Marteinson, Sarah C; Palace, Vince; Letcher, Robert J; Fernie, Kim J

    2017-04-01

    1,2-dibromo-4-(1,2-dibromoethyl)cyclohexane (DBE-DBCH, formerly TBECH) is an emerging brominated flame retardant (BFR) pollutant with androgen-potentiating ability and other endocrine disrupting effects in birds and fish. The objectives of this study were to determine the effects of exposure to environmentally relevant levels of DBE-DBCH on circulating levels of thyroid and sex steroid hormones in American kestrels, and whether hormonal concentrations were related to previously reported changes in reproductive success and courtship behaviors. Sixteen kestrel pairs were exposed to 0.239 ng β-DBE-DBCH/g kestrel/day by diet, based on concentrations in wild bird eggs, from 4 weeks before pairing until the chicks hatched (mean 82 d), and were compared with vehicle-only-exposed control pairs (n=15). As previously reported, DBE-DBCH concentrations were not detected in tissue or eggs of these birds, nor were any potential metabolites, despite the low method limits of detection (≤0.4 ng/g wet weight), suggesting it may be rapidly metabolized and/or eliminated by the kestrels. Nevertheless, exposed kestrels demonstrated changes in reproduction and behavior, indicating an effect from exposure. During early breeding, males were sampled at multiple time points at pairing and during courtship and incubation; females were blood sampled at pairing only; both sexes were sampled at the end of the season. All comparisons are made to control males or control females, and the relative differences in hormone concentrations between treatment and control birds, calculated separately for each sex, are presented for each time point. Males exposed to β-DBE-DBCH demonstrated significantly (p=0.05) lower concentrations of total thyroxine (TT4) overall, which were 11-28% lower than those of control males at the individual sampling points, yet significantly higher (p=0.03) concentrations of free thyroxine (FT4), which were 5-13% higher than those of control males at the individual sampling

  11. Prediction of the area affected by earthquake-induced landsliding based on seismological parameters

    NASA Astrophysics Data System (ADS)

    Marc, Odin; Meunier, Patrick; Hovius, Niels

    2017-07-01

    We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

  12. Applicability of source scaling relations for crustal earthquakes to estimation of the ground motions of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Irikura, Kojiro; Miyakoshi, Ken; Kamae, Katsuhiro; Yoshida, Kunikazu; Somei, Kazuhiro; Kurahashi, Susumu; Miyake, Hiroe

    2017-01-01

    A two-stage scaling relationship of the source parameters for crustal earthquakes in Japan has previously been constructed, in which source parameters obtained from the results of waveform inversion of strong motion data are combined with parameters estimated based on geological and geomorphological surveys. A three-stage scaling relationship was subsequently developed to extend scaling to crustal earthquakes with magnitudes greater than Mw 7.4. The effectiveness of these scaling relationships was then examined based on the results of waveform inversion of 18 recent crustal earthquakes (Mw 5.4-6.9) that occurred in Japan since the 1995 Hyogo-ken Nanbu earthquake. The 2016 Kumamoto earthquake, with Mw 7.0, was one of the largest earthquakes to occur since dense and accurate strong motion observation networks, such as K-NET and KiK-net, were deployed after the 1995 Hyogo-ken Nanbu earthquake. We examined the applicability of the scaling relationships of the source parameters of crustal earthquakes in Japan to the 2016 Kumamoto earthquake. The rupture area and asperity area were determined based on slip distributions obtained from waveform inversion of the 2016 Kumamoto earthquake observations. We found that the relationship between the rupture area and the seismic moment for the 2016 Kumamoto earthquake follows the second-stage scaling within one standard deviation (σ = 0.14). The ratio of the asperity area to the rupture area for the 2016 Kumamoto earthquake is nearly the same as ratios previously obtained for crustal earthquakes. Furthermore, we simulated the ground motions of this earthquake using a characterized source model consisting of strong motion generation areas (SMGAs) based on the empirical Green's function (EGF) method. The locations and areas of the SMGAs were determined through comparison between the synthetic ground motions and observed motions. The sizes of the SMGAs were nearly coincident with the asperities with large slip. The synthetic

  13. Source Parameter Inversion for Recent Great Earthquakes from a Decade-long Observation of Global Gravity Fields

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Riva, Riccardo; Sauber, Jeanne; Okal, Emile

    2013-01-01

    We quantify gravity changes after great earthquakes present within the 10 year long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2 × 10^23 N·m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7 × 10^22 N·m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1 × 10^22 N·m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9 × 10^22 N·m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake with M0 ≈ 5.0 × 10^21 N·m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.

  14. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
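
    A minimal sketch of the idea above: fit an NBD to per-window earthquake counts by the method of moments and compare its tail with a Poisson model of the same mean. The counts are hypothetical, and the paper's full treatment (parameter uncertainties, dependence on the magnitude threshold) is not attempted.

    ```python
    # Method-of-moments NBD fit to hypothetical per-window earthquake counts,
    # compared with a Poisson model of the same mean.
    import numpy as np
    from scipy import stats

    counts = np.array([3, 0, 7, 2, 15, 1, 4, 0, 9, 2, 5, 1])   # events per window
    m, v = counts.mean(), counts.var(ddof=1)

    # NBD: mean = r(1-p)/p, variance = r(1-p)/p**2, so p = m/v, r = m**2/(v-m).
    # Overdispersion (v > m) is required; otherwise the Poisson model is adequate.
    p = m / v
    r = m * m / (v - m)

    nbd = stats.nbinom(r, p)
    poisson = stats.poisson(m)
    print(f"NBD: r = {r:.2f}, p = {p:.2f}")
    print("P(N >= 10):", 1 - nbd.cdf(9), "(NBD) vs", 1 - poisson.cdf(9), "(Poisson)")
    ```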

  15. Source parameters of a M4.8 and its accompanying repeating earthquakes off Kamaishi, NE Japan: Implications for the hierarchical structure of asperities and earthquake cycle

    USGS Publications Warehouse

    Uchida, N.; Matsuzawa, T.; Ellsworth, W.L.; Imanishi, K.; Okada, T.; Hasegawa, A.

    2007-01-01

    We determine the source parameters of an M4.9 ± 0.1 'characteristic earthquake' sequence and its accompanying microearthquakes at ~50 km depth on the subduction plate boundary offshore of Kamaishi, NE Japan. The microearthquakes tend to occur more frequently in the latter half of the recurrence intervals of the M4.9 ± 0.1 events. Our results show that the microearthquakes are repeating events and that they are located not only around but also within the slip area of the 2001 M4.8 event. From the hierarchical structure of slip areas and the smaller stress drops of the microearthquakes compared to the M4.8 event, we infer that the small repeating earthquakes rupture relatively weak patches in and around the slip area of the M4.8 event and that their activity reflects a stress concentration process and/or a change in frictional properties (healing) in that area. We also infer that the patches for the M4.9 ± 0.1 and other repeating earthquakes undergo aseismic slip during their interseismic period. Copyright 2007 by the American Geophysical Union.

  16. Fault parameter constraints using relocated earthquakes: A validation of first-motion focal-mechanism data

    USGS Publications Warehouse

    Kilb, Debi; Hardebeck, J.L.

    2006-01-01

    We estimate the strike and dip of three California fault segments (Calaveras, Sargent, and a portion of the San Andreas near San Juan Bautista) based on principal component analysis of accurately located microearthquakes. We compare these fault orientations with two different first-motion focal mechanism catalogs: the Northern California Earthquake Data Center (NCEDC) catalog, calculated using the FPFIT algorithm (Reasenberg and Oppenheimer, 1985), and a catalog created using the HASH algorithm that tests mechanism stability relative to seismic velocity model variations and earthquake location (Hardebeck and Shearer, 2002). We assume any disagreement (misfit >30° in strike, dip, or rake) indicates inaccurate focal mechanisms in the catalogs. With this assumption, we can quantify the parameters that identify the most optimally constrained focal mechanisms. For the NCEDC/FPFIT catalogs, we find that the best quantitative discriminator of quality focal mechanisms is the station distribution ratio (STDR) parameter, an indicator of how the stations are distributed about the focal sphere. Requiring STDR > 0.65 increases the acceptable mechanisms from 34%–37% to 63%–68%. This suggests stations should be distributed uniformly around, rather than aligned along, known fault traces. For the HASH catalogs, the fault plane uncertainty (FPU) parameter is the best discriminator, increasing the percent of acceptable mechanisms from 63%–78% to 81%–83% when FPU ≤ 35°. The overall higher percentage of acceptable mechanisms and the usefulness of the formal uncertainty in identifying quality mechanisms validate the HASH approach of testing for mechanism stability.

  17. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the

  18. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.

  19. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
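
    The sketch below illustrates the Monte Carlo idea described in the two records above under simplifying assumptions: candidate parameters of an assumed lognormal recurrence PDF are drawn at random, scored by their likelihood against a short, hypothetical series of inter-event times, and returned as a ranked list.

    ```python
    # Monte Carlo scan of lognormal recurrence parameters against a short,
    # hypothetical paleoseismic record (not the paper's California data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    intervals = np.array([132.0, 95.0, 210.0, 161.0, 78.0])   # yr, hypothetical

    n_draws = 20_000
    mean_draw = rng.uniform(50.0, 400.0, n_draws)   # candidate mean recurrence (yr)
    cov_draw = rng.uniform(0.2, 1.5, n_draws)       # candidate coefficient of variation

    # Lognormal parameterised by mean and COV: sigma^2 = ln(1 + COV^2), mu = ln(mean) - sigma^2/2
    sigma = np.sqrt(np.log1p(cov_draw ** 2))
    mu = np.log(mean_draw) - 0.5 * sigma ** 2

    # Log-likelihood of the observed intervals for every candidate parameter pair
    loglike = stats.lognorm.logpdf(intervals[None, :], s=sigma[:, None],
                                   scale=np.exp(mu)[:, None]).sum(axis=1)

    best = np.argsort(loglike)[::-1][:5]            # likelihood-ranked parameter draws
    for i in best:
        print(f"mean = {mean_draw[i]:6.1f} yr  COV = {cov_draw[i]:.2f}  logL = {loglike[i]:.2f}")
    ```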

  20. Fault Parameters of the 1868 and 1877 earthquakes, inferred from historical records: Run-up measurements, Isoseismals and coseismic deformation

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Ruiz, S.; Yamazaki, Y.; Campos, J.

    2012-04-01

    The megathrust zone of southern Peru and northern Chile is recognized as tsunamigenic. Large earthquakes have not occurred there in the last 130 years; the 1868 and 1877 events were the last earthquakes with ruptures longer than 400 km. The fault parameters and slip distributions of these earthquakes are not well understood, because only a few tide gauges recorded these events at far-field distances. We studied simultaneously the near-field effects, run-up, isoseismals, coseismic historical descriptions and far-field tide gauges in the Pacific Ocean. We define several rupture scenarios, which are numerically modeled using the NEOWAVE program to obtain the tsunami propagation and coseismic deformation. New coupling models are used to build the scenarios. These results are compared with historical near-field and far-field observations; our preferred scenario fits these records well and agrees with the proposed isoseismals. For the 1868 southern Peru earthquake, our preferred scenario has a seismic rupture starting at the southern end of the 2001 Camaná, Peru earthquake and extending from 16.8°S to 19.3°S through the Arica bend at 18°S, with a rupture length of 350-400 km, a maximum slip of 15 m and a seismic magnitude of Mw ~8.7-8.9. For the 1877 earthquake, our preferred scenario has a length of 400 km from 23°S to 19.3°S, a maximum slip of 25 m and a seismic magnitude of Mw ~8.8. In both earthquakes the dip (10°-20°) is controlled by the geometry of the subducting Nazca plate, and the largest slip is located in the shallow part of the contact, from the trench to 30 km depth. Finally, the strong slip in the shallow seismic contact for these historical mega-earthquakes could explain the apparent dual behavior between these mega-earthquakes of Mw > 8.5 and moderate-magnitude earthquakes of Mw ~ 8.0, which apparently have occurred only in the deeper zone of the contact, i.e., the earthquakes of 1967 Mw 6.7 and 2007 Mw 7.7 in Tocopilla

  1. GPS Technologies as a Tool to Detect the Pre-Earthquake Signals Associated with Strong Earthquakes

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Krankowski, A.; Hernandez-Pajares, M.; Liu, J. Y. G.; Hattori, K.; Davidenko, D.; Ouzounov, D.

    2015-12-01

    The existence of ionospheric anomalies before earthquakes is now widely accepted. These phenomena are being considered by the GPS community as a way to mitigate GPS signal degradation over regions of earthquake preparation. It remains an open question whether they could be useful for seismology and for short-term earthquake forecasting. More than a decade of intensive studies has shown that ionospheric anomalies registered before earthquakes are initiated by processes in the atmospheric boundary layer over the earthquake preparation zone and are induced in the ionosphere by electromagnetic coupling through the Global Electric Circuit. A multiparameter approach based on the Lithosphere-Atmosphere-Ionosphere Coupling model demonstrated that earthquake forecasting is possible only if the final stage of earthquake preparation is considered in a multidimensional space, in which every dimension is one of many precursors in an ensemble and all are synergistically connected. We demonstrate approaches developed in different countries (Russia, Taiwan, Japan, Spain, and Poland) within the framework of the ISSI and ESA projects to identify the ionospheric precursors. They are also useful for determining all three parameters necessary for an earthquake forecast: the epicenter position, expected time and magnitude of the impending earthquake. These parameters are calculated using different technologies of GPS signal processing: time series, correlation, spectral analysis, ionospheric tomography, wave propagation, etc. The results obtained by different teams demonstrate a high level of statistical significance and physical justification, which gives us reason to suggest these methodologies for practical validation.

  2. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, T.; Ide, S.

    2007-12-01

    Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
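
    A minimal sketch of the recursive predominant-period estimator discussed above, following the commonly cited Nakamura-style recursion (τp = 2π√(X/D), with exponentially smoothed velocity and velocity-derivative powers); the synthetic trace and the smoothing constant are assumptions for illustration only.

    ```python
    # Recursive predominant-period (tau_p) estimator applied to a synthetic trace.
    # The smoothing constant alpha and the test wavelet are illustrative choices.
    import numpy as np

    def tau_p(velocity, dt, alpha=0.99):
        """Recursive predominant period (s) for a velocity seismogram sampled at dt."""
        deriv = np.gradient(velocity, dt)
        x = d = 0.0
        tp = np.zeros_like(velocity)
        for i, (v, a) in enumerate(zip(velocity, deriv)):
            x = alpha * x + v * v          # smoothed velocity power
            d = alpha * d + a * a          # smoothed derivative power
            tp[i] = 2.0 * np.pi * np.sqrt(x / d) if d > 0 else 0.0
        return tp

    # Synthetic decaying 2 Hz wavelet sampled at 100 Hz: tau_p should be roughly 0.5 s
    dt = 0.01
    t = np.arange(0.0, 4.0, dt)
    trace = np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-t)
    print("tau_p_max =", tau_p(trace, dt).max())
    ```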

  3. Contrasts between source parameters of M ≥ 5.5 earthquakes in northern Baja California and southern California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doser, D.I.

    1993-04-01

    Source parameters determined from the body waveform modeling of large (M ≥ 5.5) historic earthquakes occurring between 1915 and 1956 along the San Jacinto and Imperial fault zones of southern California and the Cerro Prieto, Tres Hermanas and San Miguel fault zones of Baja California have been combined with information from post-1960's events to study regional variations in source parameters. The results suggest that large earthquakes along the relatively young San Miguel and Tres Hermanas fault zones have complex rupture histories, small source dimensions (< 25 km), high stress drops (60 bar average), and a high incidence of foreshock activity. This may be a reflection of the rough, highly segmented nature of the young faults. In contrast, Imperial-Cerro Prieto events of similar magnitude have low stress drops (16 bar average) and longer rupture lengths (42 km average), reflecting rupture along older, smoother fault planes. Events along the San Jacinto fault zone appear to lie in between these two groups. These results suggest a relationship between the structural and seismological properties of strike-slip faults that should be considered during seismic risk studies.

  4. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

    The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths. The maximum intensity reached VIII to IX at Baoxing and Lushan city, which are located in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated the source spectral parameters, and first estimated the near-fault strong ground motion based on Brune's circle model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion with associated fault rupture properties at Baoxing and Lushan city, respectively. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, which differs from the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake field. The simulated intensity indicates that the maximum intensity value is IX and that the region with intensity VII and above covers almost 16,000 km2, which is consistent with the observed intensity published online by the China Earthquake Administration (CEA) on April 25. Moreover, the estimation methods based on the empirical relationships and numerical modeling developed in this study have great application in strong ground motion prediction and intensity estimation for earthquake rescue purposes and for understanding the earthquake source process. Keywords: Lushan, Ms7.0 earthquake; near-fault strong ground motion; DCSM; simulated intensity

  5. Testing and comparison of three frequency-based magnitude estimating parameters for earthquake early warning based events in the Yunnan region, China in 2014

    NASA Astrophysics Data System (ADS)

    Zhang, Jianjing; Li, Hongjie

    2018-06-01

    To mitigate potential seismic disasters in the Yunnan region, China, building up suitable magnitude estimation scaling laws for an earthquake early warning system (EEWS) is in high demand. In this paper, the records from the mainshocks and aftershocks of the Yingjiang earthquake (MW 5.9), the Ludian earthquake (MW 6.2) and the Jinggu earthquake (MW 6.1), which occurred in Yunnan in 2014, were used to develop three estimators for earthquake magnitude: the maximum of the predominant period (τpmax), the characteristic period (τc) and the log-average period (τlog). The correlations between these three frequency-based parameters and catalog magnitudes were developed, compared and evaluated against previous studies. The amplitude and period of seismic waves might be amplified in the Ludian mountain-canyon area by multiple reflections and resonance, leading to excessive values of the calculated parameters, which are consistent with Sichuan's scaling. As a result, τlog was best correlated with magnitude and τc had the highest slope in its regression equation, while τpmax performed worst, with large scatter and less sensitivity to changes in magnitude. No evident saturation occurred in the case of M 6.1 and M 6.2 in this study. Even though τc and τlog performed similarly and both reflect earthquake size well, τlog has slightly smaller prediction errors for small earthquakes (M ≤ 4.5), which was also observed by previous research. Our work offers insight into the feasibility of an EEWS in Yunnan, China, and shows that it is necessary to build up an appropriate scaling law suitable for the warning region.
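
    As a hedged illustration of one of the estimators compared above, the sketch below evaluates the characteristic period τc = 2π√(∫u²dt / ∫v²dt) over the first few seconds of a synthetic P-wave window (u displacement, v velocity); the window length, filtering choices and test signal are assumptions, not the paper's processing chain.

    ```python
    # Characteristic-period (tau_c) computation on a synthetic window.
    # Real implementations high-pass filter the records and start at the P arrival.
    import numpy as np
    from scipy import integrate

    def tau_c(displacement, velocity, dt, window_s=3.0):
        n = int(window_s / dt)
        num = integrate.trapezoid(displacement[:n] ** 2, dx=dt)
        den = integrate.trapezoid(velocity[:n] ** 2, dx=dt)
        return 2.0 * np.pi * np.sqrt(num / den)

    dt = 0.01
    t = np.arange(0.0, 3.0, dt)
    u = np.sin(2.0 * np.pi * 1.0 * t)     # 1 Hz displacement => tau_c ~ 1 s
    v = np.gradient(u, dt)
    print("tau_c =", tau_c(u, v, dt))
    ```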

  6. Comparison of hypocentre parameters of earthquakes in the Aegean region

    NASA Astrophysics Data System (ADS)

    Özel, Nurcan M.; Shapira, Avi; Harris, James

    2007-06-01

    The Aegean Sea is one of the more seismically active areas in the Euro-Mediterranean region. The seismic activity in the Aegean Sea is monitored by a number of local agencies that contribute their data to the International Seismological Centre (ISC). Consequently, the ISC Bulletin may serve as a reliable reference for assessing the capabilities of local agencies to monitor moderate and low magnitude earthquakes. We have compared bulletins of the Kandilli Observatory and Earthquake Research Institute (KOERI) and the ISC for the period 1976-2003, which comprises the most complete data sets for both KOERI and ISC. The selected study area is the East Aegean Sea and West Turkey, bounded by latitude 35-41°N and longitude 24-29°E. The total number of events known to have occurred in this area during 1976-2003 is about 41,638. Seventy-two percent of those earthquakes were located by ISC and 75% were located by KOERI. As expected, epicentre location discrepancies between ISC and KOERI solutions become larger as we move away from the KOERI seismic network. Out of the 22,066 earthquakes located by both ISC and KOERI, only 4% show a difference of 50 km or more. About 140 earthquakes show a discrepancy of more than 100 km. Focal depth determinations differ mainly in the subduction zone along the Hellenic arc. Less than 2% of the events differ in their focal depth by more than 25 km. Yet, the location solutions of about 30 events differ by more than 100 km. Almost a quarter of the events listed in the ISC Bulletin are missed by KOERI, most of them occurring off the coast of Turkey, in the East Aegean. Based on the frequency-magnitude distributions, the KOERI Bulletin is complete for earthquakes with duration magnitudes Md > 2.7 (both located and assigned magnitudes), whereas the threshold magnitude for events with location and magnitude determinations by ISC is mb > 4.0. KOERI magnitudes seem to be poorly correlated with ISC magnitudes suggesting relatively high uncertainty in the

  7. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so
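
    The memorylessness test mentioned above can be sketched as follows: if large-earthquake occurrence were a time-independent (Poisson) process, inter-event times would be exponential, and passing them through the exponential CDF should yield values indistinguishable from a uniform distribution. The inter-event times below and the use of a simple KS test are illustrative assumptions, not the authors' analysis of the Parkfield or Pallet Creek records.

    ```python
    # Probability-integral-transform check of the "no memory" (Poisson) hypothesis
    # on hypothetical inter-event times; small p-values argue for real clustering.
    import numpy as np
    from scipy import stats

    intervals = np.array([45.0, 60.0, 310.0, 52.0, 71.0, 290.0, 66.0])   # yr, hypothetical
    u = 1.0 - np.exp(-intervals / intervals.mean())    # exponential CDF transform

    ks = stats.kstest(u, "uniform")
    print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
    ```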

  8. Investigating Along-Strike Variations of Source Parameters for Relocated Thrust Earthquakes Along the Sumatra-Java Subduction Zone

    NASA Astrophysics Data System (ADS)

    El Hariri, M.; Bilek, S. L.; Deshon, H. R.; Engdahl, E. R.

    2009-12-01

    Some earthquakes generate anomalously large tsunami waves relative to their surface wave magnitudes (Ms). This class of events, known as tsunami earthquakes, is characterized by having a long rupture duration and low radiated energy at long periods. These earthquakes are relatively rare. There have been only 9 documented cases, including 2 in the Java subduction zone (1994 Mw=7.8 and the 2006 Mw=7.7). Several models have been proposed to explain the unexpectedly large tsunami, such as displacement along high-angle splay faults, landslide-induced tsunami due to coseismic shaking, or large seismic slip within low rigidity sediments or weaker material along the shallowest part of the subduction zone. Slow slip has also been suggested along portions of the 2004 Mw=9.2 Sumatra-Andaman earthquake zone. In this study we compute the source parameters of 90 relocated shallow thrust events (Mw 5.1-7.8) along the Sumatra-Java subduction zone including the two Java tsunami earthquakes. Events are relocated using a modification to the Engdahl, van der Hilst and Buland (EHB) earthquake relocation method that incorporates an automated frequency-dependent phase detector. This allows for the use of increased numbers of phase arrival times, especially depth phases, and improves hypocentral locations. Source time functions, rupture duration and depth estimates are determined using multi-station deconvolution of broadband teleseismic P and SH waves. We seek to correlate any along-strike variation in rupture characteristics with tectonic features and rupture characteristics of the previous slow earthquakes along this margin to gain a better understanding of the conditions resulting in slow ruptures. Preliminary results from the analysis of these events show that in addition to depth-dependent variations there are also along-strike variations in rupture duration. We find that along the Java segment, the longer duration event locates in a highly coupled region corresponding to the

  9. Qualitative Investigation of the Earthquake Precursors Prior to the March 14, 2012 Earthquake in Japan

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Shailesh Kumar; Gwal, Ashok Kumar

    In this study we used the Empirical Mode Decomposition (EMD) method in conjunction with cross-correlation analysis to analyze the ionospheric foF2 parameter before the Japan earthquake of magnitude M = 6.9. The data were collected from the Kokubunji (35.7°N, 139.5°E) and Yamakawa (31.2°N, 130.6°E) ionospheric stations. The EMD method was used to remove geophysical noise from the foF2 data before the correlation coefficients between the stations were calculated. It was found that the ionospheric foF2 parameter shows anomalous changes a few days before the earthquake. The results are in agreement with the theoretical model, evidencing ionospheric modification prior to the Japan earthquake in a certain area around the epicenter.

  10. Multi-Parameter Observation and Detection of Pre-Earthquake Signals in Seismically Active Areas

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Parrot, M.; Liu, J. Y.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    The recent large earthquakes (M9.0 Tohoku, 03/2011; M7.0 Haiti, 01/2010; M6.3 L'Aquila, 04/2009; and M7.9 Wenchuan, 05/2008) have renewed interest in anomalous pre-earthquake signals associated with them. Recent workshops (DEMETER 2006, 2011 and VESTO 2009) have shown that there were precursory atmospheric/ionospheric signals observed in space prior to these events. Our initial results indicate that no single pre-earthquake observation (seismic, magnetic field, electric field, thermal infrared [TIR], or GPS/TEC) can provide a consistent and successful global-scale early warning. This is most likely due to the complexity and chaotic nature of earthquakes and the limitations of existing ground (temporal/spatial) and global satellite observations. In this study we analyze preseismic temporal and spatial variations (gas/radon counting rate, atmospheric temperature and humidity change, long-wave radiation transitions and ionospheric electron density/plasma variations) which we propose occur before the onset of major earthquakes. We propose an Integrated Space-Terrestrial Framework (ISTF) as a different approach for revealing pre-earthquake phenomena in seismically active areas. ISTF is a sensor web, a coordinated observation infrastructure employing multiple sensors distributed on one or more platforms, combining data from satellite sensors (Terra, Aqua, POES, DEMETER and others) and ground observations, e.g., Global Positioning System Total Electron Content (GPS/TEC). As a theoretical guide we use the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model to explain the generation of multiple earthquake precursors. Using our methodology, we retrospectively evaluated the signals preceding the most devastating earthquakes during 2005-2011. We observed a correlation between both atmospheric and ionospheric anomalies preceding most of these earthquakes. The second phase of our validation includes systematic retrospective analysis for more than 100 major earthquakes (M>5

  11. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    The possibility of directly associating damage with ground motion parameters is always a great challenge, in particular for civil protection. Indeed, a ground motion parameter, estimated in near real time, that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates that parameter to the observed macroseismic intensity. This estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the non-linear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm is instead based on the estimation of errors perpendicular to the fitted line, rather than just vertical errors. The vertical errors are the errors in the 'y' direction only, i.e. for the dependent variable, whereas the perpendicular errors take into account errors in both variables, the dependent and the independent. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law the standard deviation and R2 value are estimated in order to test the quality and reliability of the derived relation. The Amatrice earthquake of 24th August 2016 is used as a case study to test the goodness of the calculated regression laws.
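
    A minimal sketch of the two fitting routes described above, assuming a regression law of the form I = a + b·log10(GMP) (the exact functional form used by the authors is not stated in this record): curve_fit provides the NLLS Levenberg-Marquardt fit with vertical residuals only, while scipy.odr treats errors in both variables.

    ```python
    # NLLS (Levenberg-Marquardt) vs. orthogonal distance regression on synthetic
    # intensity / ground-motion data; the linear-in-log form is an assumption.
    import numpy as np
    from scipy import odr
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    log_gmp = rng.uniform(-1.0, 2.0, 80)                       # e.g. log10(PGV)
    intensity = 3.0 + 2.0 * log_gmp + rng.normal(0, 0.3, 80)   # synthetic MCS intensities

    # 1) Non-linear least squares (vertical residuals only), initial guess a = b = 1
    def law(x, a, b):
        return a + b * x

    p_nlls, _ = curve_fit(law, log_gmp, intensity, p0=[1.0, 1.0])

    # 2) Orthogonal distance regression (errors in both variables)
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
    data = odr.RealData(log_gmp, intensity, sx=0.1, sy=0.3)     # assumed uncertainties
    out = odr.ODR(data, model, beta0=[1.0, 1.0]).run()

    print("NLLS (a, b):", p_nlls)
    print("ODR  (a, b):", out.beta)
    ```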

  12. Earthquake source parameters for the 2010 western Gulf of Aden rifting episode

    NASA Astrophysics Data System (ADS)

    Shuler, Ashley; Nettles, Meredith

    2012-08-01

    On 2010 November 14, an intense swarm of earthquakes began in the western Gulf of Aden. Within a 48-hr period, 82 earthquakes with magnitudes between 4.5 and 5.5 were reported along an ˜80-km-long segment of the east-west trending Aden Ridge, making this swarm one of the largest ever observed in an extensional oceanic setting. In this study, we calculate centroid-moment-tensor solutions for 110 earthquakes that occurred between 2010 November and 2011 April. Over 80 per cent of the cumulative seismic moment results from earthquakes that occurred within 1 week of the onset of the swarm. We find that this sequence has a b-value of ˜1.6 and is dominated by normal-faulting earthquakes that, early in the swarm, migrate westwards with time. These earthquakes are located in rhombic basins along a section of the ridge that was previously characterized by low levels of seismicity and a lack of recent volcanism on the seafloor. Body-wave modelling demonstrates that the events occur in the top 2-3 km of the crust. Nodal planes of the normal-faulting earthquakes are consistent with previously mapped faults in the axial valley. A small number of strike-slip earthquakes observed between two basins near 44°E, where the axial valley changes orientation, depth and width, likely indicate the presence of an incipient transform fault and the early stages of ridge-transform segmentation. The direction of extension accommodated by the earthquakes is intermediate between the rift orthogonal and the direction of relative motion between the Arabian and Somalian plates, consistent with the oblique style of rifting occurring along the slow-spreading Aden Ridge. The 2010 swarm shares many characteristics with dyke-induced rifting episodes from both oceanic and continental settings. We conclude that the 2010 swarm represents the seismic component of an undersea magmatic rifting episode along the nascent Aden Ridge, and attribute the large size of the earthquakes to the combined effects of

  13. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

    Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out using the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, for tsunami early warning purposes, our method should work to estimate a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
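
    One ingredient of the method above, the depth-dependent rigidity, can be illustrated with a short sketch: for a fixed seismic moment M0 = μAD, a shallow rupture in low-rigidity material requires much more slip, which is part of why shallow tsunami earthquakes are so effective at generating waves. The rigidity profile and fault area below are illustrative assumptions, not the paper's scaling relations.

    ```python
    # Average slip implied by the same seismic moment under different assumed
    # rigidities (hypothetical values, roughly an Mw 7.7 event on a 100 x 40 km fault).
    M0 = 5.0e20                      # seismic moment in N*m
    area = 100e3 * 40e3              # fault area in m^2

    rigidity = {                     # hypothetical rigidity vs. centroid depth (Pa)
        "shallow (5 km)": 1.0e10,
        "intermediate (15 km)": 3.0e10,
        "deep (30 km)": 5.0e10,
    }

    for label, mu in rigidity.items():
        slip = M0 / (mu * area)      # average slip D = M0 / (mu * A)
        print(f"{label}: average slip = {slip:.1f} m")
    ```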

  14. The Multi-Parameter Wireless Sensing System (MPwise): Its Description and Application to Earthquake Risk Mitigation.

    PubMed

    Boxberger, Tobias; Fleming, Kevin; Pittore, Massimiliano; Parolai, Stefano; Pilz, Marco; Mikulla, Stefan

    2017-10-20

    The Multi-Parameter Wireless Sensing (MPwise) system is an innovative instrumental design that allows different sensor types to be combined with relatively high-performance computing and communications components. These units, which incorporate off-the-shelf components, can undertake complex information integration and processing tasks at the individual unit or node level (when used in a network), allowing the establishment of networks that are linked by advanced, robust and rapid communications routing and network topologies. The system (and its predecessors) was originally designed for earthquake risk mitigation, including earthquake early warning (EEW), rapid response actions, structural health monitoring, and site-effect characterization. For EEW, MPwise units are capable of on-site, decentralized, independent analysis of the recorded ground motion and based on this, may issue an appropriate warning, either by the unit itself or transmitted throughout a network by dedicated alarming procedures. The multi-sensor capabilities of the system allow it to be instrumented with standard strong- and weak-motion sensors, broadband sensors, MEMS (namely accelerometers), cameras, temperature and humidity sensors, and GNSS receivers. In this work, the MPwise hardware, software and communications schema are described, as well as an overview of its possible applications. While focusing on earthquake risk mitigation actions, the aim in the future is to expand its capabilities towards a more multi-hazard and risk mitigation role. Overall, MPwise offers considerable flexibility and has great potential in contributing to natural hazard risk mitigation.

  15. The Multi-Parameter Wireless Sensing System (MPwise): Its Description and Application to Earthquake Risk Mitigation

    PubMed Central

    Boxberger, Tobias; Fleming, Kevin; Pittore, Massimiliano; Parolai, Stefano; Pilz, Marco; Mikulla, Stefan

    2017-01-01

    The Multi-Parameter Wireless Sensing (MPwise) system is an innovative instrumental design that allows different sensor types to be combined with relatively high-performance computing and communications components. These units, which incorporate off-the-shelf components, can undertake complex information integration and processing tasks at the individual unit or node level (when used in a network), allowing the establishment of networks that are linked by advanced, robust and rapid communications routing and network topologies. The system (and its predecessors) was originally designed for earthquake risk mitigation, including earthquake early warning (EEW), rapid response actions, structural health monitoring, and site-effect characterization. For EEW, MPwise units are capable of on-site, decentralized, independent analysis of the recorded ground motion and based on this, may issue an appropriate warning, either by the unit itself or transmitted throughout a network by dedicated alarming procedures. The multi-sensor capabilities of the system allow it to be instrumented with standard strong- and weak-motion sensors, broadband sensors, MEMS (namely accelerometers), cameras, temperature and humidity sensors, and GNSS receivers. In this work, the MPwise hardware, software and communications schema are described, as well as an overview of its possible applications. While focusing on earthquake risk mitigation actions, the aim in the future is to expand its capabilities towards a more multi-hazard and risk mitigation role. Overall, MPwise offers considerable flexibility and has great potential in contributing to natural hazard risk mitigation. PMID:29053608

  16. An empirical model for global earthquake fatality estimation

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David

    2010-01-01

    We analyzed mortality rates of earthquakes worldwide and developed a country/region-specific empirical model for earthquake fatality estimation within the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is defined as the total number killed divided by the total population exposed at a specific shaking intensity level. The total fatalities for a given earthquake are estimated by multiplying the number of people exposed at each shaking intensity level by the fatality rate for that level and then summing over all relevant shaking intensities. The fatality rate is expressed in terms of a two-parameter lognormal cumulative distribution function of shaking intensity. The parameters are obtained for each country or region by minimizing the residual error in hindcasting the total shaking-related deaths from earthquakes recorded between 1973 and 2007. A new global regionalization scheme is used to combine the fatality data across different countries with similar vulnerability traits.
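
    A hedged sketch of the model structure described above: a two-parameter lognormal CDF gives the fatality rate at each intensity, and the expected total is the exposure-weighted sum. The θ, β values and exposure counts below are hypothetical, not PAGER's calibrated country-specific parameters.

    ```python
    # Exposure-weighted fatality estimate with a lognormal fatality-rate function.
    # theta, beta and the exposure numbers are made-up illustrative values.
    import numpy as np
    from scipy.stats import norm

    theta, beta = 12.5, 0.25          # hypothetical lognormal parameters (intensity units)
    intensities = np.array([5.0, 6.0, 7.0, 8.0, 9.0])           # MMI bins
    exposure = np.array([2.0e6, 8.0e5, 3.0e5, 6.0e4, 5.0e3])    # people exposed per bin

    rate = norm.cdf(np.log(intensities / theta) / beta)          # fatality rate per bin
    expected_fatalities = np.sum(exposure * rate)

    for I, r in zip(intensities, rate):
        print(f"MMI {I:.0f}: fatality rate = {r:.2e}")
    print(f"Expected fatalities = {expected_fatalities:.0f}")
    ```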

  17. Earthquake focal parameters and lithospheric structure of the anatolian plateau from complete regional waveform modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodgers, A

    2000-12-28

    This is an informal report on preliminary efforts to investigate earthquake focal mechanisms and earth structure in the Anatolian (Turkish) Plateau. Seismic velocity structure of the crust and upper mantle and earthquake focal parameters for events in the Anatolian Plateau are estimated from complete regional waveforms. Focal mechanisms, depths and seismic moments of moderately large crustal events are inferred from long-period (40-100 seconds) waveforms and compared with focal parameters derived from global teleseismic data. Using shorter periods (10-100 seconds) we estimate the shear and compressional velocity structure of the crust and uppermost mantle. Results are broadly consistent with previous studies and imply relatively little crustal thickening beneath the central Anatolian Plateau. Crustal thickness is about 35 km in western Anatolia and greater than 40 km in eastern Anatolia; however, the long regional paths require considerable averaging and limit resolution. Crustal velocities are lower than typical continental averages, and even lower than those of typical active orogens. The mantle P-wave velocity was fixed at 7.9 km/s, in accord with tomographic models. A high sub-Moho Poisson's ratio of 0.29 was required to fit the Sn-Pn differential times. This is suggestive of high sub-Moho temperatures, high shear wave attenuation and possibly partial melt. The combination of relatively thin crust in a region of high topography and high mantle temperatures suggests that the mantle plays a substantial role in maintaining the elevation.

  18. Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake

    USGS Publications Warehouse

    Minson, Sarah E.; Lee, William H.K.

    2014-01-01

    Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.

  19. Estimating Intensities and/or Strong Motion Parameters Using Civilian Monitoring Videos: The May 12, 2008, Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolin; Wu, Zhongliang; Jiang, Changsheng; Xia, Min

    2011-05-01

    One of the important issues in macroseismology and engineering seismology is how to obtain as much intensity and/or strong motion data as possible. We collected and studied several cases from the May 12, 2008, Wenchuan earthquake, exploring the possibility of estimating intensities and/or strong ground motion parameters using civilian monitoring videos originally deployed for security purposes. We used 53 video recordings from different places to determine the intensity distribution of the earthquake, which is shown to be consistent with the intensity distribution mapped by field investigation, and even better than that given by the Community Internet Intensity Map. In some of the videos, the seismic wave propagation is clearly visible and can be measured with reference to artificial objects such as cars and/or trucks. By measuring the propagating wave, strong motion parameters can be roughly but quantitatively estimated. As a demonstration of this 'propagating-wave method', we used a series of civilian videos recorded in different parts of Sichuan and Shaanxi and estimated the local PGAs. The estimates are compared with the measurements reported by strong motion instruments. The result shows that civilian monitoring videos provide a practical way of collecting and estimating intensity and/or strong motion parameters, with the advantage of being dynamic and able to be played back for further analysis, reflecting a new trend for macroseismology in our digital era.

  20. Regional propagation characteristics and source parameters of earthquakes in northeastern North America

    USGS Publications Warehouse

    Boatwright, John

    1994-01-01

    The vertical components of the S wave trains recorded on the Eastern Canadian Telemetered Network (ECTN) from 1980 through 1990 have been spectrally analyzed for source, site, and propagation characteristics. The data set comprises some 1033 recordings of 97 earthquakes whose magnitudes range from M ≈ 3 to 6. The epicentral distances range from 15 to 1000 km, with most of the data set recorded at distances from 200 to 800 km. The recorded S wave trains contain the phases S, SmS, Sn, and Lg and are sampled using windows that increase with distance; the acceleration spectra were analyzed from 1.0 to 10 Hz. To separate the source, site, and propagation characteristics, an inversion for the earthquake corner frequencies, low-frequency levels, and average attenuation parameters is alternated with a regression of residuals onto the set of stations and a grid of 14 distances ranging from 25 to 1000 km. The iteration between these two parts of the inversion converges in about 60 steps. The average attenuation parameters obtained from the inversion were Q = 1997 ± 10 and γ = 0.998 ± 0.003. The most pronounced variation from this average attenuation is a marked deamplification of more than a factor of 2 at 63 km and 2 Hz, which shallows with increasing frequency and increasing distance out to 200 km. The site-response spectra obtained for the ECTN stations are generally flat. The source spectral shape assumed in this inversion provides an adequate spectral model for the smaller events (M0 < 3 × 10^21 dyne-cm) in the data set, whose Brune stress drops range from 5 to 150 bars. For the five events in the data set with M0 ≥ 10^23 dyne-cm, however, the source spectra obtained by regressing the residuals suggest that an ω-squared spectrum is an inadequate model for the spectral shape. In particular, the corner frequencies for most of these large events appear to be split, so that the spectra exhibit an intermediate behavior (where |ü(ω)| is roughly proportional to ω).

  1. Determination of source parameters of the 2017 Mount Agung volcanic earthquake from moment-tensor inversion method using local broadband seismic waveforms

    NASA Astrophysics Data System (ADS)

    Madlazim; Prastowo, T.; Supardiyono; Hardy, T.

    2018-03-01

    Monitoring of volcanoes is important for many purposes, particularly hazard mitigation. With this in mind, the present work estimates and analyses the source parameters of a volcanic earthquake driven by recent magmatic activity of Mount Agung, Bali, that occurred on September 28, 2017. The broadband data consist of three-component local waveforms recorded by the IA network of 5 seismic stations, SRBI, DNP, BYJI, JAGI, and TWSI (managed by BMKG); these land-based observatories cover all four quadrants surrounding the epicenter. The method used is seismic moment-tensor inversion, from which the data were analyzed to extract the source parameters, namely the moment magnitude, the source depth, and the type of volcanic earthquake indicated by the percentages of the seismic components: compensated linear vector dipole (CLVD), isotropic (ISO), and double-couple (DC). The results give a variance reduction of 65%, a magnitude of Mw 3.6, a CLVD of 40%, an ISO of 33%, a DC of 27%, and a centroid depth of 9.7 km. These suggest that this unusual earthquake was dominated by a vertical CLVD component, implying the dominance of uplift motion of magmatic fluid flow inside the volcano.
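
    For readers unfamiliar with the ISO/CLVD/DC percentages quoted above, the sketch below shows one common convention (after Vavrycuk) for decomposing a symmetric moment tensor, together with the standard moment-to-magnitude conversion. The example tensor is arbitrary, and the scalar moment of about 3.2 × 10^14 N·m is only an assumed value that happens to reproduce an Mw of roughly 3.6; neither is the Mount Agung solution.

```python
import numpy as np

# Hedged sketch: one common convention (after Vavrycuk) for splitting a symmetric
# moment tensor into ISO, CLVD and DC percentages, plus the standard moment-to-
# magnitude conversion. The example tensor and scalar moment are illustrative
# assumptions, not the Mount Agung solution.

def decompose(mt):
    """Return (ISO, CLVD, DC) percentages for a 3x3 symmetric moment tensor."""
    m_iso = np.trace(mt) / 3.0
    dev = mt - m_iso * np.eye(3)
    ev = np.linalg.eigvalsh(dev)                   # deviatoric eigenvalues
    e_max = ev[np.argmax(np.abs(ev))]              # largest in absolute value
    e_min = ev[np.argmin(np.abs(ev))]              # smallest in absolute value
    eps = e_min / abs(e_max) if e_max != 0 else 0.0
    c_iso = m_iso / (abs(m_iso) + abs(e_max))
    c_clvd = 2.0 * eps * (1.0 - abs(c_iso))
    c_dc = 1.0 - abs(c_iso) - abs(c_clvd)
    return 100.0 * c_iso, 100.0 * c_clvd, 100.0 * c_dc

def moment_magnitude(m0):
    """Moment magnitude from scalar moment in N*m (Hanks-Kanamori)."""
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

if __name__ == "__main__":
    mt = 1e14 * np.array([[1.0, 0.2, 0.0],
                          [0.2, 0.3, 0.1],
                          [0.0, 0.1, 0.6]])        # N*m, arbitrary illustration
    print(decompose(mt))
    print(moment_magnitude(3.2e14))                # about 3.6
```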

  2. Source parameters and moment tensor of the ML 4.6 earthquake of November 19, 2011, southwest Sharm El-Sheikh, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, Gad-Elkareem Abdrabou; Omar, Khaled

    2014-06-01

    The southern part of the Gulf of Suez is one of the most seismically active areas in Egypt. On Saturday, November 19, 2011, at 07:12:15 (GMT), an earthquake of ML 4.6 occurred southwest of Sharm El-Sheikh, Egypt. The earthquake was felt in Sharm El-Sheikh city, but no casualties were reported. The instrumental epicenter is located at 27.69°N and 34.06°E. The seismic moment is 1.47 × 10^22 dyne·cm, corresponding to a moment magnitude Mw 4.1. Following a Brune model, the source radius is 101.36 m, with an average dislocation of 0.015 cm and a 0.06 MPa stress drop. The source mechanism from a fault-plane solution, obtained with the computer code ISOLA, shows normal faulting; the preferred fault plane has strike 358°, dip 34°, and rake -60°. Twenty-seven small and micro earthquakes (1.5 ⩽ ML ⩽ 4.2) from the same region were also recorded by the Egyptian National Seismological Network (ENSN). We estimated the source parameters of these earthquakes using displacement spectra. The obtained source parameters include seismic moments of 2.77 × 10^16 to 1.47 × 10^22 dyne·cm, stress drops of 0.0005-0.0617 MPa, and relative displacements of 0.0001-0.0152 cm.
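
    The source radius, stress drop and average dislocation quoted above follow from standard Brune-type relations between seismic moment and corner frequency. The sketch below lists those relations with an assumed shear velocity and rigidity; the corner frequency used in the example is illustrative, so the output is not meant to reproduce the paper's exact numbers.

```python
import numpy as np

# Hedged sketch: the standard Brune-model relations commonly used to convert a
# seismic moment M0 and corner frequency fc into source radius, stress drop and
# average dislocation. Shear velocity and rigidity are assumed values, and the
# example numbers are illustrative only.

BETA = 3.5e3     # shear-wave velocity, m/s (assumed)
MU = 3.0e10      # crustal rigidity, Pa (assumed)

def brune_radius(fc):
    """Source radius (m) from corner frequency (Hz): r = 2.34*beta/(2*pi*fc)."""
    return 2.34 * BETA / (2.0 * np.pi * fc)

def stress_drop(m0, r):
    """Stress drop (Pa) for a circular crack: delta_sigma = 7*M0/(16*r^3)."""
    return 7.0 * m0 / (16.0 * r ** 3)

def average_slip(m0, r):
    """Average dislocation (m) over the rupture: D = M0/(mu*pi*r^2)."""
    return m0 / (MU * np.pi * r ** 2)

if __name__ == "__main__":
    m0 = 1.47e15              # N*m, i.e. 1.47e22 dyne-cm as quoted in the abstract
    fc = 1.0                  # Hz, illustrative corner frequency (assumed)
    r = brune_radius(fc)
    print(r, stress_drop(m0, r), average_slip(m0, r))
```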

  3. Preliminary Result of Earthquake Source Parameters the Mw 3.4 at 23:22:47 IWST, August 21, 2004, Centre Java, Indonesia Based on MERAMEX Project

    NASA Astrophysics Data System (ADS)

    Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.

    2018-03-01

    In order to study the subsurface structure of the Merapi-Lawu anomaly (MLA) using forward modelling or full-waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). In addition, the source must be near the MLA location, and the stations used to derive the parameters must lie outside the MLA in order to avoid the anomaly. The seismograms were first processed with the software SEISAN v10 using a few stations from the MERAMEX project. After finding a hypocentre that matched these criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the source parameters are as follows: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), with epicentre coordinates -7.80°S, 101.34°E, hypocentral depth 47.3 km, dominant frequency f0 = 3.0 Hz, and earthquake magnitude Mw = 3.4.

  4. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  5. The surface latent heat flux anomalies related to major earthquake

    NASA Astrophysics Data System (ADS)

    Jing, Feng; Shen, Xuhui; Kang, Chunli; Xiong, Pan; Hong, Shunying

    2011-12-01

    SLHF (Surface Latent Heat Flux) is an atmospheric parameter that describes the heat released by phase changes; it depends on meteorological parameters such as surface temperature, relative humidity, and wind speed, and it differs sharply between the ocean surface and the land surface. Recently, many studies of SLHF anomalies prior to earthquakes have been carried out. It has been shown that enhanced energy exchange between the coastal surface and the atmosphere prior to earthquakes can increase the rate of water-heat exchange, which leads to an obvious increase in SLHF. In this paper, two earthquakes of 2010 (the Haiti earthquake and the earthquake southwest of Sumatra, Indonesia) have been analyzed using SLHF data and an STD (standard deviation) threshold method. The analysis shows that SLHF anomalies may occur for interplate or intraplate earthquakes and for coastal or island earthquakes. The SLHF anomalies usually appear 5-6 days prior to an earthquake and then disappear quickly after the event. The process of anomaly evolution to a certain extent reflects the dynamic energy changes during earthquake preparation, that is, weak-strong-weak-disappeared.
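
    A minimal sketch of a standard-deviation threshold test of the kind described above: each daily SLHF value is compared with the multi-year mean and standard deviation for that calendar day, and days exceeding the mean by more than k standard deviations are flagged. The array layout, the threshold k and the synthetic data are assumptions.

```python
import numpy as np

# Hedged sketch of an STD-threshold anomaly test: a daily value is flagged when it
# exceeds the multi-year mean for that calendar day by more than k standard
# deviations. Shapes, threshold and synthetic data are assumptions.

def slhf_anomalies(history, current, k=2.0):
    """history: (n_years, n_days) background SLHF; current: (n_days,) series.
    Returns a boolean mask of days exceeding mean + k*std of the background."""
    mean = history.mean(axis=0)
    std = history.std(axis=0)
    return current > mean + k * std

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.normal(100.0, 10.0, size=(10, 365))   # W/m^2, synthetic
    series = rng.normal(100.0, 10.0, size=365)
    series[120] += 40.0                                     # injected anomaly
    print(np.where(slhf_anomalies(background, series))[0]) # flagged day indices
```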

  6. Transportations Systems Modeling and Applications in Earthquake Engineering

    DTIC Science & Technology

    2010-07-01

    Report excerpts (figure and table captions): Figure 6, PGA map of a M7.7 earthquake on all three New Madrid fault segments (g); Table 1, fragility parameters for MSC steel bridge (Padgett 2007). The study area is Memphis, Tennessee; the NMSZ was responsible for the devastating 1811-1812 New Madrid earthquakes, the largest earthquakes ever recorded in the ...

  7. On the neural modeling of some dynamic parameters of earthquakes and fire safety in high-rise construction

    NASA Astrophysics Data System (ADS)

    Haritonova, Larisa

    2018-03-01

    The paper presents the recent change in the relationship between the numbers of man-made and natural catastrophes and proposes some recommendations to increase firefighting efficiency in high-rise buildings. The article analyzes the methodology of modeling seismic effects and shows the promise of applying neural modeling and artificial neural networks to the analysis of such dynamic parameters of earthquake foci as the dislocation value (the average rupture slip). Two input signals were used: the power class and the number of earthquakes. A regression analysis was carried out for the predicted results against the target outputs. The regression equations for outputs versus targets are presented, together with the correlation coefficients for training, validation, testing, and the total (All) set, for the network structure 2-5-5-1 for the average rupture slip. Applying the results obtained in the article to the seismic design of newly constructed buildings and structures, together with the given recommendations, will provide additional protection from fire and earthquake risks and reduce their negative economic and environmental consequences.
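
    A minimal sketch of the 2-5-5-1 architecture mentioned above (two inputs, two hidden layers of five neurons, one output), here built with scikit-learn's MLPRegressor on synthetic placeholder data; the functional form of the target and all numerical values are assumptions, not the study's catalogue.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch of a 2-5-5-1 feed-forward network: two inputs (power class and
# number of earthquakes), two hidden layers of five neurons, one output (average
# rupture slip). Training data and target form are synthetic placeholders.

rng = np.random.default_rng(1)
X = rng.uniform([8.0, 1.0], [16.0, 500.0], size=(200, 2))            # class, event count
y = 0.01 * X[:, 0] ** 2 + 0.002 * X[:, 1] + rng.normal(0, 0.1, 200)  # "slip", arbitrary

X_std = (X - X.mean(axis=0)) / X.std(axis=0)                         # simple feature scaling

net = MLPRegressor(hidden_layer_sizes=(5, 5), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_std, y)

# Correlation coefficient between network output and target, as in the paper's
# regression of predicted results against target outputs.
r = np.corrcoef(net.predict(X_std), y)[0, 1]
print(f"correlation coefficient R = {r:.3f}")
```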

  8. Investigation of atmospheric anomalies associated with Kashmir and Awaran Earthquakes

    NASA Astrophysics Data System (ADS)

    Mahmood, Irfan; Iqbal, Muhammad Farooq; Shahzad, Muhammad Imran; Qaiser, Saddam

    2017-02-01

    Earthquake precursor anomalies at diverse elevation ranges over the seismogenic region, prior to the seismic events, are detected using Satellite Remote Sensing (SRS) techniques and reanalysis datasets. In the current research, seismic precursors are obtained by analyzing anomalies in Outgoing Longwave Radiation (OLR), Air Temperature (AT), and Relative Humidity (RH) before the two strong Mw > 7 earthquakes in Pakistan: the Mw 7.6 earthquake of 8 October 2005 in Azad Jammu and Kashmir, and the Mw 7.7 earthquake of 24 September 2013 in Awaran, Balochistan. Anomalies were computed from multi-parameter data against multi-year background data. Results indicate significant transient variations in the observed parameters before the main events. Detailed analysis suggests the presence of pre-seismic activity one to three weeks prior to the main earthquake event that vanishes after the event. These anomalies are attributed to an increase in temperature following the release of gases and to physical and chemical interactions at the Earth's surface before the earthquake. The behavior of the parameter variations for both the Kashmir and Awaran earthquakes is similar to that of other earthquakes in different regions of the world. This study suggests that energy release is not concentrated at a single fault but is instead released along the fault zone. The influence of the earthquakes on lightning was also investigated, and it was concluded that there is significant atmospheric lightning activity after the earthquake, suggesting a strong possibility of an earthquake-induced thunderstorm. This study is valuable for identifying earthquake precursors, especially in earthquake-prone areas.

  9. Radon observations as an integrated part of the multi parameter approach to study pre-earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Lee, Lou; Giuliani, Guachino; Fu, Ching-Chou; Liu, Tiger; Hattori, Katsumi

    2017-04-01

    This work is part of an international project, supported by the International Space Science Institute (ISSI) in Bern and Beijing, to study the complex chain of Lithosphere-Atmosphere-Ionosphere (LAI) interactions in the presence of atmospheric ionization produced by radon and other gases. We present experimental measurements and theoretical estimates showing that radon measurements recorded before large earthquakes are correlated with the release of heat flux into the atmosphere during ionization of the atmospheric boundary layer. The recorded anomalous heat (observed by remote-sensing infrared radiometers on satellites) is followed by ionospheric anomalies (observed by GPS/TEC, ionosondes, or satellite instruments). As ground truth we use radon measurements installed and coordinated in four different seismically active regions: California, Taiwan, Italy, and Japan. Radon is measured indirectly by gamma-ray spectrometry of its radioactive progenies 214Pb and 214Bi (emitting at 351 keV and 609 keV, respectively) and also by alpha detectors. We present data on five physical parameters - radon, seismicity, temperature of the atmospheric boundary layer, outgoing Earth infrared radiation, and GPS/TEC - and their temporal and spatial variations several days before the onset of the following recent earthquakes: (1) the 2016 M6.6 in California; (2) the 2016 Amatrice-Norcia (Central Italy); (3) the 2016 M6.4 of Feb 06 in Taiwan; and (4) the 2016 M7.0 of Nov 21 in Japan. Our preliminary results from the simultaneous analysis of radon and space measurements in California, Italy, Taiwan, and Japan suggest that the pre-earthquake phase follows a general temporal-spatial evolution pattern in which radon plays a critical role in understanding the LAI coupling. This pattern can be revealed only with multi-instrument observations and has been seen in other large earthquakes worldwide.

  10. Constraints on the source parameters of low-frequency earthquakes on the San Andreas Fault

    USGS Publications Warehouse

    Thomas, Amanda M.; Beroza, Gregory C.; Shelly, David R.

    2016-01-01

    Low-frequency earthquakes (LFEs) are small repeating earthquakes that occur in conjunction with deep slow slip. Like typical earthquakes, LFEs are thought to represent shear slip on crustal faults, but when compared to earthquakes of the same magnitude, LFEs are depleted in high-frequency content and have lower corner frequencies, implying longer duration. Here we exploit this difference to estimate the duration of LFEs on the deep San Andreas Fault (SAF). We find that the M ~ 1 LFEs have typical durations of ~0.2 s. Using the annual slip rate of the deep SAF and the average number of LFEs per year, we estimate average LFE slip rates of ~0.24 mm/s. When combined with the LFE magnitude, this number implies a stress drop of ~10^4 Pa, 2 to 3 orders of magnitude lower than ordinary earthquakes, and a rupture velocity of 0.7 km/s, 20% of the shear wave speed. Typical earthquakes are thought to have rupture velocities of ~80–90% of the shear wave speed. Together, the slow rupture velocity, low stress drops, and slow slip velocity explain why LFEs are depleted in high-frequency content relative to ordinary earthquakes and suggest that LFE sources represent areas capable of relatively higher slip speed in deep fault zones. Additionally, changes in rheology may not be required to explain both LFEs and slow slip; the same process that governs the slip speed during slow earthquakes may also limit the rupture velocity of LFEs.
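
    As an order-of-magnitude check of how the quoted numbers hang together, the sketch below combines the M ~ 1 moment, the ~0.2 s duration and the ~10^4 Pa stress drop through standard circular-crack relations. The assumed rigidity and the crude radius-over-duration estimate of rupture velocity mean the result only agrees with the quoted slip rate and rupture speed to within a factor of about two.

```python
import numpy as np

# Hedged order-of-magnitude sketch relating the LFE numbers quoted in the abstract
# (M ~ 1, duration ~0.2 s, stress drop ~1e4 Pa) through standard circular-crack
# relations. Rigidity and the radius/duration rupture-velocity estimate are
# assumptions, so only rough agreement with the quoted values is expected.

MU = 3.0e10                                   # rigidity, Pa (assumed)

m0 = 10.0 ** (1.5 * 1.0 + 9.1)                # scalar moment for Mw ~ 1, N*m
stress_drop = 1.0e4                           # Pa, from the abstract
duration = 0.2                                # s, from the abstract

radius = (7.0 * m0 / (16.0 * stress_drop)) ** (1.0 / 3.0)   # circular crack
slip = m0 / (MU * np.pi * radius ** 2)                      # average slip, m
rupture_velocity = radius / duration                        # crude estimate, m/s
slip_rate = slip / duration                                 # m/s

print(radius, slip, rupture_velocity, slip_rate)
# roughly 120 m, ~0.03 mm of slip, ~0.6 km/s, ~0.15 mm/s
```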

  11. Variable anelastic attenuation and site effect in estimating source parameters of various major earthquakes including Mw 7.8 Nepal and Mw 7.5 Hindu Kush earthquake by using far-field strong-motion data

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit

    2017-10-01

    Strong-motion records of the recent Gorkha, Nepal, earthquake (Mw 7.8), its strong aftershocks, and seismic events of the Hindu Kush region have been analysed to estimate source parameters. The Mw 7.8 Gorkha earthquake of 25 April 2015 and six of its aftershocks with magnitudes 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha main shock. Acceleration data from eight earthquakes that occurred in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site, as well as for site amplification. Strong-motion data from six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. The frequency-dependent Qβ(f) = 124 f^0.98 is computed at the Ghuttu station using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid-search algorithm. The computed seismic moments, stress drops, and source radii of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 N·m, 7.1-50.6 bars, and 3.55-36.70 km, respectively. The results match the available values obtained by other agencies.
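
    A minimal sketch of the attenuation correction and grid search described above: the observed spectrum is corrected with Qβ(f) = 124 f^0.98 and then fitted with a Brune spectrum by a brute-force search over the low-frequency level and corner frequency. The travel time, grid ranges and the synthetic 'observed' spectrum are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: correct a displacement spectrum for anelastic attenuation with the
# Q(f) quoted in the abstract, then grid-search a Brune model for the best-fitting
# low-frequency level and corner frequency. Travel time, grids and the synthetic
# "observed" spectrum are assumptions.

def q_beta(f):
    return 124.0 * f ** 0.98

def attenuation_correction(f, travel_time):
    """Inverse of exp(-pi*f*t/Q(f)) applied to the observed spectrum."""
    return np.exp(np.pi * f * travel_time / q_beta(f))

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

def grid_search(f, corrected):
    """Return (omega0, fc) minimizing the log-spectral misfit to a Brune model."""
    best, best_misfit = None, np.inf
    for omega0 in np.logspace(-7, -3, 80):
        for fc in np.linspace(0.2, 10.0, 100):
            misfit = np.sum((np.log(corrected) - np.log(brune(f, omega0, fc))) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (omega0, fc), misfit
    return best

if __name__ == "__main__":
    f = np.linspace(0.5, 20.0, 120)
    observed = brune(f, 1e-5, 2.0) * np.exp(-np.pi * f * 150.0 / q_beta(f))
    corrected = observed * attenuation_correction(f, travel_time=150.0)
    print(grid_search(f, corrected))   # should recover roughly (1e-5, 2.0)
```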

  12. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in the format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, use more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.

  13. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.

  14. Inferring rate and state friction parameters from a rupture model of the 1995 Hyogo-ken Nanbu (Kobe) Japan earthquake

    USGS Publications Warehouse

    Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.

    2001-01-01

    We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≈ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain a Gc = 1.5 × 10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
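
    A minimal sketch of the Dieterich-Ruina rate- and state-variable friction law referred to above, with aging-law state evolution integrated by forward Euler. The constitutive parameters and the imposed slip-rate step are illustrative assumptions; only the dc value is chosen inside the 0.01-0.05 m range quoted.

```python
import numpy as np

# Hedged sketch of Dieterich-Ruina rate-and-state friction with aging-law state
# evolution. mu0, a, b, V0 and the slip-rate history are illustrative assumptions;
# dc = 0.02 m lies within the 0.01-0.05 m range quoted in the abstract.

MU0, A, B = 0.6, 0.010, 0.015     # reference friction and rate-state parameters
V0, DC = 1e-6, 0.02               # reference velocity (m/s) and critical slip (m)

def friction(v, theta):
    """Instantaneous friction coefficient for slip rate v and state theta."""
    return MU0 + A * np.log(v / V0) + B * np.log(V0 * theta / DC)

def evolve(v_history, dt):
    """Integrate the aging law d(theta)/dt = 1 - v*theta/dc with forward Euler."""
    theta = DC / v_history[0]                 # start at steady state
    mu = []
    for v in v_history:
        mu.append(friction(v, theta))
        theta += dt * (1.0 - v * theta / DC)
    return np.array(mu)

if __name__ == "__main__":
    dt = 1e-3
    # step the slip rate from 1e-6 m/s to 1 m/s to mimic fast coseismic loading
    v = np.concatenate([np.full(2000, 1e-6), np.full(2000, 1.0)])
    mu = evolve(v, dt)
    print(mu[[0, 2000, -1]])   # steady state, direct-effect jump, new steady state
```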

  15. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic
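
    A minimal sketch of a modified (tapered) Gutenberg-Richter distribution of the kind described above: a power law in seismic moment whose slope is set by the b-value, multiplied by an exponential taper above a corner magnitude. The a-value, b-value and corner magnitude used here are illustrative assumptions, not the study's estimates.

```python
import numpy as np

# Hedged sketch of a tapered Gutenberg-Richter distribution: power law in seismic
# moment with an exponential taper above a corner magnitude. All parameter values
# below are illustrative assumptions.

def moment(m):
    """Seismic moment (N*m) from moment magnitude."""
    return 10.0 ** (1.5 * m + 9.1)

def tapered_gr_rate(m, a=1.5, b=1.0, corner_magnitude=8.0, m_min=5.4):
    """Rate of events with magnitude >= m, normalized to 10**a events above m_min."""
    beta = 2.0 * b / 3.0                                   # power-law slope in moment
    power_law = (moment(m_min) / moment(m)) ** beta
    taper = np.exp((moment(m_min) - moment(m)) / moment(corner_magnitude))
    return 10.0 ** a * power_law * taper

if __name__ == "__main__":
    for m in (5.4, 6.0, 7.0, 8.0, 8.5):
        print(m, tapered_gr_rate(m))
```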

  16. Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations

    NASA Astrophysics Data System (ADS)

    Gilmour, E.; Daub, E. G.

    2017-12-01

    Seismicity in Oklahoma has been sharply increasing as the result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Induced earthquakes have only been occurring in the central US for a short time; therefore, too few induced earthquakes have been observed in this region to know their maximum magnitude. The lack of knowledge regarding the maximum magnitude of induced earthquakes means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear if there are limits to their magnitudes. An estimate of the maximum magnitude of induced earthquakes is crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, those estimates are observational or statistical, and cannot take into account the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate. The simulations use the known parameters of principal stress orientation and rupture locations. We vary the other unknown parameters of the rupture simulations to obtain a large number of rupture simulation results reflecting different possible sets of parameters, and use these results to train a neural network to complete the rupture simulations. Then, using a Markov Chain Monte Carlo method to check different combinations of parameters, the trained neural network is used to create synthetic magnitude-frequency distributions to compare to the real earthquake catalog. This method allows us to find sets of parameters that are

  17. Relationships of earthquakes (and earthquake-associated mass movements) and polar motion as determined by Kalman filtered, Very-Long-Baseline-Interferometry

    NASA Technical Reports Server (NTRS)

    Preisig, Joseph Richard Mark

    1988-01-01

    A Kalman filter was designed to yield optimal estimates of geophysical parameters from Very Long Baseline Interferometry (VLBI) group delay data. The geophysical parameters are the polar motion components, adjustments to nutation in obliquity and longitude, and a change in the length of day parameter. The VLBI clock (and clock rate) parameters and atmospheric zenith delay parameters are estimated simultaneously. Filter background is explained. The IRIS (International Radio Interferometric Surveying) VLBI data are Kalman filtered. The resulting polar motion estimates are examined. There are polar motion signatures at the times of three large earthquakes occurring in 1984 to 1986: Mexico, 19 September, 1985 (magnitude Ms = 8.1); Chile, 3 March, 1985 (Ms = 7.8); and Taiwan, 14 November, 1986 (Ms = 7.8). Breaks in polar motion occurring about 20 days after the earthquakes appear to correlate well with the onset of increased regional seismic activity and a return to more normal seismicity (respectively). While the contribution of these three earthquakes to polar motion excitations is small, the cumulative excitation due to earthquakes, or seismic phenomena over a Chandler wobble damping period may be significant. Mechanisms for polar motion excitation due to solid earth phenomena are examined. Excitation functions are computed, but the data spans are too short to draw conclusions based on these data.
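
    A minimal sketch of the kind of Kalman filtering described above, reduced to a single random-walk state (one polar-motion component) estimated from noisy daily observations. The noise levels and synthetic data are assumptions, and the real filter estimates many more states (both pole components, nutation offsets, length of day, clocks and zenith delays) directly from VLBI group delays.

```python
import numpy as np

# Hedged sketch: a scalar random-walk Kalman filter standing in for one
# polar-motion component (milliarcseconds). Process/measurement variances and
# the synthetic data are assumptions.

def kalman_1d(observations, q=0.01, r=4.0, x0=0.0, p0=100.0):
    """Scalar random-walk Kalman filter; q and r are process/measurement variances."""
    x, p, estimates = x0, p0, []
    for z in observations:
        p = p + q                       # predict: the state is a random walk
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # update with the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    truth = np.cumsum(rng.normal(0.0, 0.1, 365))          # slowly wandering pole
    obs = truth + rng.normal(0.0, 2.0, truth.size)        # noisy daily estimates
    est = kalman_1d(obs)
    print(np.abs(est - truth).mean(), np.abs(obs - truth).mean())
```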

  18. Variations in the Parameters of Background Seismic Noise during the Preparation Stages of Strong Earthquakes in the Kamchatka Region

    NASA Astrophysics Data System (ADS)

    Kasimova, V. A.; Kopylova, G. N.; Lyubushin, A. A.

    2018-03-01

    The results of a long-term (2011-2016) investigation of background seismic noise (BSN) in Kamchatka by the method suggested by A.A. Lyubushin, using data from the network of broadband seismic stations of the Geophysical Survey of the Russian Academy of Sciences, are presented. To characterize the BSN field and its variability, continuous time series of the statistical parameters of the multifractal singularity spectra and wavelet expansion calculated from the records at each station are used. These parameters include the generalized Hurst exponent α*, the singularity-spectrum support width Δα, the wavelet spectral exponent β, the minimal normalized entropy of wavelet coefficients En, and the spectral measure of their coherent behavior. The peculiarities in the spatiotemporal distribution of the BSN parameters as a probable response to the earthquakes with Mw = 6.8-8.3 that occurred in Kamchatka in 2013 and 2016 are considered. It is established that these seismic events were preceded by regular variations in the BSN parameters, which lasted for a few months and consisted of a reduction in the median and mean α*, Δα, and β values estimated over all the stations and an increase in the En values. Based on the increase in the spectral measure of coherent behavior of the four-variate time series of the median and mean values of the considered statistics, an enhancement of synchronism in the joint (collective) behavior of these parameters during a certain period prior to the mantle earthquake in the Sea of Okhotsk (May 24, 2013, Mw = 8.3) is diagnosed. The procedures for revealing precursory effects in the variations of the BSN parameters are described and examples of these effects are presented.

  19. Earthquake ground motion: Chapter 3

    USGS Publications Warehouse

    Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy

    2016-01-01

    Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.

  20. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently founded by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on its members' decade-long experience in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software offering an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the November 23, 1980, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and the software then performs phase association and event binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude

  1. Inverting the parameters of an earthquake-ruptured fault with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Ting-To; Fernàndez, Josè; Rundle, John B.

    1998-03-01

    Natural selection is the spirit of the genetic algorithm (GA): good genes are kept in the current generation and thereby produce better offspring during evolution. The crossover function ensures the inheritance of good genes from parent to offspring, while the mutation process creates special genes whose character does not exist in the parent generation. A program based on the genetic algorithm, written in the C language, is constructed to invert the parameters of an earthquake-ruptured fault. The verification and application of this code are shown to demonstrate its capabilities. The code is able to find the global extremum and can be used to solve more practical problems with constraints gathered from other sources. GA is shown to be superior to other inversion schemes in many respects. This easy-to-use yet powerful algorithm should have many suitable applications in the field of geosciences.
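
    A minimal sketch of a genetic-algorithm inversion in the spirit described above, using selection, crossover and mutation to minimize a misfit between predicted and 'observed' data. The forward model here is a stand-in placeholder rather than an elastic dislocation code, and the three-parameter fault description (strike, dip, slip) is an assumption made only for illustration.

```python
import numpy as np

# Hedged sketch: a genetic algorithm evolving candidate fault parameters by
# selection, crossover and mutation. The forward model is a placeholder, not an
# elastic dislocation code; the parameterization is an illustrative assumption.

rng = np.random.default_rng(4)
LOW = np.array([0.0, 0.0, 0.0])          # strike (deg), dip (deg), slip (m)
HIGH = np.array([360.0, 90.0, 5.0])

def forward(params):
    """Placeholder forward model mapping fault parameters to synthetic observations."""
    strike, dip, slip = params
    x = np.linspace(0.0, 1.0, 20)
    return slip * np.sin(np.radians(dip)) * np.cos(2.0 * np.pi * x + np.radians(strike))

OBSERVED = forward(np.array([210.0, 45.0, 2.0]))     # "true" parameters to recover

def misfit(params):
    return np.sum((forward(params) - OBSERVED) ** 2)

def genetic_inversion(pop_size=60, generations=200, mutation_rate=0.1):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 3))
    for _ in range(generations):
        order = np.argsort([misfit(p) for p in pop])
        parents = pop[order[: pop_size // 2]]        # keep the "good genes"
        # crossover: each child mixes parameters from two randomly chosen parents
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, 3)) < 0.5
        pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # mutation: occasionally replace a parameter with a fresh random value
        mutate = rng.random(pop.shape) < mutation_rate
        pop = np.where(mutate, rng.uniform(LOW, HIGH, pop.shape), pop)
        pop[0] = parents[0]                          # elitism: keep the current best
    return min(pop, key=misfit)

if __name__ == "__main__":
    print(genetic_inversion())
```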

  2. Earthquake chemical precursors in groundwater: a review

    NASA Astrophysics Data System (ADS)

    Paudel, Shukra Raj; Banjara, Sushant Prasad; Wagle, Amrita; Freund, Friedemann T.

    2018-03-01

    We review changes in groundwater chemistry as precursory signs of earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity. However, the same parameters also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic activity has impeded progress in earthquake science. Short-term earthquake prediction is unlikely to be achieved by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra: it addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and electrochemical processes at the rock-water interface.

  3. SITE AMPLIFICATION OF EARTHQUAKE GROUND MOTION.

    USGS Publications Warehouse

    Hays, Walter W.

    1986-01-01

    When analyzing the patterns of damage in an earthquake, physical parameters of the total earthquake-site-structure system are correlated with the damage. Soil-structure interaction, the cause of damage in many earthquakes, involves the frequency-dependent response of both the soil-rock column and the structure. The response of the soil-rock column (called site amplification) is controversial because soil has strain-dependent properties that affect the way the soil column filters the input body and surface seismic waves, modifying the amplitude and phase spectra and the duration of the surface ground motion.

  4. Application of τc*Pd in earthquake early warning

    NASA Astrophysics Data System (ADS)

    Huang, Po-Lun; Lin, Ting-Li; Wu, Yih-Min

    2015-03-01

    Rapid on-station assessment of the damage potential and size of an earthquake is in high demand for onsite earthquake early warning. We study the application of τc*Pd to estimating earthquake size using 123 events recorded by the borehole stations of KiK-net in Japan. The measure of earthquake size determined by τc*Pd is more closely related to damage potential. We find that τc*Pd provides another parameter for measuring the size of an earthquake and a threshold for warning of strong ground motion.

  5. Application of τc*Pd for identifying damaging earthquakes for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Huang, P. L.; Lin, T. L.; Wu, Y. M.

    2014-12-01

    An Earthquake Early Warning System (EEWS) is an effective approach to mitigating earthquake damage. In this study, we used seismic records from the Kiban Kyoshin network (KiK-net), because it has dense station coverage and borehole strong-motion seismometers co-located with free-surface strong-motion seismometers. We used inland earthquakes with moment magnitudes (Mw) from 5.0 to 7.3 between 1998 and 2012, choosing 135 events and 10,950 strong-motion accelerograms recorded by 696 accelerographs. Both the free-surface and the borehole data were used to calculate τc and Pd. The results show that τc*Pd correlates well with PGV and is a robust parameter for assessing the potential of a damaging earthquake. We propose that the value of τc*Pd determined within seconds after the P-wave arrival could serve as a threshold for the on-site type of EEW.

  6. Earthquake magnitude estimation using the τc and Pd method for earthquake early warning systems

    NASA Astrophysics Data System (ADS)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important and also most difficult parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by KiK-net in Japan, together with aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two formulas are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. To ensure the stability and reliability of the magnitude estimates, we propose a compatibility test based on the nature of these two parameters. The reliability of the early warning information is significantly improved through this test.
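
    For reference, the sketch below shows the standard definitions of τc and Pd used in these early-warning studies, computed from the first few seconds of vertical-component displacement and velocity after the P arrival. The synthetic waveform, window length and sampling rate are assumptions, and real processing also involves instrument correction and high-pass filtering.

```python
import numpy as np

# Hedged sketch of the standard tau_c and Pd measurements: both are taken from the
# initial seconds of vertical displacement after the P arrival. The waveform,
# window and sampling rate below are assumptions for illustration.

def tau_c(displacement, velocity):
    """Characteristic period: tau_c = 2*pi*sqrt(int u^2 dt / int u'^2 dt)."""
    r = np.sum(velocity ** 2) / np.sum(displacement ** 2)   # dt cancels in the ratio
    return 2.0 * np.pi / np.sqrt(r)

def peak_displacement(displacement):
    """Pd: peak absolute displacement within the initial P-wave window."""
    return np.max(np.abs(displacement))

if __name__ == "__main__":
    dt, window = 0.01, 3.0                                   # 100 Hz samples, 3 s window
    t = np.arange(0.0, window, dt)
    disp = 1e-4 * np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-t) # synthetic P wave, metres
    vel = np.gradient(disp, dt)
    print(tau_c(disp, vel), peak_displacement(disp))
```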

  7. Multivariate statistical analysis to investigate the subduction zone parameters favoring the occurrence of giant megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Brizzi, S.; Sandri, L.; Funiciello, F.; Corbi, F.; Piromallo, C.; Heuret, A.

    2018-03-01

    The observed maximum magnitude of subduction megathrust earthquakes is highly variable worldwide. One key question is which conditions, if any, favor the occurrence of giant earthquakes (Mw ≥ 8.5). Here we carry out a multivariate statistical study in order to investigate the factors affecting the maximum magnitude of subduction megathrust earthquakes. We find that the trench-parallel extent of subduction zones and the thickness of trench sediments provide the largest discriminating capability between subduction zones that have experienced giant earthquakes and those having significantly lower maximum magnitude. Monte Carlo simulations show that the observed spatial distribution of giant earthquakes cannot be explained by pure chance to a statistically significant level. We suggest that the combination of a long subduction zone with thick trench sediments likely promotes a great lateral rupture propagation, characteristic of almost all giant earthquakes.

  8. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency radiation and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds; by comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model their source spectra in the frequency range of interest. Two other intermediate-depth events have slower rupturing processes, characterized by continuous energy release lasting about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake, first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  9. Source Parameters from Full Moment Tensor Inversions of Potentially Induced Earthquakes in Western Canada

    NASA Astrophysics Data System (ADS)

    Wang, R.; Gu, Y. J.; Schultz, R.; Kim, A.; Chen, Y.

    2015-12-01

    During the past four years, the number of earthquakes with magnitudes greater than three has substantially increased in the southern section of the Western Canada Sedimentary Basin (WCSB). While some of these events are likely associated with tectonic forces, especially along the foothills of the Canadian Rockies, a significant fraction occurred in previously quiescent regions and has been linked to wastewater disposal or hydraulic fracturing. A proper assessment of the origin and source properties of these 'induced earthquakes' requires careful analysis and modeling of regional broadband data, which have steadily improved during the past 8 years owing to the recent establishment of regional broadband seismic networks such as CRANE, RAVEN, and TD. Several earthquakes, especially those close to fracking activities (e.g., near the town of Fox Creek, Alberta), are analyzed. Our preliminary full moment tensor inversion results show maximum horizontal compressional orientations (P axes) along the northeast-southwest direction, which agree with the regional stress directions from borehole breakout data and with the P axes of historical events. The decomposition of these moment tensors shows evidence of strike-slip mechanisms with near-vertical fault-plane solutions, comparable to the focal mechanisms of injection-induced earthquakes in Oklahoma. Minimal isotropic components are observed, while a modest percentage of compensated-linear-vector-dipole (CLVD) components, which have been linked to fluid migration, may be required to match the waveforms. To further evaluate the non-double-couple components, we compare the outcomes of full, deviatoric, and pure double-couple (DC) inversions using multiple frequency ranges and phases. Improved location and depth information from a novel grid search greatly assists the identification and classification of earthquakes in potential connection with fluid injection or extraction. Overall, a systematic comparison of the source attributes of

  10. Early Warning for Large Magnitude Earthquakes: Is it feasible?

    NASA Astrophysics Data System (ADS)

    Zollo, A.; Colombelli, S.; Kanamori, H.

    2011-12-01

    The mega-thrust Mw 9.0, 2011 Tohoku earthquake has re-opened the discussion within the scientific community about the effectiveness of Earthquake Early Warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on real-time measurement of ground-motion parameters in a few-second window after the P-wave arrival. Currently, we use the initial peak displacement (Pd) and the predominant period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well-known problem with the real-time estimation of magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high-quality KiK-net and K-NET networks provided a large quantity of strong-motion records of the mainshock, with wide azimuthal coverage both along the Japan coast and inland. More than 300 three-component accelerograms were available, with epicentral distances ranging from about 100 km to more than 500 km. This earthquake thus presents an optimal case study for testing the physical basis of early warning and for investigating the feasibility of real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the main shock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing time window up to 30 seconds. The aim is to verify the correlation of these parameters with Peak Ground Velocity and Magnitude

  11. Source mechanisms and source parameters of March 10 and September 13, 2007, United Arab Emirates Earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzooqi, Y A; Abou Elenean, K M; Megahed, A S

    2008-02-29

    On March 10 and September 13, 2007, two felt earthquakes with moment magnitudes 3.66 and 3.94 occurred in the eastern part of the United Arab Emirates (UAE). The two events were accompanied by a few smaller events. Being well recorded by the UAE and Oman digital broadband stations, they provide an excellent opportunity to study the tectonic process and the present-day stress field acting on this area. In this study, we determined the focal mechanisms of the two main shocks by two methods (P-wave polarities and regional waveform inversion). Our results indicate a normal faulting mechanism with a slight strike-slip component for the two studied events, along a fault plane trending NNE-SSW, consistent with a suggested fault along the extension of the faults bounding the Bani Hamid area. The seismicity distribution between the two earthquake sequences reveals a noticeable gap that may be the site of a future event. The source parameters (seismic moment, moment magnitude, fault radius, stress drop, and displacement across the fault) were also estimated from the far-field displacement spectra and interpreted in the context of the tectonic setting.

  12. Source Parameters Inversion of the 2012 LA VEGA Colombia mw 7.2 Earthquake Using Near-Regional Waveform Data

    NASA Astrophysics Data System (ADS)

    Pedraza, P.; Poveda, E.; Blanco Chia, J. F.; Zahradnik, J.

    2013-05-01

    On September 30, 2012, an earthquake of magnitude Mw 7.2 occurred at a depth of ~170 km in southeastern Colombia. This seismic event is associated with the Nazca plate drifting eastward relative to the South America plate. The distribution of seismicity obtained by the National Seismological Network of Colombia (RSNC) since 1993 shows a segmented subduction zone with varying dip angles. The earthquake occurred in a seismic gap zone at intermediate depth. The recent deployment of broadband seismic stations in Colombia, as part of the Colombian Seismological Network operated by the Colombian Survey, has provided high-quality data to study the rupture process. We estimated the moment tensor, the centroid position, and the source time function. The parameters were obtained by inverting waveforms recorded by the RSNC at distances of 100 km to 800 km, modeled at 0.01-0.09 Hz using different 1D crustal models and taking advantage of the ISOLA code. The DC percentage of the earthquake is very high (~90%). The focal mechanism is mostly normal, hence the determination of the fault plane is challenging. An attempt to determine the fault plane was made based on the mutual relative position of the centroid and the hypocenter (H-C method). Studies in progress are devoted to searching for possible complexity of the fault rupture process (total duration of about 15 seconds), quantified by multiple-point-source models.

  13. Detection of co-seismic earthquake gravity field signals using GRACE-like mission simulations

    NASA Astrophysics Data System (ADS)

    Sharifi, Mohammad Ali; Shahamat, Abolfazl

    2017-05-01

    Since the launch of the GRACE satellite mission in 2002, the Earth's gravity field and its temporal variations have been measured more closely. Although these variations are mainly due to mass transfer in land water storage, they can also occur because of mass movements related to natural phenomena including earthquakes, volcanic eruptions, melting of polar ice caps, and glacial isostatic adjustment. This paper therefore examines which earthquake parameters GRACE-like satellite missions are most sensitive to. For this purpose, the parameters of the recent Maule earthquake and of the 1964 Alaska earthquake were chosen, and several of their parameters were then varied. The GRACE-like sensitivity is assessed by simulating the earthquakes and the gravity changes they cause, using dislocation theory for a half-space Earth. The faulting parameters considered include fault length, width, depth, and average slip. The evaluation of these changes shows that GRACE-like satellite missions tend to be more sensitive to fault width than to length, and to dip variations than to the other parameters. This analysis can be useful to designers of upcoming mission scenarios and to seismologists in their study of fault parameters.

  14. Earthquake Loss Scenarios in the Himalayas

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Gupta, S.; Rosset, P.; Chamlagain, D.

    2017-12-01

    We estimate quantitatively that in repeats of the 1555 and 1505 great Himalayan earthquakes the fatalities may range from 51K to 549K, the injured from 157K to 1,700K, and the strongly affected population (intensity ≥ VI) from 15 to 75 million, depending on the details of the assumed earthquake parameters. For up-dip ruptures in the stressed segments of the M7.8 Gorkha 2015, the M7.9 Subansiri 1947, and the M7.8 Kangra 1905 earthquakes, we estimate 62K, 100K, and 200K fatalities, respectively, and 8, 12, and 33 million strongly affected people, respectively. These loss calculations are based on verifications of the QLARM algorithms and data set in the cases of the M7.8 Gorkha 2015, the M7.8 Kashmir 2005, the M6.6 Chamoli 1999, the M6.8 Uttarkashi 1991, and the M7.8 Kangra 1905 earthquakes. The requirement for verification, fulfilled in these test cases, was that the reported intensity field and the fatality count had to match approximately, using the known parameters of the earthquakes. The apparent attenuation factor was a free parameter and ranged within acceptable values. Population numbers were adjusted for the years in question from the latest census. The hour of day was assumed to be at night, with maximum occupancy. The assumption that the upper half of the Main Frontal Thrust (MFT) will rupture in companion earthquakes to historic earthquakes in the down-dip half is based on observations of several meters of displacement in trenches across the MFT outcrop. Among mitigation measures, awareness with training and adherence to construction codes rank highest. Retrofitting of schools and hospitals would save lives and prevent injuries. Preparation plans for helping millions of strongly affected people should be put in place. These mitigation efforts should focus on an approximately 7 km wide strip along the MFT on the up-thrown side, because the strong motions there are likely to be doubled. We emphasize that our estimates

  15. Earthquake source properties from pseudotachylite

    USGS Publications Warehouse

    Beeler, Nicholas M.; Di Toro, Giulio; Nielsen, Stefan

    2016-01-01

    The motions radiated from an earthquake contain information that can be interpreted as displacements within the source and therefore related to stress drop. Except in a few notable cases, the source displacements can neither be easily related to the absolute stress level or fault strength, nor attributed to a particular physical mechanism. In contrast, paleo-earthquakes recorded by exhumed pseudotachylite have a known dynamic mechanism whose properties constrain the co-seismic fault strength. Pseudotachylite can also be used to directly address a longstanding discrepancy between seismologically measured static stress drops, which are typically a few MPa, and much larger dynamic stress drops expected from thermal weakening during localized slip at seismic speeds in crystalline rock [Sibson, 1973; McKenzie and Brune, 1969; Lachenbruch, 1980; Mase and Smith, 1986; Rice, 2006], as have been observed recently in laboratory experiments at high slip rates [Di Toro et al., 2006a]. This note places pseudotachylite-derived estimates of fault strength and inferred stress levels within the context and broader bounds of naturally observed earthquake source parameters: apparent stress, stress drop, and overshoot, including consideration of roughness of the fault surface, off-fault damage, fracture energy, and the 'strength excess'. The analysis, which assumes stress drop is related to corner frequency by the Madariaga [1976] source model, is restricted to the intermediate-sized earthquakes of the Gole Larghe fault zone in the Italian Alps, where the dynamic shear strength is well constrained by field and laboratory measurements. We find that radiated energy exceeds the shear-generated heat and that the maximum strength excess is ~16 MPa. More generally, these events have inferred earthquake source parameters that are rare; for instance, only a few percent of the global earthquake population has stress drops as large, unless fracture energy is routinely greater than existing models allow

  16. Visible Earthquakes: a web-based tool for visualizing and modeling InSAR earthquake data

    NASA Astrophysics Data System (ADS)

    Funning, G. J.; Cockett, R.

    2012-12-01

    InSAR (Interferometric Synthetic Aperture Radar) is a technique for measuring the deformation of the ground using satellite radar data. One of the principal applications of this method is in the study of earthquakes; in the past 20 years over 70 earthquakes have been studied in this way, and forthcoming satellite missions promise to enable the routine and timely study of events in the future. Despite the utility of the technique and its widespread adoption by the research community, InSAR does not feature in the teaching curricula of most university geoscience departments. This is, we believe, due to a lack of accessibility to software and data. Existing tools for the visualization and modeling of interferograms are often research-oriented, command line-based and/or prohibitively expensive. Here we present a new web-based interactive tool for comparing real InSAR data with simple elastic models. The overall design of this tool was focused on ease of access and use. This tool should allow interested nonspecialists to gain a feel for the use of such data and greatly facilitate integration of InSAR into upper division geoscience courses, giving students practice in comparing actual data to modeled results. The tool, provisionally named 'Visible Earthquakes', uses web-based technologies to instantly render the displacement field that would be observable using InSAR for a given fault location, geometry, orientation, and slip. The user can adjust these 'source parameters' using a simple, clickable interface, and see how these affect the resulting model interferogram. By visually matching the model interferogram to a real earthquake interferogram (processed separately and included in the web tool) a user can produce their own estimates of the earthquake's source parameters. Once satisfied with the fit of their models, users can submit their results and see how they compare with the distribution of all other contributed earthquake models, as well as the mean and median

  17. Prediction of Strong Earthquake Ground Motion for the M=7.4 and M=7.2 1999, Turkey Earthquakes based upon Geological Structure Modeling and Local Earthquake Recordings

    NASA Astrophysics Data System (ADS)

    Gok, R.; Hutchings, L.

    2004-05-01

    We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by a priori knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small- to moderate-sized earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
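
    The deconvolution step described above relies on the Brune source model: dividing a small event's spectrum by an omega-squared spectrum with its inverted moment and corner frequency removes the source signature so the record can act as an empirical Green's function. A minimal sketch of the model spectrum is shown below; the moment and corner-frequency values are placeholders, and the spectral-division (water-level) details are omitted.

```python
import numpy as np

def brune_displacement_spectrum(freq_hz, m0, fc_hz):
    """Brune (1970) omega-squared far-field displacement source spectrum:
    flat at the moment level below fc, falling off as f**-2 above it."""
    return m0 / (1.0 + (freq_hz / fc_hz) ** 2)

# Model spectrum for a small event; dividing its observed spectrum by this
# (with water-level damping) leaves an approximate empirical Green's function.
f = np.logspace(-1, 1.4, 200)                      # ~0.1 to 25 Hz
model = brune_displacement_spectrum(f, m0=1.0e14, fc_hz=2.0)
print(model[0], model[-1])
```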

  18. Connecting slow earthquakes to huge earthquakes.

    PubMed

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  19. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one
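
    BEAT itself implements far more capable samplers and forward models; the toy sketch below only illustrates the basic Bayesian building block the abstract refers to, namely drawing samples from a posterior over a source parameter given data, a forward model, and a prior. The one-parameter "slip" problem, the three-station forward model, and the data values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(slip_m):
    """Toy forward model: predicted displacement at three stations (m)."""
    return slip_m * np.array([0.6, 0.3, 0.1])

observed = np.array([0.61, 0.28, 0.12])    # synthetic "data"
sigma = 0.02                               # assumed data uncertainty (m)

def log_posterior(slip_m):
    if not 0.0 < slip_m < 10.0:            # uniform prior on (0, 10) m
        return -np.inf
    resid = observed - forward(slip_m)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Plain Metropolis random walk; BEAT uses adaptive/transitional samplers instead.
samples, current = [], 1.0
for _ in range(20000):
    proposal = current + rng.normal(scale=0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

print("posterior mean and std of slip:",
      np.mean(samples[5000:]), np.std(samples[5000:]))
```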

  20. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy to use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of Shake Maps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are

  1. Using Groundwater physiochemical properties for assessing potential earthquake precursor

    NASA Astrophysics Data System (ADS)

    Inbar, Nimrod; Reuveni, Yuval; Anker, Yaakov; Guttman, Joseph

    2017-04-01

    Worldwide studies report pre-seismic, co-seismic and post-seismic reactions of groundwater to earthquakes. The unique hydrological and geological situation in Israel has resulted in relatively deep water wells which are located close to a seismically active tectonic plate boundary. Moreover, the Israeli experience shows that anomalies may occur 60-90 minutes prior to the seismic event (Guttman et al., 2005; Anker et al., 2016). Here, we try to assess the possible connection between changes in physiochemical parameters of groundwater and earthquakes along the Dead Sea Transform (DST) region. A designated network of monitoring stations was installed in MEKOROT abandoned deep water wells, continuously measuring water table, conductivity and temperature at a sampling rate of 1 minute. Preliminary analysis compares changes in the measured parameters with rain events, tidal effects and earthquake occurrences of all measured magnitudes (>2.5 Md) in the monitoring area surroundings. The data set acquired over one year recorded simultaneous abrupt changes in several wells which seem disconnected from standard hydrological occurrences such as precipitation, abstraction or tidal effects. At this stage, our research aims to determine and rationalize a baseline for the "normal response" of the measured parameters to external occurrences while isolating those cases in which "deviations" from that baseline are recorded. We apply several analysis techniques both in the time and frequency domains to the measured signal, as well as statistical analysis of several measured earthquake parameters, which indicate potential correlations between earthquake occurrences and the measured signal. We show that in at least one seismic event (5.1 Md) a potential precursor may have been recorded. Reference: Anker, Y., N. Inbar, A. Y. Dror, Y. Reuveni, J. Guttman, A. Flexer, (2016). Groundwater response to ground movements, as a tool for earthquakes monitoring and a possible precursor. 8th International Conference

  2. The source parameters, surface deformation and tectonic setting of three recent earthquakes: Thessaloniki (Greece), Tabas-e-Golshan (Iran) and Carlisle (U.K.).

    PubMed

    King, G; Soufleris, C; Berberian, M

    1981-03-01

    Three earthquakes have been studied. These are the Thessaloniki earthquake of 20th June 1978 (Ms = 6.4, normal faulting), the Tabas-e-Golshan earthquake of 16th September 1978 (Ms = 7.7, thrust faulting) and the Carlisle earthquake of 26th December 1979 (Mb = 5.0, thrust faulting). The techniques employed to determine source parameters included field studies of surface deformation, fault breaks, locations of locally recorded aftershocks and teleseismic studies including joint hypocentral location, first-motion methods and waveform modelling. It is clear that these techniques applied together provide more information than the same methods used separately. The moment of the Thessaloniki earthquake determined teleseismically (force moment 5.2 × 10^25 dyne cm; geometric moment 1.72 × 10^8 m^3) is an order of magnitude greater than that determined using field data (surface ruptures and aftershock depths) (force moment 4.5 × 10^24 dyne cm; geometric moment 0.16 × 10^8 m^3). It is concluded that for this earthquake the surface rupture only partly reflects the processes on the main rupture plane. This view is supported by a distribution of aftershocks and damage which extends well outside the region of ground rupture. However, the surface breaks consistently have the same slip vector direction as the fault plane solutions, suggesting that they are in this respect related to the main faulting and are not superficial slumping. Both field studies and waveform studies suggest a low stress drop, which may explain the relatively little damage and loss of life as a result of the Thessaloniki earthquake. In contrast, the teleseismic moment of the Tabas-e-Golshan earthquake (force moment 4.4 × 10^26 dyne cm; geometric moment 1.5 × 10^9 m^3) is similar to that determined from field studies (force moment 10.2 × 10^26 dyne cm; geometric moment 3.4 × 10^9 m^3) and the damage and aftershock distributions clearly relate to the
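
    The two moment measures quoted for each event are linked by the crustal rigidity: the force (seismic) moment is the geometric moment (fault area times slip) multiplied by the shear modulus. The short check below uses the Thessaloniki teleseismic values from this abstract and recovers a rigidity of about 3 × 10^10 Pa, a typical crustal value; the unit conversion is the only assumption added.

```python
# Force (seismic) moment M0 = rigidity * geometric moment (fault area * slip).
# Values are the Thessaloniki teleseismic estimates quoted in the abstract above.
m0_dyne_cm = 5.2e25
m0_nm = m0_dyne_cm * 1.0e-7            # 1 dyne*cm = 1e-7 N*m
geometric_moment_m3 = 1.72e8

rigidity_pa = m0_nm / geometric_moment_m3
print(f"implied rigidity ~ {rigidity_pa:.1e} Pa")   # ~3e10 Pa
```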

  3. Constraining Earthquake Source Parameters in Rupture Patches and Rupture Barriers on Gofar Transform Fault, East Pacific Rise from Ocean Bottom Seismic Data

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.

    2015-12-01

    On the Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low-frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance is low between the data and model. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method where variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies from rupture patches and rupture barriers are not discernible. Using well-constrained source parameters, we find an
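
    A minimal sketch of the spectral-ratio step is given below, assuming both the target event and its empirical Green's function follow omega-squared source models, so their ratio depends only on the moment ratio and the two corner frequencies. The synthetic "observed" ratio, noise level, and grid-search ranges are illustrative; the study's variance-based selection criteria are omitted.

```python
import numpy as np

def omega_squared_ratio(f, moment_ratio, fc_big, fc_small):
    """Spectral ratio of a larger event over its empirical Green's function,
    assuming both follow an omega-squared (Brune-type) source model."""
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_big) ** 2)

# Synthetic ratio standing in for an observed one (5% multiplicative noise).
rng = np.random.default_rng(1)
f = np.linspace(0.5, 25.0, 200)
observed = omega_squared_ratio(f, 300.0, 2.0, 12.0) * (1.0 + 0.05 * rng.standard_normal(f.size))

# Moment ratio from the low-frequency level, then grid search over corner frequencies.
moment_ratio = np.median(observed[f < 1.0])
best, best_misfit = None, np.inf
for fc_big in np.arange(0.5, 10.0, 0.05):
    for fc_small in np.arange(fc_big, 30.0, 0.25):
        model = omega_squared_ratio(f, moment_ratio, fc_big, fc_small)
        misfit = np.sum((np.log(observed) - np.log(model)) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (fc_big, fc_small), misfit

print("best-fit corner frequencies (Hz):", best)
```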

  4. The Richter scale: its development and use for determining earthquake source parameters

    USGS Publications Warehouse

    Boore, D.M.

    1989-01-01

    The ML scale, introduced by Richter in 1935, is the antecedent of every magnitude scale in use today. The scale is defined such that a magnitude-3 earthquake recorded on a Wood-Anderson torsion seismometer at a distance of 100 km would write a record with a peak excursion of 1 mm. To be useful, some means are needed to correct recordings to the standard distance of 100 km. Richter provides a table of correction values, which he terms -log A0, the latest of which is contained in his 1958 textbook. A new analysis of over 9000 readings from almost 1000 earthquakes in the southern California region was recently completed to redetermine the -log A0 values. Although some systematic differences were found between this analysis and Richter's values (such that using Richter's values would lead to under- and overestimates of ML at distances less than 40 km and greater than 200 km, respectively), the accuracy of his values is remarkable in view of the small number of data used in their determination. Richter's corrections for the distance attenuation of the peak amplitudes on Wood-Anderson seismographs apply only to the southern California region, of course, and should not be used in other areas without first checking to make sure that they are applicable. Often in the past this has not been done, but recently a number of papers have been published determining the corrections for other areas. If there are significant differences in the attenuation within 100 km between regions, then the definition of the magnitude at 100 km could lead to difficulty in comparing the sizes of earthquakes in various parts of the world. To alleviate this, it is proposed that the scale be defined such that a magnitude 3 corresponds to 10 mm of motion at 17 km. This is consistent both with Richter's definition of ML at 100 km and with the newly determined distance corrections in the southern California region. Aside from the obvious (and original) use as a means of cataloguing earthquakes according
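
    In code form, the scale is simply the logarithm of the Wood-Anderson amplitude corrected by a regional -log A0 term. The sketch below uses the southern California parameterization of Hutton and Boore (1987) as one example of such a calibration (coefficients quoted from memory, so treat them as approximate); it reproduces both anchor points discussed above, 1 mm at 100 km and 10 mm at 17 km giving ML 3.

```python
import numpy as np

def ml_hutton_boore(amplitude_mm, distance_km):
    """Local magnitude from a Wood-Anderson peak amplitude (mm), using the
    southern California -log A0 parameterization of Hutton and Boore (1987);
    other regions need their own distance corrections."""
    return (np.log10(amplitude_mm)
            + 1.110 * np.log10(distance_km / 100.0)
            + 0.00189 * (distance_km - 100.0)
            + 3.0)

# Consistency checks against the two anchor points discussed in the abstract:
print(ml_hutton_boore(1.0, 100.0))   # ~3.0  (1 mm at 100 km)
print(ml_hutton_boore(10.0, 17.0))   # ~3.0  (10 mm at 17 km)
```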

  5. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model
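
    One common way to write a "Gutenberg-Richter distribution with a corner magnitude" of the kind described above is a tapered power law in seismic moment. The sketch below shows that form; it is an assumed illustration, not necessarily the specific parameterization used in this study, and the rate, b-value, and corner-magnitude numbers are placeholders.

```python
import numpy as np

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks and Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

def tapered_gr_rate(mw, rate_at_threshold, mw_threshold, b_value, corner_mw):
    """Annual rate of events with magnitude >= mw under a tapered
    Gutenberg-Richter law (power law in moment with an exponential corner)."""
    beta = 2.0 / 3.0 * b_value
    m = moment_from_mw(mw)
    mt = moment_from_mw(mw_threshold)
    mc = moment_from_mw(corner_mw)
    return rate_at_threshold * (mt / m) ** beta * np.exp((mt - m) / mc)

# Placeholder parameters only (not the values estimated in the study).
for mw in (5.4, 6.0, 7.0, 8.0):
    print(mw, tapered_gr_rate(mw, rate_at_threshold=10.0, mw_threshold=5.4,
                              b_value=1.0, corner_mw=8.0))
```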

  6. The USGS Earthquake Notification Service (ENS): Customizable notifications of earthquakes around the globe

    USGS Publications Warehouse

    Wald, Lisa A.; Wald, David J.; Schwarz, Stan; Presgrave, Bruce; Earle, Paul S.; Martinez, Eric; Oppenheimer, David

    2008-01-01

    At the beginning of 2006, the U.S. Geological Survey (USGS) Earthquake Hazards Program (EHP) introduced a new automated Earthquake Notification Service (ENS) to take the place of the National Earthquake Information Center (NEIC) "Bigquake" system and the various other individual EHP e-mail list-servers for separate regions in the United States. These included northern California, southern California, and the central and eastern United States. ENS is a "one-stop shopping" system that allows Internet users to subscribe to flexible and customizable notifications for earthquakes anywhere in the world. The customization capability allows users to define the what (magnitude threshold), the when (day and night thresholds), and the where (specific regions) for their notifications. Customization is achieved by employing a per-user based request profile, allowing the notifications to be tailored for each individual's requirements. Such earthquake-parameter-specific custom delivery was not possible with simple e-mail list-servers. Now that event and user profiles are in a structured query language (SQL) database, additional flexibility is possible. At the time of this writing, ENS had more than 114,000 subscribers, with more than 200,000 separate user profiles. On a typical day, more than 188,000 messages get sent to a variety of widely distributed users for a wide range of earthquake locations and magnitudes. The purpose of this article is to describe how ENS works, highlight the features it offers, and summarize plans for future developments.

  7. Determination of Source Parameters for Earthquakes in the Northeastern United States and Quebec, Canada by Using Regional Broadband Seismograms

    NASA Astrophysics Data System (ADS)

    Du, W.; Kim, W.; Sykes, L. R.

    2001-05-01

    We studied approximately 20 earthquakes which have occurred in the Northeastern United States and Quebec, southern Canada, since 1990. These earthquakes have local magnitudes (ML) ranging from 3.5 to 5.2 and are well recorded by broadband seismographic stations in the region. Focal depths and moment tensors of these earthquakes are determined by using a waveform inversion technique in which the best-fit double-couple mechanism is obtained through a grid search over strike, dip and rake angles. Complete synthetics for three-component displacement signals in the period range 1 to 30 seconds are calculated. In most cases, long-period Pnl and surface waves are used to constrain the source parameters. Our results indicate that most of the events show horizontal compression with near-horizontal P axes striking NE-SW. However, three events along the lower St. Lawrence River show P axes striking ESE-SE (100-130 degrees) with plunge angles of about 20 degrees. Focal depths of these events range from 2 to 28 km. Four events along the Appalachian Mts. have occurred at 2 to 5 km depths -- the Jan. 16, 1994 Reading, PA sequence, the Sep. 25, 1998 Pymatuning, PA event, the Jan. 26, 2001 Ashtabula, OH earthquake and an event in the Charlevoix seismic zone, Canada (Oct. 28, 1997). Two events have occurred at depths greater than 20 km. These are the Quebec City earthquake on Nov. 6, 1997 and the Christieville, Quebec event on May 4, 1997. We also observed an apparent discrepancy between the moment magnitude (Mw) and local magnitude (ML). Preliminary results show that for the events studied, Mw tends to be about 0.3 magnitude units smaller than the corresponding ML. However, some events show comparable Mw and ML values, for instance, the 1994 Reading, PA sequence and the Oct. 28, 1997 Charlevoix earthquake. These events have occurred at shallow depths and show low stress drops (less than 100 bars). We believe that this magnitude discrepancy reflects the source characteristics of intraplate events in
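
    The Mw side of the reported Mw-ML comparison follows from the moment estimated in the inversion via the Hanks and Kanamori (1979) relation, Mw = (2/3)(log10 M0 - 9.05) with M0 in N·m. A small sketch with an invented moment and an invented catalog ML is shown below.

```python
import math

def mw_from_moment(m0_nm):
    """Moment magnitude from seismic moment in N*m (Hanks and Kanamori, 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.05)

# Hypothetical example: an inverted moment of 4.0e14 N*m against a catalog ML of 4.0.
m0 = 4.0e14
ml = 4.0
mw = mw_from_moment(m0)
print(f"Mw = {mw:.2f}, ML - Mw = {ml - mw:.2f}")   # a ~0.3-unit discrepancy
```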

  8. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    USGS Publications Warehouse

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.

  9. Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California

    USGS Publications Warehouse

    Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.

    2000-01-01

    We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that was used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS) Petersen et al., 1996; Frankel et al., 1996]. On average the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall earthquake rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the

  10. Coseismic deformation observed with radar interferometry: Great earthquakes and atmospheric noise

    NASA Astrophysics Data System (ADS)

    Scott, Chelsea Phipps

    Spatially dense maps of coseismic deformation derived from Interferometric Synthetic Aperture Radar (InSAR) datasets result in valuable constraints on earthquake processes. The recent increase in the quantity of observations of coseismic deformation facilitates the examination of signals in many tectonic environments associated with earthquakes of varying magnitude. Efforts to place robust constraints on the evolution of the crustal stress field following great earthquakes often rely on knowledge of the earthquake location, the fault geometry, and the distribution of slip along the fault plane. Well-characterized uncertainties and biases strengthen the quality of inferred earthquake source parameters, particularly when the associated ground displacement signals are near the detection limit. Well-preserved geomorphic records of earthquakes offer additional insight into the mechanical behavior of the shallow crust and the kinematics of plate boundary systems. Together, geodetic and geologic observations of crustal deformation offer insight into the processes that drive seismic cycle deformation over a range of timescales. In this thesis, I examine several challenges associated with the inversion of earthquake source parameters from SAR data. Variations in atmospheric humidity, temperature, and pressure at the timing of SAR acquisitions result in spatially correlated phase delays that are challenging to distinguish from signals of real ground deformation. I characterize the impact of atmospheric noise on inferred earthquake source parameters following elevation-dependent atmospheric corrections. I analyze the spatial and temporal variations in the statistics of atmospheric noise from both reanalysis weather models and InSAR data itself. Using statistics that reflect the spatial heterogeneity of atmospheric characteristics, I examine parameter errors for several synthetic cases of fault slip on a basin-bounding normal fault. I show a decrease in uncertainty in fault

  11. The HayWired Earthquake Scenario—Earthquake Hazards

    USGS Publications Warehouse

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  12. Earthquake cycles and physical modeling of the process leading up to a large earthquake

    NASA Astrophysics Data System (ADS)

    Ohnaka, Mitiyasu

    2004-08-01

    A thorough discussion is made on what the rational constitutive law for earthquake ruptures ought to be from the standpoint of the physics of rock friction and fracture on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology of forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of the elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling of intermediate-term forecasting, and phase III in physical modeling of short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale-dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling, in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step for establishing the methodology for forecasting large earthquakes.
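
    A minimal sketch of a slip-dependent constitutive relation of the kind argued for above, in its simplest linear slip-weakening form (strength decays from a peak to a residual value over a characteristic slip Dc). The parameter values are illustrative, and the slip-rate or time dependence mentioned in the abstract is not represented.

```python
import numpy as np

def slip_weakening_strength(slip_m, tau_peak_mpa, tau_residual_mpa, dc_m):
    """Linear slip-weakening law: shear strength falls from tau_peak to
    tau_residual as slip grows from 0 to the critical slip distance Dc."""
    frac = np.clip(np.asarray(slip_m, dtype=float) / dc_m, 0.0, 1.0)
    return tau_peak_mpa - (tau_peak_mpa - tau_residual_mpa) * frac

# Illustrative values: 20 MPa peak strength, 5 MPa residual strength, Dc = 0.4 m.
for u in (0.0, 0.1, 0.4, 1.0):
    print(f"slip = {u:.1f} m, strength = {slip_weakening_strength(u, 20.0, 5.0, 0.4):.1f} MPa")
```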

  13. The Differences in Source Dynamics Between Intermediate-Depth and Deep Earthquakes: A Comparative Study Between the 2014 Rat Islands Intermediate-Depth Earthquake and the 2015 Bonin Islands Deep Earthquake

    NASA Astrophysics Data System (ADS)

    Twardzik, C.; Ji, C.

    2015-12-01

    It has been proposed that the mechanisms for intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there have been only a few investigations of their dynamic characteristics, especially fracture energy. In this work, the 2014 Mw 7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw 7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake breaks the entire cold core of the subducting slab, defined as the depth of the 650 °C isotherm. The inverted stress drop is 4 MPa, comparable to that of intra-plate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking seismicity for the past forty years according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we subsequently investigate their dynamic rupture models. We use a simple modelling approach in which we assume that the dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We constrain the three dynamic parameters, namely the yield stress, the background stress prior to rupture and the slip-weakening distance, as well as the shape of the elliptical patches, directly from teleseismic body-wave observations. The study would help us

  14. Earthquake hazard and risk assessment based on Unified Scaling Law for Earthquakes: Greater Caucasus and Crimea

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.; Nekrasova, Anastasia K.

    2018-05-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE stands for an empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use parameters A, B, and C of USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After rigorous verification against the available seismic evidence in the past (usually, the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures (e.g., those based on census of population and building inventory). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of Greater Caucasus and Crimea.
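
    The USLE expression quoted above translates directly into an expected annual count once A, B, and C are known for a node. A minimal sketch follows; the coefficients in the example are placeholders, since the real values are estimated per region and node in the study.

```python
import math

def usle_annual_rate(magnitude, length_km, a, b, c):
    """Expected annual number N(M, L) of earthquakes with magnitude >= M inside
    a seismically prone area of linear dimension L (km), from the USLE:
    log10 N(M, L) = A + B*(5 - M) + C*log10(L)."""
    return 10.0 ** (a + b * (5.0 - magnitude) + c * math.log10(length_km))

# Placeholder coefficients; real A, B, C are mapped per node/region in the study.
print(usle_annual_rate(6.0, length_km=100.0, a=-1.5, b=0.9, c=1.2))
```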

  15. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.

  16. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth

  17. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to large interplate earthquakes, in which M0 varies as L^2, or u = αL, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which α ≈ 1 × 10^-5, for the intraplate events α ≈ 6 × 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. This implies that intraplate faults have a higher frictional strength than plate boundaries, and hence, that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.

  18. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.

  19. Intraplate triggered earthquakes: Observations and interpretation

    USGS Publications Warehouse

    Hough, S.E.; Seeber, L.; Armbruster, J.G.

    2003-01-01

    We present evidence that at least two of the three 1811-1812 New Madrid, central United States, mainshocks and the 1886 Charleston, South Carolina, earthquake triggered earthquakes at regional distances. In addition to previously published evidence for triggered earthquakes in the northern Kentucky/southern Ohio region in 1812, we present evidence suggesting that triggered events might have occurred in the Wabash Valley, to the south of the New Madrid Seismic Zone, and near Charleston, South Carolina. We also discuss evidence that earthquakes might have been triggered in northern Kentucky within seconds of the passage of surface waves from the 23 January 1812 New Madrid mainshock. After the 1886 Charleston earthquake, accounts suggest that triggered events occurred near Moodus, Connecticut, and in southern Indiana. Notwithstanding the uncertainty associated with analysis of historical accounts, there is evidence that at least three out of the four known Mw 7 earthquakes in the central and eastern United States seem to have triggered earthquakes at distances beyond the typically assumed aftershock zone of 1-2 mainshock fault lengths. We explore the possibility that remotely triggered earthquakes might be common in low-strain-rate regions. We suggest that in a low-strain-rate environment, permanent, nonelastic deformation might play a more important role in stress accumulation than it does in interplate crust. Using a simple model incorporating elastic and anelastic strain release, we show that, for realistic parameter values, faults in intraplate crust remain close to their failure stress for a longer part of the earthquake cycle than do faults in high-strain-rate regions. Our results further suggest that remotely triggered earthquakes occur preferentially in regions of recent and/or future seismic activity, which suggests that faults are at a critical stress state in only some areas. Remotely triggered earthquakes may thus serve as beacons that identify regions of

  20. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: Altai-Sayan Region

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.; Nekrasova, A.

    2017-12-01

    We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes, USLE, which generalizes the Gutenberg-Richter relationship making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE stands for an empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use parameters A, B, and C of USLE to estimate, first, the expected maximum credible magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After rigorous testing against the available seismic evidence in the past (usually, the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures (e.g., those based on census of population and building inventory). This USLE-based methodology of seismic hazard and risk assessment is applied to the territory of the Altai-Sayan region of Russia. The study is supported by the Russian Science Foundation Grant No. 15-17-30020.

  1. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters, and, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and using shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the availability of an inventory of the human-built environment (Loss Mapping). Both Level 0 (similar to the PAGER system of USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). Level 0 analysis is similar to the PAGER system being developed by USGS. For given

  2. Prospective Validation of Pre-earthquake Atmospheric Signals and Their Potential for Short–term Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Lee, Lou; Liu, Tiger; Kafatos, Menas

    2015-04-01

    We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is: "Are such pre-earthquake atmospheric/ionospheric signals significant, and could they be useful for early warning of large earthquakes?" To check the predictive potential of atmospheric pre-earthquake signals we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature and seismicity patterns) that were found to be associated with earthquakes. The science rationale for multidisciplinary analysis is based on the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually named short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies with potential for M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show STIR anomalous behavior before all of these events, with false negatives close to zero. The false alarm ratio for false positives is less than 25%. The initial prospective testing for STIR shows a systematic appearance of anomalies in advance (1-30 days) of the M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies, one to several days prior to the largest earthquakes. That feature could be

  3. Losses to single-family housing from ground motions in the 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.; Leyendecker, E.V.; Roth, R.J.; Petersen, M.D.

    2004-01-01

    The distributions of insured losses to single-family housing following the 1994 Northridge, California, earthquake for 234 ZIP codes can be satisfactorily modeled with gamma distributions. Regressions of the parameters in the gamma distribution on estimates of ground motion, derived from ShakeMap estimates or from interpolated observations, provide a basis for developing curves of conditional probability of loss given a ground motion. Comparison of the resulting estimates of aggregate loss with the actual aggregate loss gives satisfactory agreement for several different ground-motion parameters. Estimates of loss based on a deterministic spatial model of the earthquake ground motion, using standard attenuation relationships and NEHRP soil factors, give satisfactory results for some ground-motion parameters if the input ground motions are increased about one and one-half standard deviations above the median, reflecting the fact that the ground motions for the Northridge earthquake tended to be higher than the median ground motion for other earthquakes with similar magnitude. The results give promise for making estimates of insured losses to a similar building stock under future earthquake loading. © 2004, Earthquake Engineering Research Institute.
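
    A minimal sketch of the fitting step described above: fit a gamma distribution to per-policy loss ratios for one ZIP code and evaluate a conditional exceedance probability. The loss data are synthetic, scipy's generic gamma fit stands in for the study's procedure, and the regression of the gamma parameters on ground motion is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-policy loss ratios (loss / insured value) for one ZIP code.
loss_ratio = rng.gamma(shape=0.8, scale=0.02, size=500)

# Fit a gamma distribution (location fixed at zero, as for non-negative losses).
shape, loc, scale = stats.gamma.fit(loss_ratio, floc=0.0)

# Conditional probability that a policy's loss ratio exceeds 5 percent.
p_exceed = stats.gamma.sf(0.05, shape, loc=loc, scale=scale)
print(f"fitted shape={shape:.2f}, scale={scale:.4f}, P(loss ratio > 5%)={p_exceed:.3f}")
```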

  4. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed M>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
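
    A minimal sketch of the second (smoothing) approach described above: each simulated epicenter contributes one unit of rate, spread over the test-region grid with a power-law kernel that decays with epicentral distance. The kernel form, exponent, and grid are arbitrary choices for illustration, not the ETAS parameters used in the paper.

```python
import numpy as np

def power_law_rate_map(epicenters_xy, grid_x, grid_y, d0_km=5.0, q=1.5):
    """Smooth simulated epicenters onto a grid with a kernel ~ (d + d0)**(-q),
    normalized so each event contributes one unit of rate in total."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for x, y in epicenters_xy:
        d = np.hypot(gx - x, gy - y)
        kernel = (d + d0_km) ** (-q)
        rate += kernel / kernel.sum()
    return rate

# Toy example: three simulated events on a 100 km x 100 km test region.
grid = np.linspace(0.0, 100.0, 101)
events = [(20.0, 30.0), (22.0, 31.0), (70.0, 65.0)]
rates = power_law_rate_map(events, grid, grid)
print(rates.sum())   # ~3.0, one unit of rate per event
```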

  5. Comparing methods for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin

    2017-04-01

    There are plenty of methods available for locating small-magnitude point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters which can be separated into two main branches: (1) parameters related to observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most of the location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes, and ultimately to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, max. hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).

  6. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.

  7. Earthquakes

    MedlinePlus

    An earthquake is the sudden, rapid shaking of the earth, ... by the breaking and shifting of underground rock. Earthquakes can cause buildings to collapse and cause heavy ...

  8. The use of waveform shapes to automatically determine earthquake focal depth

    USGS Publications Warehouse

    Sipkin, S.A.

    2000-01-01

    Earthquake focal depth is an important parameter for rapidly determining probable damage caused by a large earthquake. In addition, it is significant both for discriminating between natural events and explosions and for discriminating between tsunamigenic and nontsunamigenic earthquakes. For the purpose of notifying emergency management and disaster relief organizations as well as issuing tsunami warnings, potential time delays in determining source parameters are particularly detrimental. We present a method for determining earthquake focal depth that is well suited for implementation in an automated system that utilizes the wealth of broadband teleseismic data that is now available in real time from the global seismograph networks. This method uses waveform shapes to determine focal depth and is demonstrated to be valid for events with magnitudes as low as approximately 5.5.

  9. Discrimination between induced, triggered, and natural earthquakes close to hydrocarbon reservoirs: A probabilistic approach based on the modeling of depletion-induced stress changes and seismological source parameters

    NASA Astrophysics Data System (ADS)

    Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank

    2015-04-01

    Earthquakes occurring close to hydrocarbon fields under production are often suspected of being induced or triggered. However, clear and testable rules to discriminate between the different classes of events have rarely been developed and tested. This unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered by integration over probability density functions. The probability that events have been human-triggered or induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case, a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced and only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6
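
    A heavily simplified sketch of the kind of probability reported by such a scheme is given below: the rate of human-related events is approximated by the instantaneous rate change predicted by rate-and-state seismicity theory for a Coulomb stress step, and the discrimination probability is the ratio of that rate to the total rate. The function and variable names and the use of the instantaneous-rate approximation are assumptions made here for illustration; the full method described above integrates over probability density functions of the input parameters.

        import numpy as np

        def prob_human_related(delta_cfs, a_sigma, r_background):
            # delta_cfs: modeled Coulomb stress change (MPa) due to depletion;
            # a_sigma: rate-and-state constitutive parameter A*sigma (MPa);
            # r_background: predicted rate of natural earthquakes in the volume.
            r_extra = r_background * (np.exp(delta_cfs / a_sigma) - 1.0)  # excess rate over background
            r_extra = max(r_extra, 0.0)
            return r_extra / (r_extra + r_background)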

  10. Assessment of environmental and nondestructive earthquake effects on modal parameters of an office building based on long-term vibration measurements

    NASA Astrophysics Data System (ADS)

    Wu, Wen-Hwa; Wang, Sheng-Wei; Chen, Chien-Chou; Lai, Gwolong

    2017-05-01

    Identification of the modal parameters of an instrumented office building using ambient vibration measurements is conducted in this study, based on a recently developed stochastic subspace identification methodology equipped with an alternative stabilization diagram and a hierarchical sifting process. The identified results are then carefully examined to recognize the dynamic features of quite a few dominant modes of this building structure, including three pairs of closely spaced modes. Making use of three months of collected data, including three seismic events, the analysis shows that the root-mean-square vibration response is directly related to the wind speed and indirectly related to the air temperature under a specific condition. More importantly, it is found that the root-mean-square response is the dominant factor inducing the variation of modal parameters. Except for the torsional modes, all the other modal frequencies are highly correlated with the root-mean-square acceleration in a negative manner, and the corresponding damping ratios also clearly display a positive correlation. Another crucial observation from this assessment is that the percentages of frequency variation over three months for most of the identified modes exceed 10%. The effects of three nondestructive earthquakes are further traced to observe the tendencies of reduced modal frequencies and raised damping ratios, both with a variation level that possibly increases with the seismic intensity. However, unlike the effects of environmental factors, the changes in modal parameters caused by nondestructive earthquakes vanish right after the seismic events.

  11. Examination of Parameters Affecting the House Prices by Multiple Regression Analysis and its Contributions to Earthquake-Based Urban Transformation

    NASA Astrophysics Data System (ADS)

    Denli, H. H.; Durmus, B.

    2016-12-01

    The purpose of this study is to examine the factors that may affect apartment prices with multiple linear regression models and to visualize the results with value maps. The study is focused on a county of Istanbul, Turkey. A total of 390 apartments around the county of Umraniye are evaluated with respect to their physical and locational conditions. The identification of factors affecting the price of apartments in the county, which has a population of approximately 600k, is expected to provide a significant contribution to the apartment market. Physical factors are selected as the age, number of rooms, size, number of floors of the building, and the floor on which the apartment is positioned. Positional factors are selected as the distances to the nearest hospital, school, park, and police station. In total, ten physical and locational parameters are examined by regression analysis. After the regression analysis has been performed, value maps are composed from the parameters age, price, and price per square meter. The most significant of the composed maps is the price-per-square-meter map. Results show that the location of the apartment has the most influence on the square-meter price of the apartment. A further practice is developed from the composed maps by exploring the use of the price-per-square-meter map in urban transformation practices. By marking the buildings older than 15 years on the price-per-square-meter map, a new interpretation has been made to determine the buildings to which priority should be given during an urban transformation in the county. This county is very close to the North Anatolian Fault zone and is under the threat of earthquakes. By marking the apartments older than 15 years on the price-per-square-meter map, a list of apartments that are both older and have expensive square-meter prices can be gathered. With the help of this list, priority could be given to the selected higher-valued old apartments to support the economy of the country
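
    A minimal sketch of the regression step is shown below, assuming a feature matrix whose columns mirror the ten physical and locational parameters named above; the column order, the function names, and the use of ordinary least squares via numpy are illustrative assumptions rather than the exact procedure of the study.

        import numpy as np

        # Hypothetical columns: age, rooms, size_m2, building_floors, apartment_floor,
        # dist_hospital, dist_school, dist_park, dist_police, ... (one column per parameter).
        def fit_price_model(X, price):
            # Ordinary least squares for price ~ b0 + b1*x1 + ...; returns the coefficient vector.
            A = np.column_stack([np.ones(len(X)), X])
            coef, *_ = np.linalg.lstsq(A, price, rcond=None)
            return coef

        def price_per_m2(price, size_m2):
            # The quantity mapped in the study's most significant value map.
            return price / size_m2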

  12. Changes in gravitational parameters inferred from time variable GRACE data-A case study for October 2005 Kashmir earthquake

    NASA Astrophysics Data System (ADS)

    Hussain, Matloob; Eshagh, Mehdi; Ahmad, Zulfiqar; Sadiq, M.; Fatolazadeh, Farzam

    2016-09-01

    The Earth's gravity changes are attributed to the redistribution of masses within and/or on the surface of the Earth, which results from frictional sliding, tensile cracking, and/or cataclastic flow of rocks along faults and is associated with earthquake events. Conversely, gravity changes are useful for describing earthquake seismicity over active orogenic belts. Time-variable gravimetric data are rarely available in the public domain; however, the Gravity Recovery and Climate Experiment (GRACE) is the only satellite mission dedicated to modeling variations of the gravity field and is an available source for the science community. Here, we have tried to characterize gravity changes in terms of gravity anomaly (Δg), geoid (N), and the gravity gradients over the Indo-Pak plate, with emphasis upon the Kashmir earthquake of October 2005. For this purpose, we used the spherical harmonic coefficients of monthly gravity solutions from the GRACE satellite mission, which have good coverage over the entire globe with unprecedented accuracy. We have numerically analysed the solutions, after removing the hydrological signals, during August to November 2005, in terms of the corresponding monthly differentials of gravity anomaly, geoid, and gradients. Regional structures such as the Main Mantle Thrust (MMT), Main Karakoram Thrust (MKT), and the Herat and Chaman faults are in close association with topography and with the gravity parameters from GRACE gravimetry and the EGM2008 model. The monthly differentials of these quantities indicate stress accumulation in the northeast direction in the study area. Our numerical results show that the horizontal gravity gradients are in good agreement with tectonic boundaries, and the differentials of the gravitational elements are sensitive to the redistribution of rock masses and topography caused by the 2005 Kashmir earthquake. Moreover, the gradients are rather more helpful for extracting the coseismic gravity signatures caused by seismicity over the area

  13. Radiation efficiency of earthquake sources at different hierarchical levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocharyan, G. G., E-mail: gevorgkidg@mail.ru; Moscow Institute of Physics and Technology

    Factors such as earthquake size and mechanism define common trends in the variation of radiation efficiency. The macroscopic parameter that controls the efficiency of a seismic source is the stiffness of the fault or fracture. The way this parameter varies with scale defines several hierarchical levels, within which earthquake characteristics obey different laws. Small variations in the physical and mechanical properties of the fault's principal slip zone can lead to dramatic differences both in the amplitude of released stress and in the amount of radiated energy.

  14. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  15. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and perform a statistical treatment of the seismicity catalogue of the Constantine region between 1357 and 2014, which contains 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in northeastern Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical, and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the earthquake hazard evaluation procedure developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
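
    The probabilistic part of such an analysis ultimately reduces to a probability of exceedance for the chosen strong-motion parameter; under the usual Poisson assumption of Cornell-type PSHA this is the simple relation sketched below (the 50-year exposure time and function name are only examples).

        import math

        def prob_exceedance(annual_rate, t_years=50.0):
            # Probability of at least one exceedance in t_years for a Poisson process
            # whose annual rate of exceeding the chosen ground-motion level is annual_rate.
            return 1.0 - math.exp(-annual_rate * t_years)

        # e.g. an annual exceedance rate of 1/475 gives roughly a 10% probability in 50 years:
        # prob_exceedance(1.0 / 475.0)  ->  ~0.10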

  16. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations of the near-Earth space plasma, atmosphere, and ground surface associated with approaching severe earthquakes (named earthquake precursors), appearing several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper intends to outline the optimal algorithm for creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following considerations: selection of the precursors in terms of priority, taking into account their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and a proposal of different options (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric precursors of earthquakes, special attention will be devoted to the radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC, and GPS MET will be considered.

  17. Space technologies for short-term earthquake warning

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.

    Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations of the near-Earth space plasma, atmosphere, and ground surface associated with approaching severe earthquakes (named earthquake precursors), which appear several days (from 1 to 5) before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper intends to outline the optimal algorithm for creation of a space-borne system for earthquake precursor monitoring and short-term earthquake prediction. It takes into account the following: selection of the precursors in terms of priority, considering their statistical and physical parameters; configuration of the spacecraft payload; configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and different options for the satellite systems (a cheap microsatellite or a comprehensive multisatellite constellation). Taking into account that the most promising precursors are the ionospheric ones, special attention is devoted to the radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of technologies such as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC, and GPS MET are considered.

  18. Effects of strong earthquakes in variations of electrical and meteorological parameters of the near-surface atmosphere in Kamchatka region

    NASA Astrophysics Data System (ADS)

    Smirnov, S. E.; Mikhailova, G. A.; Mikhailov, Yu. M.; Kapustina, O. V.

    2017-09-01

    The diurnal variations in electrical (quasi-static electric field and electrical conductivity) and meteorological (temperature, pressure, relative humidity of the atmosphere, and wind speed) parameters, measured simultaneously before strong earthquakes in the Kamchatka region (November 15, 2006, M = 8.3; January 13, 2007, M = 8.1; January 30, 2016, M = 7.2), are studied in detail for the first time. It is found that a successive anomalous increase in temperature, despite the regular negative trend in these winter months, was observed six to seven days before the earthquakes occurred. The anomalous temperature increase led to the formation of "winter thunderstorm" conditions in the near-surface atmosphere of the Kamchatka region, which were manifested in the appearance of an anomalous type 2 electrical signal, amplification of and intense variations in electrical conductivity, heavy precipitation (snow showers), high relative humidity of the air, storm winds, and pressure changes. Given the weak flux of natural heat radiation in this season, the observed dynamics of electrical and meteorological processes can likely be explained by the appearance of an additional heat source of seismic nature.

  19. Normal Fault Type Earthquakes Off Fukushima Region - Comparison of the 1938 Events and Recent Earthquakes -

    NASA Astrophysics Data System (ADS)

    Murotani, S.; Satake, K.

    2017-12-01

    In the offshore Fukushima region, Mjma 7.4 (event A) and 6.9 (event B) events occurred on November 6, 1938, following thrust-fault-type earthquakes of Mjma 7.5 and 7.3 on the previous day. These earthquakes were estimated to be normal-fault earthquakes by Abe (1977, Tectonophysics). An Mjma 7.0 earthquake occurred on July 12, 2014 near event B, and an Mjma 7.4 earthquake occurred on November 22, 2016 near event A. These recent events are the only M 7 class earthquakes to have occurred off Fukushima since 1938. Except for the two 1938 events, normal-fault earthquakes did not occur until the many aftershocks of the 2011 Tohoku earthquake. We compared the observed tsunami and seismic waveforms of the 1938, 2014, and 2016 earthquakes to examine the normal-fault earthquakes occurring off the Fukushima region. It is difficult to compare the tsunami waveforms of the 1938, 2014, and 2016 events because there were only a few observations at the same station. The teleseismic body-wave inversion of the 2016 earthquake yielded a focal mechanism with strike 42°, dip 35°, and rake -94°. Other source parameters were as follows: source area 70 km x 40 km, average slip 0.2 m, maximum slip 1.2 m, seismic moment 2.2 x 10^19 Nm, and Mw 6.8. A large slip area is located near the hypocenter, and it is compatible with the tsunami source area estimated from tsunami travel times. The 2016 tsunami source area is smaller than that of the 1938 event, consistent with the difference in Mw: 7.7 for event A estimated by Abe (1977) and 6.8 for the 2016 event. Although the 2014 epicenter is very close to that of event B, the teleseismic waveforms of the 2014 event are similar to those of event A and the 2016 event. While Abe (1977) assumed that the mechanism of event B was the same as that of event A, the initial motions at some stations are opposite, indicating that the focal mechanisms of events A and B are different and that more detailed examination is needed. The normal fault type earthquake seems to occur following the
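
    The moment magnitude quoted above follows directly from the seismic moment through the standard Hanks-Kanamori relation, as the short check below shows (the function name is arbitrary).

        import math

        def moment_magnitude(m0_newton_metres):
            # Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N m.
            return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

        print(round(moment_magnitude(2.2e19), 1))   # -> 6.8, matching the value in the abstract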

  20. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  1. Real-time earthquake monitoring using a search engine method.

    PubMed

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude, and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find the waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
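
    Conceptually, the engine returns the source parameters of the database template that best matches the input record. The sketch below shows the brute-force version of that comparison using normalized correlation; the authors' contribution is precisely the indexed fast search that avoids this linear scan, so the code only illustrates the matching criterion, and all names are assumptions.

        import numpy as np

        def best_match(observed, database):
            # database: iterable of (source_params, template_waveform) pairs sampled like `observed`.
            o = observed - observed.mean()
            o /= np.linalg.norm(o)
            best_params, best_cc = None, -np.inf
            for params, wf in database:
                w = wf - wf.mean()
                w /= np.linalg.norm(w)
                cc = float(np.dot(o, w))         # normalized correlation coefficient
                if cc > best_cc:
                    best_params, best_cc = params, cc
            return best_params, best_cc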

  2. The Occurrence of the Recent Deadly Mexico Earthquakes was not that Unexpected

    NASA Astrophysics Data System (ADS)

    Flores-Marquez, L.; Sarlis, N. V.; Skordas, E. S.; Varotsos, P.; Ramírez-Rojas, A.

    2017-12-01

    Most big Mexican earthquakes occur right along the interface between the colliding Cocos and North American plates, but the two recent deadly Mexico earthquakes, i.e., the magnitude 8.2 earthquake that struck Mexico's Chiapas state on 7 September 2017 and the magnitude 7.1 earthquake that struck central Mexico almost 12 days later, killing more than 400 people and reducing buildings to rubble in several states, happened at two different spots in the flat slab in the middle of the Cocos tectonic plate, which is considered a geologically surprising area [1]. Here, using a type of analysis termed natural time, we show that their occurrence should not in principle puzzle scientists. Earthquakes may be considered critical phenomena (see Ref. [2] and references therein), and natural time analysis [3] uncovers an order parameter for seismicity. It has been shown [2] that the fluctuations of this order parameter exhibit a universal behavior with a probability density function (pdf) that is non-Gaussian, having a left exponential tail [3]. Natural time analysis of seismicity in various tectonic regions of the Mexican Pacific Coast has been made in Ref. [4]. The study of the order-parameter pdf for the Chiapas area as well as for the Guerrero area shows that the occurrence of large earthquakes in these two areas was not unexpected. References: [1] A. Witze, Deadly Mexico quakes not linked, Nature 549, 442 (2017). [2] P. A. Varotsos, N. V. Sarlis, E. S. Skordas, Natural Time Analysis: The New View of Time. Precursory Seismic Electric Signals, Earthquakes and Other Complex Time-Series (Springer-Verlag, Berlin Heidelberg, 2011). [3] P. Varotsos et al., Similarity of fluctuations in correlated systems: the case of seismicity, Phys. Rev. E 72, 041103 (2005). [4] A. Ramírez-Rojas and E. L. Flores-Márquez, Order parameter analysis of seismicity of the Mexican Pacific coast, Physica A 392, 2507 (2013).
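
    For reference, the order parameter of seismicity in natural time mentioned above is the variance kappa_1 of natural time chi computed event by event, as in the short sketch below (a direct transcription of the published definition; the function name is arbitrary).

        import numpy as np

        def kappa1(energies):
            # energies: seismic energies (or moments) Q_k of N successive events in a region.
            # Natural time chi_k = k/N, weights p_k = Q_k / sum(Q); kappa_1 = <chi^2> - <chi>^2.
            q = np.asarray(energies, dtype=float)
            p = q / q.sum()
            n = len(q)
            chi = np.arange(1, n + 1) / n
            return float(np.sum(p * chi ** 2) - np.sum(p * chi) ** 2)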

  3. Linking giant earthquakes with the subduction of oceanic fracture zones

    NASA Astrophysics Data System (ADS)

    Landgrebe, T. C.; Müller, R. D.; EathByte Group

    2011-12-01

    Giant subduction earthquakes are known to occur in areas not previously identified as prone to high seismic risk. This highlights the need to better identify subduction zone segments potentially dominated by relatively long (up to 1000 years and more) recurrence times of giant earthquakes. Global digital data sets represent a promising source of information for a multi-dimensional earthquake hazard analysis. We combine the NGDC global Significant Earthquakes database with a global strain rate map, gridded ages of the ocean floor, and a recently produced digital data set for oceanic fracture zones, major aseismic ridges and volcanic chains to investigate the association of earthquakes as a function of magnitude with age of the downgoing slab and convergence rates. We use a so-called Top-N recommendation method, a technology originally developed to search, sort, classify, and filter very large and often statistically skewed data sets on the internet, to analyse the association of subduction earthquakes sorted by magnitude with key parameters. The Top-N analysis is used to progressively assess how strongly particular "tectonic niche" locations (e.g. locations along subduction zones intersected with aseismic ridges or volcanic chains) are associated with sets of earthquakes in sorted order in a given magnitude range. As the total number N of sorted earthquakes is increased, by progressively including smaller-magnitude events, the so-called recall is computed, defined as the number of Top-N earthquakes associated with particular target areas divided by N. The resultant statistical measure represents an intuitive description of the effectiveness of a given set of parameters to account for the location of significant earthquakes on record. We use this method to show that the occurrence of great (magnitude ≥ 8) earthquakes on overriding plate segments is strongly biased towards intersections of oceanic fracture zones with subduction zones. These intersection regions are
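
    The recall statistic described above is straightforward to compute once the events are sorted by magnitude; a minimal sketch follows, with the predicate for membership in a target "tectonic niche" left as an assumed user-supplied function.

        def top_n_recall(events_sorted_by_magnitude, in_target_area, n):
            # events_sorted_by_magnitude: event records in descending magnitude order.
            # in_target_area: predicate returning True if an event lies in the target region
            # (e.g. a fracture-zone/subduction-zone intersection).
            top = events_sorted_by_magnitude[:n]
            return sum(1 for e in top if in_target_area(e)) / float(n)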

  4. Nowcasting Earthquakes and Tsunamis

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  5. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The near-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 project entitled "Network of Research Infrastructures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and the sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships.

  6. Induced seismicity provides insight into why earthquake ruptures stop.

    PubMed

    Galis, Martin; Ampuero, Jean Paul; Mai, P Martin; Cappa, Frédéric

    2017-12-01

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.

  7. Induced seismicity provides insight into why earthquake ruptures stop

    PubMed Central

    Galis, Martin; Ampuero, Jean Paul; Mai, P. Martin; Cappa, Frédéric

    2017-01-01

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures. PMID:29291250

  8. Induced earthquake magnitudes are as large as (statistically) expected

    USGS Publications Warehouse

    Van Der Elst, Nicholas; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas; Hosseini, S. Mehran

    2016-01-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
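
    Test (1) above reflects a basic property of Gutenberg-Richter sampling: the typical largest of N events drawn from an unbounded exponential magnitude distribution grows as log10(N)/b above the minimum magnitude. The sketch below states that expectation and includes a Monte Carlo draw for checking it; the default b-value of 1.0 and the function names are assumptions.

        import numpy as np

        def expected_largest_magnitude(n_events, m_min, b=1.0):
            # Magnitude exceeded on average once among n_events Gutenberg-Richter samples.
            return m_min + np.log10(n_events) / b

        def simulated_largest_magnitude(n_events, m_min, b=1.0, seed=0):
            # Draw n_events magnitudes (exponential above m_min with rate b*ln(10)) and take the max.
            rng = np.random.default_rng(seed)
            mags = m_min + rng.exponential(scale=1.0 / (b * np.log(10.0)), size=n_events)
            return mags.max()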

  9. Observing Triggered Earthquakes Across Iran with Calibrated Earthquake Locations

    NASA Astrophysics Data System (ADS)

    Karasozen, E.; Bergman, E.; Ghods, A.; Nissen, E.

    2016-12-01

    We investigate earthquake triggering phenomena in Iran by analyzing patterns of aftershock activity around mapped surface ruptures. Iran has an intense level of seismicity (more than 40,000 events listed in the ISC Bulletin since 1960) because it accommodates a significant portion of the continental collision between Arabia and Eurasia. There are nearly thirty mapped surface ruptures associated with earthquakes of M 6-7.5, mostly in eastern and northwestern Iran, offering a rich potential to study the kinematics of earthquake nucleation, rupture propagation, and subsequent triggering. However, catalog earthquake locations are subject to up to 50 km of location bias from the combination of unknown Earth structure and unbalanced station coverage, making it challenging to assess both the rupture directivity of larger events and the spatial patterns of their aftershocks. To overcome this limitation, we developed a new two-tiered multiple-event relocation approach to obtain hypocentral parameters that are minimally biased and have realistic uncertainties. In the first stage, locations of small clusters of well-recorded earthquakes at local spatial scales (hundreds of events across 100 km length scales) are calibrated either by using near-source arrival times or independent location constraints (e.g. local aftershock studies, InSAR solutions), using an implementation of the Hypocentroidal Decomposition relocation technique called MLOC. Epicentral uncertainties are typically less than 5 km. Then, these events are used as prior constraints in the code BayesLoc, a Bayesian relocation technique that can handle larger datasets, to yield region-wide calibrated hypocenters (thousands of events over 1000 km length scales). With locations and errors both calibrated, the pattern of aftershock activity can reveal the type of earthquake triggering: dynamic stress changes promote an increase in the seismicity rate in the direction of unilateral propagation, whereas static stress changes should

  10. An integrated analysis on source parameters, seismogenic structure and seismic hazard of the 2014 Ms 6.3 Kangding earthquake

    NASA Astrophysics Data System (ADS)

    Zheng, Y.

    2016-12-01

    On November 22, 2014, the Ms 6.3 Kangding earthquake ended 30 years without a strong earthquake on the Xianshuihe fault zone. The focal mechanism and centroid depth of the Kangding earthquake are inverted from teleseismic waveforms and regional seismograms with the CAP method. The result shows that the two nodal planes of the focal mechanism are 235°/82°/-173° and 144°/83°/-8°, respectively; the latter nodal plane should be the ruptured fault plane, with a focal depth of 9 km. The rupture process model of the Kangding earthquake is obtained by joint inversion of teleseismic data and regional seismograms. The Kangding earthquake is a bilateral earthquake, and the major rupture zone is within a depth range of 5-15 km, spanning 10 km and 12 km along the dip and strike directions, with a maximum slip of about 0.5 m. Most of the seismic moment was released during the first 5 s, and the magnitude is Mw 6.01, smaller than that of the model determined from InSAR data. The discrepancy between the coseismic rupture models of the Kangding earthquake and its Ms 5.8 aftershock and the InSAR model implies that significant afterslip deformation occurred in the two weeks after the mainshock. The afterslip released energy equal to an Mw 5.9 earthquake and is mainly concentrated northwest of, and shallower than, the rupture zone. The Coulomb failure stress (CFS) accumulation near the epicenter of the 2014 Kangding earthquake was increased by the 2008 Wenchuan earthquake, implying that the Kangding earthquake could have been triggered by the Wenchuan earthquake. The CFS on the northwest section of the seismic gap along the Kangding-Daofu segment is increased by the Kangding earthquake, and the rupture slip of the Kangding earthquake sequence is too small to release the accumulated strain in the seismic gap. Consequently, the northwest section of the Kangding-Daofu seismic gap is under high seismic hazard in the future.
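
    The Coulomb failure stress change invoked above is resolved on a receiver fault through the standard one-line relation sketched below; the effective friction coefficient of 0.4 is a commonly assumed value, not one taken from this study.

        def coulomb_stress_change(delta_shear_stress, delta_normal_stress, mu_eff=0.4):
            # dCFS = d(tau) + mu' * d(sigma_n), with the normal stress change taken
            # positive for unclamping (reduction of fault-normal compression).
            return delta_shear_stress + mu_eff * delta_normal_stress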

  11. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
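
    A present-day sketch of the same workflow, fitting an ARMA(p, q) model to a digitized acceleration record and simulating a new motion from it, might look as follows; the statsmodels interface and the orders (2, 1) are illustrative assumptions, not the models identified in the report.

        from statsmodels.tsa.arima.model import ARIMA

        def fit_arma(acceleration, p=2, q=1):
            # Maximum-likelihood fit of an ARMA(p, q) model (ARIMA with zero differencing)
            # to a digitized acceleration time series.
            return ARIMA(acceleration, order=(p, 0, q)).fit()

        # result = fit_arma(acc)
        # result.params              -> estimated AR/MA coefficients and innovation variance
        # result.resid               -> residuals to be tested for whiteness
        # result.simulate(len(acc))  -> a simulated ground-motion realisation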

  12. Real data assimilation for optimization of frictional parameters and prediction of afterslip in the 2003 Tokachi-oki earthquake inferred from slip velocity by an adjoint method

    NASA Astrophysics Data System (ADS)

    Kano, Masayuki; Miyazaki, Shin'ichi; Ishikawa, Yoichi; Hiyoshi, Yoshihisa; Ito, Kosuke; Hirahara, Kazuro

    2015-10-01

    Data assimilation is a technique that optimizes the parameters used in a numerical model under the constraint of the model dynamics, achieving a better fit to observations. Optimized parameters can be utilized for subsequent prediction with a numerical model, and the predicted physical variables are presumably closer to observations that will become available in the future, at least compared to those obtained without optimization through data assimilation. In this work, an adjoint data assimilation system is developed for optimizing a relatively large number of spatially inhomogeneous frictional parameters during the afterslip period, in which the physical constraints are a quasi-dynamic equation of motion and a laboratory-derived rate- and state-dependent friction law that describe the temporal evolution of slip velocity at subduction zones. The observed variable is the estimated slip velocity on the plate interface. Before applying this method to real data assimilation for the afterslip of the 2003 Tokachi-oki earthquake, a synthetic data assimilation experiment is conducted to examine the feasibility of optimizing the frictional parameters in the afterslip area. It is confirmed that the current system is capable of optimizing the frictional parameters A-B, A, and L by adopting the physical constraint based on a numerical model if observations capture the acceleration and decaying phases of slip on the plate interface. On the other hand, it is unlikely that the frictional parameters can be constrained in regions where the amplitude of afterslip is less than 1.0 cm/day. Next, real data assimilation for the 2003 Tokachi-oki earthquake is conducted to incorporate slip velocity data inferred from time-dependent inversion of Global Navigation Satellite System time series. The optimized values of A-B, A, and L are O(10 kPa), O(10^2 kPa), and O(10 mm), respectively. The optimized frictional parameters yield a better fit to the observations and a better prediction skill of slip
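
    For orientation, the laboratory-derived friction law and an aging-type state evolution referred to above can be written in a few lines; the grouping A = a*sigma and B = b*sigma appears to correspond to the stress-valued quantities optimized in the assimilation, while the reference values mu0 and V0 below are generic assumptions.

        import numpy as np

        def rate_state_shear_stress(v, theta, sigma, a, b, L, mu0=0.6, v0=1.0e-6):
            # tau = sigma * (mu0 + a*ln(V/V0) + b*ln(V0*theta/L)); A = a*sigma, B = b*sigma.
            return sigma * (mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / L))

        def aging_law(v, theta, L):
            # State evolution (aging law): d(theta)/dt = 1 - V*theta/L.
            return 1.0 - v * theta / L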

  13. Earthquake forecasts for the CSEP Japan experiment based on the RI algorithm

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.

    2011-03-01

    An earthquake forecast testing experiment for Japan, the first of its kind, is underway within the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) under a controlled environment. Here we give an overview of the earthquake forecast models, based on the RI algorithm, which we have submitted to the CSEP Japan experiment. Models have been submitted to a total of 9 categories, corresponding to 3 testing classes (3 years, 1 year, and 3 months) and 3 testing regions. The RI algorithm is originally a binary forecast system based on the working assumption that large earthquakes are more likely to occur in the future at locations of higher seismicity in the past. It is based on simple counts of the number of past earthquakes, which is called the Relative Intensity (RI) of seismicity. To improve its forecast performance, we first expand the RI algorithm by introducing spatial smoothing. We then convert the RI representation from a binary system to a CSEP-testable model that produces forecasts for the number of earthquakes of predefined magnitudes. We use information on past seismicity to tune the parameters. The final submittal consists of 36 executable computer codes: 4 variants corresponding to different smoothing parameters for each of the 9 categories. They will help to elucidate which categories and which smoothing parameters are the most meaningful for the RI hypothesis. The main purpose of our participation in the experiment is to better understand the significance of the relative intensity of seismicity for earthquake forecastability in Japan.
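
    The core of an RI-type forecast is simply a smoothed, normalized map of past earthquake counts. The sketch below uses a Gaussian kernel purely as an example of the spatial smoothing step; the experiment's actual smoothing parameters are among the tuned variants, and all names here are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ri_rate_map(past_counts, sigma_cells, expected_total_events):
            # past_counts: 2-D grid of past earthquake counts (the Relative Intensity);
            # sigma_cells: smoothing length in grid cells; the normalized map is scaled to
            # the expected total number of target-magnitude events in the testing period.
            smoothed = gaussian_filter(past_counts.astype(float), sigma=sigma_cells)
            return expected_total_events * smoothed / smoothed.sum()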

  14. Predicted Attenuation Relation and Observed Ground Motion of Gorkha Nepal Earthquake of 25 April 2015

    NASA Astrophysics Data System (ADS)

    Singh, R. P.; Ahmad, R.

    2015-12-01

    A comparison of the observed ground motion parameters of the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with the ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives and destroyed thousands of poor-quality buildings, and it was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for earthquake-prone regions like the Himalaya and for the better design of buildings. The ground motion parameters recorded in the recent earthquake and its aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observed ones. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions and also for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; attenuation relations based on other regions fail to provide good estimates of the observed ground motion parameters.
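
    Such comparisons are usually made in terms of residuals between recorded values and a ground-motion attenuation (prediction) relation. The sketch below uses a deliberately generic functional form with placeholder coefficients; it is not a published Himalayan relation, only an illustration of how predicted and observed parameters are compared.

        import numpy as np

        def predicted_pga(magnitude, distance_km, c1=-1.5, c2=0.9, c3=-1.0, c4=10.0):
            # Generic attenuation form: ln(PGA) = c1 + c2*M + c3*ln(R + c4).
            # The coefficients here are placeholders for illustration only.
            return np.exp(c1 + c2 * magnitude + c3 * np.log(distance_km + c4))

        def log_residuals(observed_pga, magnitude, distance_km):
            # Natural-log residuals used to judge how well a relation matches the recordings.
            return np.log(observed_pga) - np.log(predicted_pga(magnitude, distance_km))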

  15. Inter-Disciplinary Validation of Pre Earthquake Signals. Case Study for Major Earthquakes in Asia (2004-2010) and for 2011 Tohoku Earthquake

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Hattori, K.; Liu, J.-Y.; Yang. T. Y.; Parrot, M.; Kafatos, M.; Taylor, P.

    2012-01-01

    We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters that we found to be associated with earthquake processes: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We made two independent, continuous hindcast investigations in Taiwan and Japan for a total of 102 earthquakes (M > 6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma, and energetic electron measurements from DEMETER; (2) emitted long-wavelength radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS total electron content (TEC) measurements collected from space- and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the March 11, 2011 Tohoku earthquake, our analysis again shows the same relationship between several independent observations characterizing lithosphere/atmosphere coupling. On March 7th we found a rapid increase of emitted infrared radiation observed from satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning from this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

  16. Parameterization of 18th January 2011 earthquake in Dalbadin Region, Southwest Pakistan

    NASA Astrophysics Data System (ADS)

    Shafiq-Ur-Rehman; Azeem, Tahir; Abd el-aal, Abd el-aziz Khairy; Nasir, Asma

    2013-12-01

    An earthquake of magnitude Mw 7.3 occurred on 18 January 2011 in southwestern Pakistan, Baluchistan province (Dalbadin region). The area has complex tectonics due to the interaction of the Indian, Eurasian, and Arabian plates. Both thrust and strike-slip earthquakes are dominant in this region, with minor, localized normal-faulting events. The earthquake under consideration (the Dalbadin earthquake) posed difficulties in constraining depth and focal parameters owing to a lack of data from the Pakistan, Iran, and Afghanistan regions. A normal-faulting mechanism has been proposed by many researchers for this earthquake. In the present study, the earthquake was relocated using the technique of travel-time residuals. The relocated coordinates and depth were used to calculate the focal mechanism solution, with the outcome of a dominant strike-slip mechanism, which is contrary to normal faulting. The relocated coordinates and the resulting mechanism are more reliable than those of many reporting agencies because the evaluation in this study is augmented by data from the local seismic monitoring network of Pakistan. The tectonics in the area is governed by active subduction along the Makran Subduction Zone. This particular earthquake has a strike-slip mechanism due to breaking of the subducting oceanic plate. The earthquake is located where the oceanic lithosphere is subducting along with relative movements between the Lut and Helmand blocks. The magnitude of this event (Mw = 7.3), the re-evaluated depth, and a previous study of the mechanism of an earthquake in the same region (Shafiq et al., 2011) also support the strike-slip movement.

  17. Source parameters of the 2016 Menyuan earthquake in the northeastern Tibetan Plateau determined from regional seismic waveforms and InSAR measurements

    NASA Astrophysics Data System (ADS)

    Liu, Yunhua; Zhang, Guohong; Zhang, Yingfeng; Shan, Xinjian

    2018-06-01

    On January 21st, 2016, a Ms 6.4 earthquake hit Menyuan County, Qinghai province, China. The nearest known fault is the Leng Long Ling (LLL) fault which is located approximately 7 km north of the epicenter. This fault has mainly shown sinistral strike-slip movement since the late Quaternary Period. However, the focal mechanism indicates that it is a thrust earthquake, which is different from the well-known strike-slip feature of the LLL fault. In this study, we determined the focal mechanism and primary nodal plane through multi-step inversions in the frequency and time domain by using the broadband regional seismic waveforms recorded by the China Digital Seismic Network (CDSN). Our results show that the rupture duration was short, within 0-2 s after the earthquake, and the rupture expanded upwards along the fault plane. Based on these fault parameters, we then solve for variable slip distribution on the fault plane using the InSAR data. We applied a three-segment fault model to simulate the arc-shaped structure of the northern LLL fault, and obtained a detailed slip distribution on the fault plane. The inversion results show that the maximum slip is 0.43 m, and the average slip angle is 78.8°, with a magnitude of Mw 6.0 and a focal depth of 9.38 km. With the geological structure and the inversion results taken into consideration, it can be suggested that this earthquake was caused by the arc-shaped secondary fault located at the north side of the LLL fault. The secondary fault, together with the LLL fault, forms a normal flower structure. The main LLL fault extends almost vertically into the base rock and the rocks between the two faults form a bulging fault block. Therefore, we infer that this earthquake is the manifestation of a neotectonics movement, in which the bulging fault block is lifted further up under the compresso-shear action caused by the Tibetan Plateau pushing towards the northwest direction.

  18. Urban Earthquake Shaking and Loss Assessment

    NASA Astrophysics Data System (ADS)

    Hancilar, U.; Tuzun, C.; Yenidogan, C.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, by the estimation of fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and using shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty, and economic) at different levels of sophistication (0, 1, and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Level 2 analysis of the ELER software (similar to HAZUS and SELENA) is essentially intended for earthquake risk assessment (building damage, consequential human casualties, and macroeconomic loss quantifiers) in urban areas. The basic Shake Mapping is similar to the Level 0 and Level 1 analyses; however, options are available for more sophisticated treatment of site response through externally entered data and improvement of the shake map through incorporation

  19. Source Parameters of the 8 October, 2005 Mw7.6 Kashmir Earthquake

    NASA Astrophysics Data System (ADS)

    Mandal, Prantik; Chadha, R. K.; Kumar, N.; Raju, I. P.; Satyamurty, C.

    2007-12-01

    During the last six years, the National Geophysical Research Institute, Hyderabad has established a semi-permanent seismological network of 5 broadband seismographs and 10 accelerographs in the Kachchh seismic zone, Gujarat, with the prime objective to monitor the continued aftershock activity of the 2001 Mw 7.7 Bhuj mainshock. The reliable and accurate broadband data for the Mw 7.6 (8 Oct., 2005) Kashmir earthquake and its aftershocks from this network, as well as from the Hyderabad Geoscope station, enabled us to estimate the group velocity dispersion characteristics and the one-dimensional regional shear-velocity structure of peninsular India. Firstly, we measure Rayleigh- and Love-wave group velocity dispersion curves in the range of 8 to 35 sec and invert these curves to estimate the crustal and upper mantle structure below the western part of peninsular India. Our best model suggests a two-layered crust: The upper crust is 13.8-km thick with a shear velocity (Vs) of 3.2 km/s; the corresponding values for the lower crust are 24.9 km and 3.7 km/sec. The shear velocity for the upper mantle is found to be 4.65 km/sec. Based on this structure, we perform a moment tensor (MT) inversion of the bandpass (0.05-0.02 Hz) filtered seismograms of the Kashmir earthquake. The best fit is obtained for a source located at a depth of 30 km, with a seismic moment, Mo, of 1.6 × 10^27 dyne-cm, and a focal mechanism with strike 19.5°, dip 42°, and rake 167°. The long-period magnitude (MA ~ Mw) of this earthquake is estimated to be 7.31. An analysis of well-developed sPn and sSn regional crustal phases from the bandpassed (0.02-0.25 Hz) seismograms of this earthquake at four stations in Kachchh suggests a focal depth of 30.8 km.

  20. Sensitivity analysis of earthquake-induced static stress changes on volcanoes: the 2010 Mw 8.8 Chile earthquake

    NASA Astrophysics Data System (ADS)

    Bonali, F. L.; Tibaldi, A.; Corazzato, C.

    2015-06-01

    In this work, we analyse in detail how a large earthquake can cause stress changes on volcano plumbing systems and produce possible positive feedbacks in promoting new eruptions. We develop a sensitivity analysis that considers several possible parameters, providing also new constraints on the methodological approach. The work is focused on the Mw 8.8 2010 earthquake that occurred along the Chile subduction zone near 24 historic/Holocene volcanoes located in the Southern Volcanic Zone. We use six different finite fault-slip models to calculate the static stress change, induced by the coseismic slip, in the direction normal to several theoretical feeder dykes with various orientations. Results indicate different magnitudes of stress change due to the heterogeneity of magma pathway geometry and orientation. In particular, the N-S and NE-SW-striking magma pathways experience a decrease in the stress normal to the feeder dyke (unclamping, up to 0.85 MPa) in comparison with those striking NW-SE and E-W, and in some cases there is even a clamping effect depending on the magma path strike. The different fault-slip models also have an effect (up to 0.4 MPa) on the results. As a consequence, we reconstruct the geometry and orientation of the most reliable magma pathways below the 24 volcanoes by studying structural and morphometric data, and we resolve the stress changes on each of them. Results indicate that: (i) volcanoes where post-earthquake eruptions took place experienced earthquake-induced unclamping or very small clamping effects, and (ii) several volcanoes that have not yet erupted are more prone to future unrest, from the point of view of the host-rock stress state, because of earthquake-induced unclamping. Our findings also suggest that pathway orientation plays a more relevant role in inducing stress changes, whereas the depth of calculation (e.g. 2, 5 or 10 km) used in the analysis is not a key parameter. Earthquake-induced magma-pathway unclamping might contribute to

  1. Strike-slip earthquakes can also be detected in the ionosphere

    NASA Astrophysics Data System (ADS)

    Astafyeva, Elvira; Rolland, Lucie M.; Sladen, Anthony

    2014-11-01

    It is generally assumed that co-seismic ionospheric disturbances are generated by large vertical static displacements of the ground during an earthquake. Consequently, it is expected that co-seismic ionospheric disturbances are only observable after earthquakes with a significant dip-slip component. Therefore, earthquakes dominated by strike-slip motion, i.e. with very little vertical co-seismic component, are not expected to generate ionospheric perturbations. In this work, we use total electron content (TEC) measurements from ground-based GNSS-receivers to study the ionospheric response to six of the largest recent strike-slip earthquakes: the Mw7.8 Kunlun earthquake of 14 November 2001, the Mw8.1 Macquarie earthquake of 23 December 2004, the Sumatra earthquake doublet, Mw8.6 and Mw8.2, of 11 April 2012, the Mw7.7 Balochistan earthquake of 24 September 2013 and the Mw 7.7 Scotia Sea earthquake of 17 November 2013. We show that large strike-slip earthquakes generate large ionospheric perturbations of amplitude comparable with those induced by dip-slip earthquakes of equivalent magnitude. We consider that in the absence of significant vertical static co-seismic displacements of the ground, other seismological parameters (primarily the magnitude of co-seismic horizontal displacements, seismic fault dimensions, seismic slip) may contribute to the generation of large-amplitude ionospheric perturbations.

  2. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
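
    A minimal sketch of the first of these tests, comparing the number of observed earthquakes with the number predicted, can be written as a Poisson consistency check. The forecast rate, observed count, and significance level below are illustrative values, not numbers from the paper.

      # Hypothetical N-test sketch: is the observed earthquake count consistent
      # with a forecast that predicts `expected` events over the test period?
      from scipy.stats import poisson

      expected = 8.0   # forecast number of events over the test period (illustrative)
      observed = 14    # events actually recorded (illustrative)

      # Two-sided tail probability under the Poisson null hypothesis.
      p_low = poisson.cdf(observed, expected)        # P(N <= observed)
      p_high = poisson.sf(observed - 1, expected)    # P(N >= observed)
      p_value = min(1.0, 2.0 * min(p_low, p_high))

      print(f"p-value = {p_value:.3f}")
      if p_value < 0.05:
          print("Observed count is inconsistent with the forecast at the 5% level.")
      else:
          print("Observed count is consistent with the forecast.")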

  3. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  4. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  5. Earthquake source parameters from GPS-measured static displacements with potential for real-time application

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2013-01-01

    We describe a method for determining an optimal centroid-moment tensor solution of an earthquake from a set of static displacements measured using a network of Global Positioning System receivers. Using static displacements observed after the 4 April 2010, MW 7.2 El Mayor-Cucapah, Mexico, earthquake, we perform an iterative inversion to obtain the source mechanism and location, which minimize the least-squares difference between data and synthetics. The efficiency of our algorithm for forward modeling static displacements in a layered elastic medium allows the inversion to be performed in real-time on a single processor without the need for precomputed libraries of excitation kernels; we present simulated real-time results for the El Mayor-Cucapah earthquake. The only a priori information that our inversion scheme needs is a crustal model and approximate source location, so the method proposed here may represent an improvement on existing early warning approaches that rely on foreknowledge of fault locations and geometries.
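
    Once excitation kernels (Green's functions) are available for a trial location, the core of such an inversion reduces to a linear least-squares problem for the six independent moment-tensor elements. The sketch below uses a random stand-in kernel matrix and synthetic data purely for illustration; it is not the authors' algorithm.

      import numpy as np

      # Hypothetical setup: d = G m, where d stacks the observed static displacements
      # (3 components per GPS station) and m holds the six independent moment-tensor
      # elements for a fixed trial source location.
      n_sta = 20
      rng = np.random.default_rng(0)
      G = rng.normal(size=(3 * n_sta, 6))                   # stand-in excitation kernels
      m_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, 0.2])   # illustrative source
      d = G @ m_true + 0.01 * rng.normal(size=3 * n_sta)    # synthetic observations

      # Least-squares moment tensor for this trial location; in an iterative scheme
      # the location would be updated and this linear step repeated.
      m_hat, _, _, _ = np.linalg.lstsq(G, d, rcond=None)
      misfit = np.linalg.norm(d - G @ m_hat)
      print("recovered moment tensor:", np.round(m_hat, 3), " misfit:", round(misfit, 4))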

  6. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
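
    For reference, the point-source stress-drop estimates compiled in studies of this kind commonly assume a circular (Eshelby) crack, for which

      \[
        \Delta\sigma \;=\; \frac{7}{16}\,\frac{M_0}{a^{3}},
        \qquad
        a \;=\; \frac{k\,\beta}{f_c},
      \]

    where M_0 is the seismic moment, a the source radius, beta the shear-wave speed, f_c the corner frequency, and k a model-dependent constant (about 0.37 for the Brune model). Scale invariance of the stress drop then corresponds to f_c^3 M_0 being roughly constant across earthquake sizes.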

  7. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

    Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) which was developed during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated, mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz, and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events with various back azimuths with respect to the center of the basin; (2) reciprocity-based calculations where the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km, source depths from 1 km to 15 km and we span the range of possible back-azimuths with a 10 degree bin. We will present some results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, to the source characteristics.

  8. VLF/LF Amplitude Perturbations before Tuscany Earthquakes, 2013

    NASA Astrophysics Data System (ADS)

    Khadka, Balaram; Kandel, Keshav Prasad; Pant, Sudikshya; Bhatta, Karan; Ghimire, Basu Dev

    2017-12-01

    The US Navy VLF/LF Transmitter's NSY signal (45.9 kHz) transmitted from Niscemi, Sicily, Italy, and received at the Kiel Long Wave Monitor, Germany, was analyzed for the period of two months, May and June (EQ-month) of 2013. There were 12 earthquakes of magnitude greater than 4 that hit Italy in these two months, of which the earthquake of 21 June, with a magnitude of 5.2 and a shallow focal depth of 5 km, was the largest. We studied the earthquake of 21 June 2013, which struck Tuscany, Central Italy (44.1713°N, 10.2082°E) at 10:33 UT, and analyzed its effects on the sub-ionospheric VLF/LF signals. In addition, we also studied another earthquake, of magnitude 4.9, which hit the same place at 14:40 UT on 30 June and had a shallow focal depth of 10 km. We assessed the data using the terminator time (TT) method and the night-time fluctuation method and found unusual changes in VLF/LF amplitudes/phases. Analysis of trend, night-time dispersion, and night-time fluctuation was also carried out, and several anomalies were detected. Most ionospheric perturbations in these parameters were found in the month of June, from a few days to a few weeks prior to the earthquakes. Moreover, we filtered out possible effects of geomagnetic storms, auroras, and solar activity, using the Dst, AE, and Kp indices for the geomagnetic effects, and the Bz (sigma) index, sunspot numbers, and the F10.7 solar index for the solar activity, in order to confirm the anomalies as precursors.

  9. A Viscoelastic earthquake simulator with application to the San Francisco Bay region

    USGS Publications Warehouse

    Pollitz, Fred F.

    2009-01-01

    Earthquake simulation on synthetic fault networks carries great potential for characterizing the statistical patterns of earthquake occurrence. I present an earthquake simulator based on elastic dislocation theory. It accounts for the effects of interseismic tectonic loading, static stress steps at the time of earthquakes, and postearthquake stress readjustment through viscoelastic relaxation of the lower crust and mantle. Earthquake rupture initiation and termination are determined with a Coulomb failure stress criterion and the static cascade model. The simulator is applied to interacting multifault systems: one, a synthetic two-fault network, and the other, a fault network representative of the San Francisco Bay region. The faults are discretized both along strike and along dip and can accommodate both strike slip and dip slip. Stress and seismicity functions are evaluated over 30,000 yr trial time periods, resulting in a detailed statistical characterization of the fault systems. Seismicity functions such as the coefficient of variation and a- and b-values exhibit systematic patterns with respect to simple model parameters. This suggests that reliable estimation of the controlling parameters of an earthquake simulator is a prerequisite to the interpretation of its output in terms of seismic hazard.
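
    A minimal sketch of the Coulomb failure stress criterion used for rupture initiation in simulators of this kind is given below; the fault stresses, friction coefficient, and stress step are illustrative values only.

      def coulomb_failure_stress(shear_stress, normal_stress, mu, cohesion=0.0):
          """Coulomb failure function: non-negative values indicate failure.

          shear_stress  : shear stress resolved on the fault patch (Pa)
          normal_stress : fault-normal stress, positive in compression (Pa)
          mu            : effective coefficient of friction
          """
          return shear_stress - mu * normal_stress - cohesion

      # Illustrative patch loaded by interseismic stressing plus a static stress
      # step from a nearby earthquake (the mechanisms the simulator accounts for).
      tau = 2.0e6 + 0.3e6        # background shear stress + coseismic step (Pa)
      sigma_n = 5.0e6 - 0.1e6    # normal stress slightly reduced (unclamping) (Pa)
      cff = coulomb_failure_stress(tau, sigma_n, mu=0.4)
      print(f"CFF = {cff / 1e6:.2f} MPa ->", "patch fails" if cff >= 0 else "patch stable")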

  10. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong-motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on the heat-bath simulated annealing generates an ensemble of models that efficiently sample the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty of the adopted crustal model. Finally, we apply this inverse technique to the well recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
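
    The appraisal stage described here can be sketched as an importance-weighted average over the sampled ensemble. The ensemble, misfit values, and "temperature" below are stand-ins for illustration and are not the authors' implementation.

      import numpy as np

      # Hypothetical ensemble from the sampling stage: each row is one rupture model
      # (e.g. flattened peak slip velocity, rake, rupture time and rise time at the
      # nodes), with an associated data misfit.
      rng = np.random.default_rng(1)
      ensemble = rng.normal(size=(5000, 40))        # stand-in model ensemble
      misfit = rng.uniform(0.5, 2.0, size=5000)     # stand-in misfit values

      # Boltzmann-like weights (lower misfit -> larger weight), in the spirit of a
      # heat-bath appraisal; T is an illustrative "temperature".
      T = 0.2
      w = np.exp(-(misfit - misfit.min()) / T)
      w /= w.sum()

      mean_model = w @ ensemble                               # weighted mean model
      std_model = np.sqrt(w @ (ensemble - mean_model) ** 2)   # weighted standard deviation
      print(np.round(mean_model[:5], 3), np.round(std_model[:5], 3))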

  11. The 2004 Parkfield, CA Earthquake: A Teachable Moment for Exploring Earthquake Processes, Probability, and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Kafka, A.; Barnett, M.; Ebel, J.; Bellegarde, H.; Campbell, L.

    2004-12-01

    The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
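
    The comparison with a Poisson model mentioned here amounts to checking whether interevent times follow an exponential distribution; the synthetic catalog below is purely illustrative.

      import numpy as np

      # For a Poisson process, interevent times are exponential with mean 1/rate.
      rng = np.random.default_rng(2)
      interevent = rng.exponential(scale=20.0, size=500)   # stand-in intervals

      rate_hat = 1.0 / interevent.mean()                   # maximum-likelihood rate
      hist, edges = np.histogram(interevent, bins=20, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      model = rate_hat * np.exp(-rate_hat * centers)       # exponential density

      # A rough check: compare the binned density with the exponential model.
      for c, h, m in zip(centers[:5], hist[:5], model[:5]):
          print(f"t = {c:6.1f}   observed = {h:.4f}   exponential = {m:.4f}")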

  12. Tsunami simulations of the 1867 Virgin Island earthquake: Constraints on epicenter location and fault parameters

    USGS Publications Warehouse

    Barkan, Roy; ten Brink, Uri S.

    2010-01-01

    The 18 November 1867 Virgin Island earthquake and the tsunami that closely followed caused considerable loss of life and damage in several places in the northeast Caribbean region. The earthquake was likely a manifestation of the complex tectonic deformation of the Anegada Passage, which cuts across the Antilles island arc between the Virgin Islands and the Lesser Antilles. In this article, we attempt to characterize the 1867 earthquake with respect to fault orientation, rake, dip, fault dimensions, and first tsunami wave propagating phase, using tsunami simulations that employ high-resolution multibeam bathymetry. In addition, we present new geophysical and geological observations from the region of the suggested earthquake source. Results of our tsunami simulations based on relative amplitude comparison limit the earthquake source to be along the northern wall of the Virgin Islands basin, as suggested by Reid and Taber (1920), or on the carbonate platform north of the basin, and not in the Virgin Islands basin, as commonly assumed. The numerical simulations suggest the 1867 fault was striking 120°–135° and had a mixed normal and left-lateral motion. First propagating wave phase analysis suggests a fault striking 300°–315° is also possible. The best-fitting rupture length was found to be relatively small (50 km), probably indicating the earthquake had a moment magnitude of ∼7.2. Detailed multibeam echo sounder surveys of the Anegada Passage bathymetry between St. Croix and St. Thomas reveal a scarp, which cuts the northern wall of the Virgin Islands basin. High-resolution seismic profiles further indicate it to be a reasonable fault candidate. However, the fault orientation and the orientation of other subparallel faults in the area are more compatible with right-lateral motion. For the other possible source region, no clear disruption in the bathymetry or seismic profiles was found on the carbonate platform north of the basin.

  13. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
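
    A minimal sketch of the kind of renewal-model calculation involved is shown below, using a Brownian passage time (BPT) recurrence distribution and the common translation of a static stress step into a clock advance. The recurrence interval, aperiodicity, stress step, and stressing rate are illustrative values, not those of the study.

      import numpy as np

      def bpt_pdf(t, mu, alpha):
          """Brownian passage time density with mean mu and aperiodicity alpha."""
          return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
                 np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t))

      mu, alpha = 200.0, 0.5           # mean recurrence (yr) and aperiodicity (illustrative)
      elapsed, window = 150.0, 30.0    # time since last event and forecast window (yr)

      t = np.linspace(1e-3, 5000.0, 500_000)
      cdf = np.cumsum(bpt_pdf(t, mu, alpha)) * (t[1] - t[0])
      F = lambda x: np.interp(x, t, cdf)

      # Conditional probability of an event in the next `window` years.
      p0 = (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

      # Map a static stress step to a clock advance: dt = dCFS / stressing rate.
      advance = 0.1e6 / 2.0e3          # 0.1 MPa step / 2 kPa/yr = 50 yr (illustrative)
      p1 = (F(elapsed + advance + window) - F(elapsed + advance)) / (1.0 - F(elapsed + advance))
      print(f"probability without step: {p0:.3f}, with step: {p1:.3f}")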

  14. HYPOELLIPSE; a computer program for determining local earthquake hypocentral parameters, magnitude, and first-motion pattern

    USGS Publications Warehouse

    Lahr, John C.

    1999-01-01

    This report provides Fortran source code and program manuals for HYPOELLIPSE, a computer program for determining hypocenters and magnitudes of near regional earthquakes and the ellipsoids that enclose the 68-percent confidence volumes of the computed hypocenters. HYPOELLIPSE was developed to meet the needs of U.S. Geological Survey (USGS) scientists studying crustal and sub-crustal earthquakes recorded by a sparse regional seismograph network. The program was extended to locate hypocenters of volcanic earthquakes recorded by seismographs distributed on and around the volcanic edifice, at elevations above and below the hypocenter. HYPOELLIPSE was used to locate events recorded by the USGS southern Alaska seismograph network from October 1971 to the early 1990s. Both UNIX and PC/DOS versions of the source code of the program are provided along with sample runs.

  15. Quasi-dynamic earthquake fault systems with rheological heterogeneity

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.

    2009-12-01

    Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the seismicity they describe. In contrast to such empirical stochastic models, physics-based earthquake fault-system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault-system earthquake simulators based on frictional stick-slip behavior have been used to study effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault-system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction, allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of physical parameters of the two approaches.
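
    The first of the two formulations compared here rests on rate- and state-dependent friction. A compact sketch of the standard aging (Dieterich) law for a single fault patch follows, with illustrative parameter values; it is not the simulator itself.

      import numpy as np

      # Rate-and-state friction (aging law) for one patch, illustrative parameters.
      a, b = 0.010, 0.015               # direct and evolution effects
      mu0, v0, d_c = 0.6, 1e-6, 1e-4    # reference friction, velocity (m/s), D_c (m)

      def friction(v, theta):
          return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

      def dtheta_dt(v, theta):
          return 1.0 - v * theta / d_c  # aging-law state evolution

      # Velocity-step test: after a jump in slip rate, the state variable evolves
      # toward its new steady state d_c / v and friction toward mu0 + (a-b) ln(v/v0).
      v, theta, dt = 1e-5, d_c / v0, 0.01
      for _ in range(100_000):
          theta += dtheta_dt(v, theta) * dt
      print("friction after step    :", round(friction(v, theta), 4))
      print("steady-state prediction:", round(mu0 + (a - b) * np.log(v / v0), 4))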

  16. Ground Motion Characteristics of Induced Earthquakes in Central North America

    NASA Astrophysics Data System (ADS)

    Atkinson, G. M.; Assatourians, K.; Novakovic, M.

    2017-12-01

    The ground motion characteristics of induced earthquakes in central North America are investigated based on empirical analysis of a compiled database of 4,000,000 digital ground-motion records from events in induced-seismicity regions (especially Oklahoma). Ground-motion amplitudes are characterized non-parametrically by computing median amplitudes and their variability in magnitude-distance bins. We also use inversion techniques to solve for regional source, attenuation and site response effects. Ground motion models are used to interpret the observations and compare the source and attenuation attributes of induced earthquakes to those of their natural counterparts. Significant conclusions are that the stress parameter that controls the strength of high-frequency radiation is similar for induced earthquakes (depth of h ~5 km) and shallow (h ~5 km) natural earthquakes. By contrast, deeper natural earthquakes (h ~10 km) have stronger high-frequency ground motions. At distances close to the epicenter, a greater focal depth (which increases distance from the hypocenter) counterbalances the effects of a larger stress parameter, resulting in motions of similar strength close to the epicenter, regardless of event depth. The felt effects of induced versus natural earthquakes are also investigated using USGS "Did You Feel It?" reports; 400,000 reports from natural events and 100,000 reports from induced events are considered. The felt reports confirm the trends that we expect based on ground-motion modeling, considering the offsetting effects of the stress parameter versus focal depth in controlling the strength of motions near the epicenter. Specifically, felt intensity for a given magnitude is similar near the epicenter, on average, for all event types and depths. At distances more than 10 km from the epicenter, deeper events are felt more strongly than shallow events. These ground-motion attributes imply that the induced-seismicity hazard is most critical for facilities in

  17. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  18. Investigating landslides caused by earthquakes - A historical review

    USGS Publications Warehouse

    Keefer, D.K.

    2002-01-01

    Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  19. Determining Hypocentral Parameters for Local Earthquakes in 1-D Using a Genetic Algorithm and Two-point ray tracing

    NASA Astrophysics Data System (ADS)

    Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.

    2005-12-01

    This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Using existing algorithms to determine hypocentral parameters is difficult, because these parameters can vary based on initial velocity models. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model, in which velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained, although the number of modeled layers and thicknesses differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. This method also provides the basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
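
    A toy version of the same idea, in which a genetic algorithm searches for the hypocenter and origin time minimizing travel-time residuals, is sketched below. For brevity it uses a homogeneous half-space instead of the layered model, two-point ray tracing, and takeoff-angle weighting of the paper; the station geometry and all parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      vp = 6.0                                     # km/s, homogeneous half-space (toy model)
      stations = rng.uniform(0, 50, size=(8, 2))   # station x, y (km) at the surface
      true = np.array([25.0, 30.0, 12.0, 0.0])     # x, y, z (km) and origin time (s)

      def travel_times(src):
          d = np.sqrt(((stations - src[:2]) ** 2).sum(axis=1) + src[2] ** 2)
          return src[3] + d / vp

      obs = travel_times(true) + 0.02 * rng.normal(size=len(stations))

      def misfit(pop):
          return np.array([np.sqrt(np.mean((obs - travel_times(p)) ** 2)) for p in pop])

      # Simple genetic algorithm: keep the fittest half, blend-crossover, mutate.
      lo, hi = np.array([0.0, 0.0, 0.0, -5.0]), np.array([50.0, 50.0, 30.0, 5.0])
      pop = rng.uniform(lo, hi, size=(200, 4))
      for _ in range(150):
          parents = pop[np.argsort(misfit(pop))[:100]]
          pairs = rng.integers(0, 100, size=(100, 2))
          mix = rng.uniform(size=(100, 1))
          children = mix * parents[pairs[:, 0]] + (1 - mix) * parents[pairs[:, 1]]
          children += rng.normal(scale=0.5, size=children.shape)   # mutation
          pop = np.clip(np.vstack([parents, children]), lo, hi)

      best = pop[np.argmin(misfit(pop))]
      print("estimated (x, y, z, t0):", np.round(best, 2))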

  20. Earthquake Source Parameter Estimates for the Charlevoix and Western Quebec Seismic Zones in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.

    2017-12-01

    The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, hosting more than six M >6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ) where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St-Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimate of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequency, magnitude, and hence static stress drops, and 3) first arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset for 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN), and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P and S-wave differential travel-times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate at all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters, directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. To the first-order, source dimension
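
    The last step described, obtaining stress drop and rupture radius from corner frequency and seismic moment under a circular rupture model, can be sketched as follows; the Brune-model constant and the example event are illustrative.

      def circular_source_params(m0, fc, beta=3500.0, k=0.37):
          """Rupture radius (m) and stress drop (Pa) for a circular (Eshelby) crack.

          m0   : seismic moment (N·m)
          fc   : S-wave corner frequency (Hz)
          beta : shear-wave speed near the source (m/s)
          k    : model-dependent constant (about 0.37 for the Brune model)
          """
          radius = k * beta / fc
          stress_drop = (7.0 / 16.0) * m0 / radius**3
          return radius, stress_drop

      m0 = 10 ** (1.5 * 3.0 + 9.1)        # moment of an Mw 3.0 event (N·m), illustrative
      r, dsig = circular_source_params(m0, fc=8.0)
      print(f"radius = {r:.0f} m, stress drop = {dsig / 1e6:.1f} MPa")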

  1. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating-earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of the historical data. Most studied fault segments have few, or only one, observed earthquakes, which often have poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend, on average, on the long-term slip rate caused by tectonic motion. In addition, recurrence times also fluctuate because of nearby earthquakes or fault activity that encourages or discourages surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it by using spatial statistics. Spatial variation of the mean and variance parameters of recurrence times is estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to the recurrent-earthquake catalog in Japan, and its result is compared with the current forecast adopted by the Earthquake Research Committee of Japan.
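
    The key parameter mapped in this study, the coefficient of variation of recurrence intervals, is simply the ratio of their standard deviation to their mean; a minimal illustration with made-up paleo-earthquake dates follows.

      import numpy as np

      # Illustrative paleo-earthquake dates on one fault segment (years AD).
      event_years = np.array([560.0, 890.0, 1140.0, 1420.0, 1790.0])
      intervals = np.diff(event_years)

      mean_interval = intervals.mean()
      cov = intervals.std(ddof=1) / mean_interval   # coefficient of variation
      print(f"mean recurrence = {mean_interval:.0f} yr, CV = {cov:.2f}")
      # CV well below 1 suggests quasi-periodic recurrence; CV near 1 is Poisson-like.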

  2. Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities

    USGS Publications Warehouse

    Duross, Christopher; Olig, Susan; Schwartz, David

    2015-01-01

    Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
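
    Regressions of this kind typically take the form M = a + b log10(SRL). The sketch below uses coefficients close to the all-fault-type relation of Wells and Coppersmith (1994); treat them as illustrative rather than the values adopted by the WGUEP.

      import numpy as np

      def magnitude_from_srl(srl_km, a=5.08, b=1.16):
          """Moment magnitude from surface-rupture length (km); coefficients are
          approximately those of the Wells and Coppersmith (1994) all-slip-type
          regression and are used here only for illustration."""
          return a + b * np.log10(srl_km)

      for srl in (20.0, 40.0, 60.0):
          print(f"SRL = {srl:4.0f} km  ->  M ~ {magnitude_from_srl(srl):.1f}")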

  3. From Multi-Sensors Observations Towards Cross-Disciplinary Study of Pre-Earthquake Signals. What have We Learned from the Tohoku Earthquake?

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Papadopoulos, G.; Kunitsyn, V.; Nesterov, I.; Hayakawa, M.; Mogi, K.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    The lessons we have learned from the Great Tohoku EQ (Japan, 2011) and how this knowledge will affect our future observations and analysis are the main focus of this presentation. We present multi-sensor observations and multidisciplinary research in our investigation of phenomena preceding major earthquakes. These observations revealed the existence of atmospheric and ionospheric phenomena occurring prior to the M9.0 Tohoku earthquake of March 11, 2011, which indicates new evidence of a distinct coupling between the lithosphere and atmosphere/ionosphere, as related to underlying tectonic activity. Similar results have been reported before the catastrophic events in Chile (M8.8, 2010), Italy (M6.3, 2009) and Sumatra (M9.3, 2004). For the Tohoku earthquake, our analysis shows a synergy between several independent observations characterizing the state of the lithosphere/atmosphere coupling several days before the onset of the earthquakes, namely: (i) Foreshock sequence change (rate, space and time); (ii) Outgoing Long wave Radiation (OLR) measured at the top of the atmosphere; and (iii) Anomalous variations of ionospheric parameters revealed by multi-sensor observations. We present a cross-disciplinary analysis of the observed pre-earthquake anomalies and discuss current research in the detection of these signals in Japan. We expect that our analysis will shed light on the underlying physics of pre-earthquake signals associated with some of the largest earthquake events.

  4. Multi-Sensors Observations of Pre-Earthquake Signals. What We Learned from the Great Tohoku Earthquake?

    NASA Technical Reports Server (NTRS)

    Ouzonounov, D.; Pulinets, S.; Papadopoulos, G.; Kunitsyn, V.; Nesterov, I.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    How the lessons learned from the Great Tohoku EQ (Japan, 2011) will affect our future observations and analysis is the main focus of this presentation. We present multi-sensor observations and multidisciplinary research in our study of the phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters, which have been reported by others in connection with earthquake processes: thermal infrared radiation; temperature; concentration of electrons in the ionosphere; radon/ion activities; and atmospheric temperature/humidity [Ouzounov et al, 2011]. We used the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of several possible paradigms [Pulinets and Ouzounov, 2011], to interpret our observations. We retrospectively analyzed the temporal and spatial variations of three different physical parameters characterizing the state of the atmosphere, ionosphere and ground surface several days before the March 11, 2011 M9 Tohoku earthquake, namely: (i) Outgoing Long wave Radiation (OLR) measured at the top of the atmosphere; (ii) Anomalous variations of ionospheric parameters revealed by multi-sensor observations; and (iii) The change in the foreshock sequence (rate, space and time). Our results show that on March 8th, 2011 a rapid increase of emitted infrared radiation was observed and an anomaly developed near the epicenter, with the largest value occurring on March 11 at 07:30 LT. The GPS/TEC data indicate an increase and variation in electron density reaching a maximum value on March 8. Starting from this day, an abnormal TEC variation was also observed in the lower ionosphere over the epicenter. From March 3 to 11 a large increase in electron concentration was recorded at all four Japanese ground-based ionosondes, which returned to normal after the main earthquake. We use the Japanese GPS network stations and the method of radio tomography to study the spatiotemporal structure of ionospheric

  5. Source Spectra and Site Response for Two Indonesian Earthquakes: the Tasikmalaya and Kerinci Events of 2009

    NASA Astrophysics Data System (ADS)

    Gunawan, I.; Cummins, P. R.; Ghasemi, H.; Suhardjono, S.

    2012-12-01

    Indonesia is very prone to natural disasters, especially earthquakes, due to its location in a tectonically active region. In September-October 2009 alone, intraslab and crustal earthquakes caused the deaths of thousands of people, severe infrastructure destruction and considerable economic loss. Thus, both intraslab and crustal earthquakes are important sources of earthquake hazard in Indonesia. Analysis of response spectra for these intraslab and crustal earthquakes is needed to yield more detail about earthquake properties. For both types of earthquakes, we have analysed available Indonesian seismic waveform data to constrain source and path parameters - i.e., low-frequency spectral level, Q, and corner frequency - at reference stations that appear to be little influenced by site response. We have considered these analyses for the main shocks as well as several aftershocks. We obtain corner frequencies that are reasonably consistent with the constant stress drop hypothesis. We then use these results to extract information about site response from other stations in the Indonesian strong-motion network that appear to be strongly affected by it. Such site response data, as well as earthquake source parameters, are important for assessing earthquake hazard in Indonesia.
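
    The source and path parameters listed (low-frequency spectral level, corner frequency, and Q) enter through a standard omega-squared spectral model. A minimal sketch of that model, fitted to a synthetic spectrum with illustrative values, is given below.

      import numpy as np
      from scipy.optimize import curve_fit

      def displacement_spectrum(f, omega0, fc, t_star):
          """Omega-squared (Brune) source spectrum with whole-path attenuation t* = T/Q."""
          return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

      # Synthetic "observed" spectrum, purely for illustration.
      f = np.linspace(0.1, 20.0, 200)
      rng = np.random.default_rng(4)
      obs = displacement_spectrum(f, 2.0e-4, 1.5, 0.04) * rng.lognormal(0.0, 0.1, f.size)

      (omega0, fc, t_star), _ = curve_fit(displacement_spectrum, f, obs, p0=[1e-4, 1.0, 0.02])
      print(f"low-frequency level = {omega0:.2e}, fc = {fc:.2f} Hz, t* = {t_star:.3f} s")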

  6. Spatial and Temporal Evolution of Earthquake Dynamics: Case Study of the Mw 8.3 Illapel Earthquake, Chile

    NASA Astrophysics Data System (ADS)

    Yin, Jiuxun; Denolle, Marine A.; Yao, Huajian

    2018-01-01

    We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to earthquake dynamics of large events. We improve the CS-BP method by an autoadaptive source grid refinement as well as a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and on (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides a spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench while most of the high-frequency content comes from the downdip part of the fault. Interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, either from prestress, fault geometry, and/or frictional properties to explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.

  7. Temporal-Spatial Pattern of Pre-earthquake Signatures in Atmosphere and Ionosphere Associated with Major Earthquakes in Greece.

    NASA Astrophysics Data System (ADS)

    Calderon, I. S.; Ouzounov, D.; Anagnostopoulos, G. C.; Pulinets, S. A.; Davidenko, D.; Karastathis, V. K.; Kafatos, M.

    2015-12-01

    We are conducting validation studies on atmosphere/ionosphere phenomena preceding major earthquakes in Greece in the last decade, and in particular the largest (M6.9) earthquakes that occurred on May 24, 2014 in the Aegean Sea and on February 14, 2008 in South West Peloponisos (Methoni). Our approach is based on monitoring simultaneously a series of different physical parameters from space: Outgoing long-wavelength radiation (OLR) at the top of the atmosphere, electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC), and ULF radiation and radiation belt electron precipitation (RBEP) accompanied by VLF wave activity in the topside ionosphere. In particular, we analyzed prospectively and retrospectively the temporal and spatial variations of various parameters characterizing the state of the atmosphere and ionosphere several days before the two M6.9 earthquakes. Concerning the Methoni EQ, DEMETER data confirm an almost standard profile before large EQs, with TEC, ULF, VLF and RBEP activity preceding the EQ occurrence by some (four) days and silence on the day of the EQ; furthermore, during the period before the EQ, a progressive concentration of ULF emission centers around the future epicenter was confirmed. Concerning the recent Greek EQ of May 24, 2014, a thermal anomaly was discovered 30 days in advance and a TEC anomaly 38 hours in advance. The spatial characteristics of the pre-earthquake anomalous behavior were associated with the epicentral region. Our analysis of simultaneous space measurements before the great EQs suggests that they follow a general temporal-spatial pattern, which has been seen in other large EQs worldwide.

  8. Updated earthquake catalogue for seismic hazard analysis in Pakistan

    NASA Astrophysics Data System (ADS)

    Khan, Sarfraz; Waseem, Muhammad; Khan, Muhammad Asif; Ahmed, Waqas

    2018-03-01

    A reliable and homogenized earthquake catalogue is essential for seismic hazard assessment in any area. This article describes the compilation and processing of an updated earthquake catalogue for Pakistan. The earthquake catalogue compiled in this study for the region (quadrangle bounded by the geographical limits 40-83° E and 20-40° N) includes 36,563 earthquake events, which are reported with moment magnitudes (Mw) of 4.0-8.3 and span from 25 AD to 2016. Relationships are developed between moment magnitude and the body- and surface-wave magnitude scales to unify the catalogue in terms of Mw. The catalogue includes earthquakes from Pakistan and neighbouring countries to minimize the effects of geopolitical boundaries in seismic hazard assessment studies. Earthquakes reported by local and international agencies as well as individual catalogues are included. The proposed catalogue is further used to obtain the magnitude of completeness after removal of dependent events by using four different algorithms. Finally, seismicity parameters of the seismic sources are reported, and recommendations are made for seismic hazard assessment studies in Pakistan.
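
    The seismicity parameters reported for the declustered catalogue, magnitude of completeness and b-value, are commonly obtained with the maximum-curvature and Aki maximum-likelihood estimators; a compact sketch on a synthetic magnitude sample is given below.

      import numpy as np

      rng = np.random.default_rng(5)
      # Synthetic Gutenberg-Richter magnitudes (b = 1) complete above about 3.5,
      # binned to 0.1 magnitude units, purely for illustration.
      mags = np.round(3.45 + rng.exponential(scale=1.0 / np.log(10), size=5000), 1)

      # Magnitude of completeness via maximum curvature (most populated bin).
      bins = np.arange(mags.min() - 0.05, mags.max() + 0.15, 0.1)
      counts, edges = np.histogram(mags, bins=bins)
      mc = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

      # Aki (1965) maximum-likelihood b-value with the half-bin correction of Utsu.
      m = mags[mags >= mc]
      b = np.log10(np.e) / (m.mean() - (mc - 0.05))
      print(f"Mc = {mc:.1f}, b-value = {b:.2f}")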

  9. Fault Branching and Long-Term Earthquake Rupture Scenario for Strike-Slip Earthquake

    NASA Astrophysics Data System (ADS)

    Klinger, Y.; CHOI, J. H.; Vallage, A.

    2017-12-01

    Careful examination of surface rupture for large continental strike-slip earthquakes reveals that for the majority of earthquakes, at least one major branch is involved in the rupture pattern. Often, branching might be either related to the location of the epicenter or located toward the end of the rupture, and possibly related to the stopping of the rupture. In this work, we examine large continental earthquakes that show significant branches at different scales and for which ground surface rupture has been mapped in great details. In each case, rupture conditions are described, including dynamic parameters, past earthquakes history, and regional stress orientation, to see if the dynamic stress field would a priori favor branching. In one case we show that rupture propagation and branching are directly impacted by preexisting geological structures. These structures serve as pathways for the rupture attempting to propagate out of its shear plane. At larger scale, we show that in some cases, rupturing a branch might be systematic, hampering possibilities for the development of a larger seismic rupture. Long-term geomorphology hints at the existence of a strong asperity in the zone where the rupture branched off the main fault. There, no evidence of throughgoing rupture could be seen along the main fault, while the branch is well connected to the main fault. This set of observations suggests that for specific configurations, some rupture scenarios involving systematic branching are more likely than others.

  10. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    USGS Publications Warehouse

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey’s ShakeAlert EEW system.
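
    The peak-ground-displacement scaling module referred to here inverts a relation of the form log10(PGD) = A + B*Mw + C*Mw*log10(R) for magnitude. The coefficients below are of the order published for such scaling laws (e.g. Crowell et al., 2013; Melgar et al., 2015) but should be treated as placeholders, as should the station observations.

      import numpy as np

      # Placeholder coefficients for log10(PGD [cm]) = A + B*Mw + C*Mw*log10(R [km]).
      A, B, C = -4.434, 1.047, -0.138

      def magnitude_from_pgd(pgd_cm, hypo_dist_km):
          """Invert the PGD scaling law for Mw, averaging over GNSS stations."""
          pgd = np.asarray(pgd_cm, dtype=float)
          r = np.asarray(hypo_dist_km, dtype=float)
          mw = (np.log10(pgd) - A) / (B + C * np.log10(r))
          return mw.mean()

      # Invented peak displacements (cm) and hypocentral distances (km).
      print(f"Mw ~ {magnitude_from_pgd([12.0, 7.5, 4.0], [60.0, 90.0, 140.0]):.1f}")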

  11. A rare moderate‐sized (Mw 4.9) earthquake in Kansas: Rupture process of the Milan, Kansas, earthquake of 12 November 2014 and its relationship to fluid injection

    USGS Publications Warehouse

    Choy, George; Rubinstein, Justin L.; Yeck, William; McNamara, Daniel E.; Mueller, Charles; Boyd, Oliver

    2016-01-01

    The largest recorded earthquake in Kansas occurred northeast of Milan on 12 November 2014 (Mw 4.9) in a region previously devoid of significant seismic activity. Applying multistation processing to data from local stations, we are able to detail the rupture process and rupture geometry of the mainshock, identify the causative fault plane, and delineate the expansion and extent of the subsequent seismic activity. The earthquake followed rapid increases of fluid injection by multiple wastewater injection wells in the vicinity of the fault. The source parameters and behavior of the Milan earthquake and foreshock–aftershock sequence are similar to characteristics of other earthquakes induced by wastewater injection into permeable formations overlying crystalline basement. This earthquake also provides an opportunity to test the empirical relation that uses felt area to estimate moment magnitude for historical earthquakes for Kansas.

  12. Investigating Landslides Caused by Earthquakes A Historical Review

    NASA Astrophysics Data System (ADS)

    Keefer, David K.

    Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  13. Implications of fault constitutive properties for earthquake prediction

    USGS Publications Warehouse

    Dieterich, J.H.; Kilgore, B.

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
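
    The Omori aftershock decay law referred to here is usually written in its modified (Omori-Utsu) form, n(t) = K / (c + t)^p; a one-function illustration with invented parameter values follows.

      def omori_rate(t_days, K=200.0, c=0.05, p=1.1):
          """Modified Omori (Omori-Utsu) aftershock rate in events per day (illustrative)."""
          return K / (c + t_days) ** p

      for t in (0.1, 1.0, 10.0, 100.0):
          print(f"t = {t:6.1f} d   rate = {omori_rate(t):8.1f} events/day")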

  14. Implications of fault constitutive properties for earthquake prediction.

    PubMed Central

    Dieterich, J H; Kilgore, B

    1996-01-01

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. PMID:11607666

  15. Implications of fault constitutive properties for earthquake prediction.

    PubMed

    Dieterich, J H; Kilgore, B

    1996-04-30

    The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.

  16. Space-Time Earthquake Rate Models for One-Year Hazard Forecasts in Oklahoma

    NASA Astrophysics Data System (ADS)

    Llenos, A. L.; Michael, A. J.

    2017-12-01

    The recent one-year seismic hazard assessments for natural and induced seismicity in the central and eastern US (CEUS) (Petersen et al., 2016, 2017) rely on earthquake rate models based on declustered catalogs (i.e., catalogs with foreshocks and aftershocks removed), as is common practice in probabilistic seismic hazard analysis. However, standard declustering can remove over 90% of some induced sequences in the CEUS. Some of these earthquakes may still be capable of causing damage or concern (Petersen et al., 2015, 2016). The choices of whether and how to decluster can lead to seismicity rate estimates that vary by up to factors of 10-20 (Llenos and Michael, AGU, 2016). Therefore, in order to improve the accuracy of hazard assessments, we are exploring ways to make forecasts based on full, rather than declustered, catalogs. We focus on Oklahoma, where earthquake rates began increasing in late 2009 mainly in central Oklahoma and ramped up substantially in 2013 with the expansion of seismicity into northern Oklahoma and southern Kansas. We develop earthquake rate models using the space-time Epidemic-Type Aftershock Sequence (ETAS) model (Ogata, JASA, 1988; Ogata, AISM, 1998; Zhuang et al., JASA, 2002), which characterizes both the background seismicity rate as well as aftershock triggering. We examine changes in the model parameters over time, focusing particularly on background rate, which reflects earthquakes that are triggered by external driving forces such as fluid injection rather than other earthquakes. After the model parameters are fit to the seismicity data from a given year, forecasts of the full catalog for the following year can then be made using a suite of 100,000 ETAS model simulations based on those parameters. To evaluate this approach, we develop pseudo-prospective yearly forecasts for Oklahoma from 2013-2016 and compare them with the observations using standard Collaboratory for the Study of Earthquake Predictability tests for consistency.
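
    For readers unfamiliar with ETAS, the temporal part of the conditional intensity used in such models can be sketched as follows; the parameter values below are illustrative assumptions rather than the values fitted to the Oklahoma catalog, and the study's space-time model additionally includes a spatial kernel.

```python
import numpy as np

def etas_rate(t, event_times, event_mags,
              mu=0.2, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=2.5):
    """Temporal ETAS conditional intensity (events per day):
    lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(M_i - m0)) / (t - t_i + c)**p.
    mu is the background rate; the sum is the aftershock-triggering term."""
    event_times = np.asarray(event_times, dtype=float)
    event_mags = np.asarray(event_mags, dtype=float)
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) / (t - event_times[past] + c) ** p
    return mu + trig.sum()

# Rate one day after a M4.5 event that followed a M3.0 event a week earlier
print(etas_rate(t=10.0, event_times=[2.0, 9.0], event_mags=[3.0, 4.5]))
```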

  17. Source parameters of the 1999 Osa peninsula (Costa Rica) earthquake sequence from spectral ratios analysis

    NASA Astrophysics Data System (ADS)

    Verdecchia, A.; Harrington, R. M.; Kirkpatrick, J. D.

    2017-12-01

    Many observations suggest that duration and size scale in a self-similar way for most earthquakes. Deviations from the expected scaling would suggest that some physical feature on the fault surface influences the speed of rupture differently at different length scales. Determining whether differences in scaling exist between small and large earthquakes is complicated by the fact that duration estimates of small earthquakes are often distorted by travel-path and site effects. However, when carefully estimated, scaling relationships between earthquakes may provide important clues about fault geometry and the spatial scales over which it affects fault rupture speed. The Mw 6.9, 20 August 1999, Quepos earthquake occurred on the plate boundary thrust fault along the southern Costa Rica margin, where the subducting seafloor is cut by numerous normal faults. The mainshock and aftershock sequence were recorded by land-based and (partially) ocean-bottom (OBS) seismic arrays deployed as part of the CRSEIZE experiment. Here we investigate the size-duration scaling of the mainshock and relocated aftershocks on the plate boundary to determine if a change in scaling exists that is consistent with a change in fault surface geometry at a specific length scale. We use waveforms from 5 short-period land stations and 12 broadband OBS stations to estimate corner frequencies (the inverse of duration) and seismic moment for several aftershocks on the plate interface. We first use spectral amplitudes of single events to estimate corner frequencies and seismic moments. We then adopt a spectral ratio method to correct for non-source-related effects and refine the corner frequency estimation. For the spectral ratio approach, we use pairs of earthquakes with similar waveforms (correlation coefficient > 0.7), with waveform similarity implying event co-location. Preliminary results from single spectra show similar corner frequency values among events of 0.5 ≤ M ≤ 3.6, suggesting a decrease in
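
    As a rough illustration of the single-spectrum step described above, a Brune omega-squared source model can be fit to a displacement amplitude spectrum to recover the low-frequency plateau (proportional to seismic moment) and the corner frequency. This is a generic sketch under assumed path and site corrections, not the authors' processing chain, and the synthetic numbers are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Brune (1970) omega-squared displacement spectrum: Omega0 / (1 + (f/fc)^2)."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_corner_frequency(freqs, disp_spectrum):
    p0 = [disp_spectrum[:5].mean(), 5.0]          # rough starting plateau and fc
    popt, _ = curve_fit(brune, freqs, disp_spectrum, p0=p0, maxfev=10000)
    omega0, fc = popt
    return omega0, fc

# Synthetic demonstration with a known corner frequency of 8 Hz
f = np.linspace(0.5, 40.0, 200)
spec = brune(f, omega0=1e-6, fc=8.0) * (1.0 + 0.05 * np.random.randn(f.size))
print(fit_corner_frequency(f, spec))
```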

  18. Real-time earthquake monitoring using a search engine method

    PubMed Central

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-01-01

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
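
    The matching idea can be illustrated with a brute-force scan over a database of precomputed synthetic seismograms, returning the source parameters of the best-fitting entry. The published engine achieves its speed with an indexed fast-search technique rather than the exhaustive loop shown here, and all names below are hypothetical.

```python
import numpy as np

def best_match(observed, database):
    """database: iterable of (params_dict, waveform) pairs; all waveforms and
    the observed trace are assumed to share the same length and sampling."""
    obs = np.asarray(observed, dtype=float)
    obs = (obs - obs.mean()) / obs.std()
    best_params, best_cc = None, -np.inf
    for params, wf in database:
        w = np.asarray(wf, dtype=float)
        w = (w - w.mean()) / w.std()
        cc = float(np.dot(obs, w)) / obs.size      # normalized correlation coefficient
        if cc > best_cc:
            best_cc, best_params = cc, params
    return best_params, best_cc
```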

  19. The 1868 Hayward Earthquake Alliance: A Case Study - Using an Earthquake Anniversary to Promote Earthquake Preparedness

    NASA Astrophysics Data System (ADS)

    Brocher, T. M.; Garcia, S.; Aagaard, B. T.; Boatwright, J. J.; Dawson, T.; Hellweg, M.; Knudsen, K. L.; Perkins, J.; Schwartz, D. P.; Stoffer, P. W.; Zoback, M.

    2008-12-01

    Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward Earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely (with a 31 percent probability) fault in the Bay Area to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area County Offices of Emergency Services. Sixty other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

  20. Prospective testing of Coulomb short-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be made by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of
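
    For reference, the Coulomb failure stress change on a receiver fault that underlies such models is conventionally written as follows (a standard textbook form, not quoted from this abstract):

```latex
\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu'\,\Delta\sigma_n ,
```

    where Δτ is the shear stress change resolved in the slip direction of the receiver fault, Δσ_n is the fault-normal stress change (unclamping taken as positive), and μ' is the effective coefficient of friction incorporating pore-pressure effects; a positive ΔCFS brings the receiver fault closer to failure.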

  1. Earthquake prediction in Japan and natural time analysis of seismicity

    NASA Astrophysics Data System (ADS)

    Uyeda, S.; Varotsos, P.

    2011-12-01

    The M9 super-giant earthquake and its huge tsunami devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither in the short term nor in the long term. Seismologists were shocked because such an event was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important problem, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which was not the best path toward short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their lack of success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no prediction research". The public were not, and are not, informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are detected, and we have insisted that this would most likely be achieved through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake works against us, although its epicenter was far offshore, beyond the range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, from ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced; it is the variance of the natural time χ weighted by the normalised energy release at each χ. In the case that Seismic Electric Signals
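
    For reference, in the natural time literature the order parameter κ1 for a series of N events with energies E_k is usually written as follows (the standard definition from that literature, not quoted from this abstract):

```latex
\chi_k = \frac{k}{N}, \qquad p_k = \frac{E_k}{\sum_{n=1}^{N} E_n}, \qquad
\kappa_1 = \sum_{k=1}^{N} p_k \chi_k^{2} \;-\; \left(\sum_{k=1}^{N} p_k \chi_k\right)^{2}.
```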

  2. Fault failure with moderate earthquakes

    USGS Publications Warehouse

    Johnston, M.J.S.; Linde, A.T.; Gladwin, M.T.; Borcherdt, R.D.

    1987-01-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10) and a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure. © 1987.

  3. The Loma Prieta, California, Earthquake of October 17, 1989: Earthquake Occurrence

    USGS Publications Warehouse

    Coordinated by Bakun, William H.; Prescott, William H.

    1993-01-01

    Professional Paper 1550 seeks to understand the M6.9 Loma Prieta earthquake itself. It examines how the fault that generated the earthquake ruptured, searches for and evaluates precursors that may have indicated an earthquake was coming, reviews forecasts of the earthquake, and describes the geology of the earthquake area and the crustal forces that affect this geology. Some significant findings were: * Slip during the earthquake occurred on 35 km of fault at depths ranging from 7 to 20 km. Maximum slip was approximately 2.3 m. The earthquake may not have released all of the strain stored in rocks next to the fault, indicating that the potential for another damaging earthquake in the Santa Cruz Mountains in the near future may still exist. * The earthquake involved a large amount of uplift on a dipping fault plane. Pre-earthquake conventional wisdom was that large earthquakes in the Bay area occurred as horizontal displacements on predominantly vertical faults. * The fault segment that ruptured approximately coincided with a fault segment identified in 1988 as having a 30% probability of generating a M7 earthquake in the next 30 years. This was one of more than 20 relevant earthquake forecasts made in the 83 years before the earthquake. * Calculations show that the Loma Prieta earthquake changed stresses on nearby faults in the Bay area. In particular, the earthquake reduced stresses on the Hayward Fault, which decreased the frequency of small earthquakes on it. * Geological and geophysical mapping indicate that, although the San Andreas Fault can be mapped as a through-going fault in the epicentral region, the southwest-dipping Loma Prieta rupture surface is a separate fault strand and one of several along this part of the San Andreas that may be capable of generating earthquakes.

  4. Disaster Metrics: Evaluation of de Boer's Disaster Severity Scale (DSS) Applied to Earthquakes.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; McCord, Caitlin M; Sherak, Raphael A G; Hsu, Edberdt B; Kelen, Gabor D

    2015-02-01

    Quantitative measurement of the medical severity following multiple-casualty events (MCEs) is an important goal in disaster medicine. In 1990, de Boer proposed a 13-point, 7-parameter scale called the Disaster Severity Scale (DSS). Parameters include cause, duration, radius, number of casualties, nature of injuries, rescue time, and effect on surrounding community. Hypothesis: This study aimed to examine the reliability and dimensionality (number of salient themes) of de Boer's DSS scale through its application to 144 discrete earthquake events. A search for earthquake events was conducted via National Oceanic and Atmospheric Administration (NOAA) and US Geological Survey (USGS) databases. Two experts in the field of disaster medicine independently reviewed and assigned scores for parameters that had no data readily available (nature of injuries, rescue time, and effect on surrounding community), and differences were reconciled via consensus. Principal Component Analysis was performed using SPSS Statistics for Windows Version 22.0 (IBM Corp; Armonk, New York USA) to evaluate the reliability and dimensionality of the DSS. A total of 144 individual earthquakes from 2003 through 2013 were identified and scored. Of 13 points possible, the mean score was 6.04, the mode = 5, minimum = 4, maximum = 11, and standard deviation = 2.23. Three parameters in the DSS had zero variance (i.e., the parameter received the same score in all 144 earthquakes). Because of the zero contribution to variance, these three parameters (cause, duration, and radius) were removed to run the statistical analysis. Cronbach's alpha score, a coefficient of internal consistency, for the remaining four parameters was found to be robust at 0.89. Principal Component Analysis showed uni-dimensional characteristics with only one component having an eigenvalue greater than one at 3.17. The 4-parameter DSS, however, suffered from restriction of scoring range on both parameter and scale levels. Jan de Boer
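
    The internal-consistency statistic reported above follows the standard Cronbach's alpha formula, which can be sketched as below; the demonstration scores are random placeholders, not the authors' 144-event data set.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances)/variance of total score).
    scores: (n_events, n_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
base = rng.integers(0, 3, size=(144, 1))             # shared severity signal
demo = base + rng.integers(0, 2, size=(144, 4))      # 144 events, 4 retained parameters
print(cronbach_alpha(demo))
```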

  5. PAGER--Rapid assessment of an earthquake's impact

    USGS Publications Warehouse

    Wald, D.J.; Jaiswal, K.; Marano, K.D.; Bausch, D.; Hearne, M.

    2010-01-01

    PAGER (Prompt Assessment of Global Earthquakes for Response) is an automated system that produces content concerning the impact of significant earthquakes around the world, informing emergency responders, government and aid agencies, and the media of the scope of the potential disaster. PAGER rapidly assesses earthquake impacts by comparing the population exposed to each level of shaking intensity with models of economic and fatality losses based on past earthquakes in each country or region of the world. Earthquake alerts--which were formerly sent based only on event magnitude and location, or population exposure to shaking--now will also be generated based on the estimated range of fatalities and economic losses.

  6. Intraplate earthquakes and the state of stress in oceanic lithosphere

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.

    1986-01-01

    The dominant sources of stress relieved in oceanic intraplate earthquakes are investigated to examine the usefulness of earthquakes as indicators of stress orientation. The primary data for this investigation are the detailed source studies of 58 of the largest of these events, performed with the body-waveform inversion technique of Nabelek (1984). The relationship between the earthquakes and the intraplate stress fields was investigated by studying the rate of seismic moment release as a function of age, the source mechanisms and tectonic associations of larger events, and the depth-dependence of various source parameters. The results indicate that the earthquake focal mechanisms are empirically reliable indicators of stress, probably reflecting the fact that an earthquake will occur most readily on a fault plane oriented in such a way that the resolved shear stress is maximized while the normal stress across the fault is minimized.

  7. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using the computed parameters of the model. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.

  8. Do I Really Sound Like That? Communicating Earthquake Science Following Significant Earthquakes at the NEIC

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Earle, P. S.; Benz, H.; Wald, D. J.; Yeck, W. L.

    2017-12-01

    The U.S. Geological Survey's National Earthquake Information Center (NEIC) responds to about 160 magnitude 6.0 and larger earthquakes every year and is regularly inundated with information requests following earthquakes that cause significant impact. These requests often start within minutes after the shaking occurs and come from a wide user base including the general public, media, emergency managers, and government officials. Over the past several years, the NEIC's earthquake response has evolved its communications strategy to meet the changing needs of users and the evolving media landscape. The NEIC produces a cascade of products starting with basic hypocentral parameters and culminating with estimates of fatalities and economic loss. We speed the delivery of content by prepositioning and automatically generating products such as aftershock plots, regional tectonic summaries, maps of historical seismicity, and event summary posters. Our goal is to have information immediately available so we can quickly address the response needs of a particular event or sequence. This information is distributed to hundreds of thousands of users through social media, email alerts, programmatic data feeds, and webpages. Many of our products are included in event summary posters that can be downloaded and printed for local display. After significant earthquakes, keeping up with direct inquiries and interview requests from TV, radio, and print reporters is always challenging. The NEIC works with the USGS Office of Communications and the USGS Science Information Services to organize and respond to these requests. Written executive summary reports are produced and distributed to USGS personnel and collaborators throughout the country. These reports are updated during the response to keep our message consistent and information up to date. This presentation will focus on communications during NEIC's rapid earthquake response but will also touch on the broader USGS traditional and

  9. Investigation of earthquake factor for optimum tuned mass dampers

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. An optimization strategy was carried out using the Harmony Search (HS) algorithm. HS is a metaheuristic method inspired by the nature of musical performances. In addition, the results obtained for the optimization objective are compared with those of another documented method, the poorer results are eliminated, and in that way the best optimum results are obtained. During the optimization, the optimum TMD parameters were searched for single-degree-of-freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
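
    A bare-bones sketch of the approach is given below: a Harmony Search loop tunes the frequency ratio and damping ratio of a TMD attached to an SDOF structure so as to minimize the peak of the main-mass frequency response. The structural values, parameter bounds, and HS settings are illustrative assumptions, and the study's actual objective (response under recorded earthquake excitations) is replaced here by a simple transfer-function peak.

```python
import numpy as np

m1, k1, zeta1 = 1.0, (2 * np.pi) ** 2, 0.02          # SDOF: 1 s period, 2% damping
c1 = 2 * zeta1 * np.sqrt(k1 * m1)
mu = 0.05                                             # assumed TMD mass ratio
m2 = mu * m1
omega = np.linspace(0.1, 20.0, 2000)                  # excitation frequencies (rad/s)

def peak_response(f_ratio, zeta_t):
    """Peak |X1/F| of the structure-plus-TMD system for a harmonic force on the main mass."""
    w2 = f_ratio * np.sqrt(k1 / m1)                   # TMD natural frequency
    k2, c2 = m2 * w2 ** 2, 2 * zeta_t * m2 * w2
    a = -omega ** 2 * m1 + 1j * omega * (c1 + c2) + (k1 + k2)
    b = -(1j * omega * c2 + k2)
    d = -omega ** 2 * m2 + 1j * omega * c2 + k2
    x1 = d / (a * d - b ** 2)                         # main-mass transfer function
    return np.abs(x1).max()

rng = np.random.default_rng(1)
bounds = np.array([[0.8, 1.1], [0.01, 0.25]])         # f_ratio, zeta_t search ranges
hm_size, hmcr, par, iters = 20, 0.9, 0.3, 2000
memory = rng.uniform(bounds[:, 0], bounds[:, 1], size=(hm_size, 2))
cost = np.array([peak_response(*h) for h in memory])

for _ in range(iters):
    new = np.empty(2)
    for j in range(2):
        if rng.random() < hmcr:                        # take a value from harmony memory
            new[j] = memory[rng.integers(hm_size), j]
            if rng.random() < par:                     # small pitch adjustment
                new[j] += 0.01 * (bounds[j, 1] - bounds[j, 0]) * rng.uniform(-1, 1)
        else:                                          # random value within bounds
            new[j] = rng.uniform(bounds[j, 0], bounds[j, 1])
        new[j] = np.clip(new[j], bounds[j, 0], bounds[j, 1])
    c = peak_response(*new)
    worst = cost.argmax()
    if c < cost[worst]:                                # replace the worst harmony
        memory[worst], cost[worst] = new, c

best = cost.argmin()
print("optimum (f_ratio, zeta_t):", memory[best], "peak response:", cost[best])
```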

  10. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    NASA Astrophysics Data System (ADS)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present the method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of network telemetry in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the hypocenters of the earthquake, to compute the corrections to the arrival times of P and S waves at the observation station, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimal velocity models from the travel times of seismic waves.

  11. The 1911 M ~6.6 Calaveras earthquake: Source parameters and the role of static, viscoelastic, and dynamic Coulomb stress changes imparted by the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Doser, D.I.; Olsen, K.B.; Pollitz, F.F.; Stein, R.S.; Toda, S.

    2009-01-01

    The occurrence of a right-lateral strike-slip earthquake in 1911 is inconsistent with the calculated 0.2-2.5 bar static stress decrease imparted by the 1906 rupture at that location on the Calaveras fault, and 5 yr of calculated post-1906 viscoelastic rebound does little to reload the fault. We have used all available first-motion, body-wave, and surface-wave data to explore possible focal mechanisms for the 1911 earthquake. We find that the event was most likely a right-lateral strike-slip event on the Calaveras fault, larger than, but otherwise resembling, the 1984 Mw 6.1 Morgan Hill earthquake in roughly the same location. Unfortunately, we could recover no unambiguous surface fault offset or geodetic strain data to corroborate the seismic analysis despite an exhaustive archival search. We calculated the static and dynamic Coulomb stress changes for three 1906 source models to understand stress transfer to the 1911 site. In contrast to the static stress shadow, the peak dynamic Coulomb stress imparted by the 1906 rupture promoted failure at the site of the 1911 earthquake by 1.4-5.8 bar. Perhaps because the sample is small and the aftershocks are poorly located, we find no correlation of 1906 aftershock frequency or magnitude with the peak dynamic stress, although all aftershocks sustained a calculated dynamic stress of ~3 bar. Just 20 km to the south of the 1911 epicenter, we find that surface creep of the Calaveras fault at Hollister paused for ~17 yr after 1906, about the expected delay for the calculated static stress drop imparted by the 1906 earthquake when San Andreas fault postseismic creep and viscoelastic relaxation are included. Thus, the 1911 earthquake may have been promoted by the transient dynamic stresses, while Calaveras fault creep 20 km to the south appears to have been inhibited by the static stress changes.

  12. Seismicity Pattern and Fault Structure in the Central Himalaya Seismic Gap Using Precise Earthquake Hypocenters and their Source Parameters

    NASA Astrophysics Data System (ADS)

    Mendoza, M.; Ghosh, A.; Rai, S. S.

    2017-12-01

    The devastation brought on by the Mw 7.8 Gorkha earthquake in Nepal on 25 April 2015 reconditioned people to the high earthquake risk along the Himalayan arc. It is therefore imperative to learn from the Gorkha earthquake, and gain a better understanding of the state of stress in this fault regime, in order to identify areas that could produce the next devastating earthquake. Here, we focus on what is known as the "central Himalaya seismic gap". It is located in Uttarakhand, India, west of Nepal, where a large (> Mw 7.0) earthquake has not occurred in over 200 years [Rajendran, C.P., & Rajendran, K., 2005]. This 500-800 km long along-strike seismic gap has been poorly studied, mainly due to the lack of modern and dense instrumentation. It is especially concerning since it lies near densely populated cities, such as New Delhi. In this study, we analyze a rich seismic dataset from a dense network consisting of 50 broadband stations that operated between 2005 and 2012. We use the STA/LTA filter technique to detect earthquake phases, and the latest tools contributed to the Antelope software environment, to develop a large and robust earthquake catalog containing thousands of precise hypocentral locations, magnitudes, and focal mechanisms. By refining those locations in HypoDD [Waldhauser & Ellsworth, 2000] to form a tighter cluster of events using relative relocation, we can potentially illustrate fault structures in this region with high resolution. Additionally, using ZMAP [Wiemer, S., 2001], we perform a variety of statistical analyses to understand the variability and nature of seismicity occurring in the region. Generating a large and consistent earthquake catalog not only brings to light the physical processes controlling the earthquake cycle in a Himalayan seismogenic zone, it also illustrates how stresses are building up along the décollement and the faults that stem from it. With this new catalog, we aim to reveal fault structure, study
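
    The detection step can be sketched with a classic STA/LTA ratio computed from trailing moving averages; the window lengths and trigger threshold below are illustrative assumptions, and the study itself used the detectors available in the Antelope environment.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Trailing STA/LTA ratio of |trace|; fs in Hz, window lengths in seconds."""
    x = np.abs(np.asarray(trace, dtype=float))
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta      # mean over the last nsta samples
    lta = (csum[nlta:] - csum[:-nlta]) / nlta      # mean over the last nlta samples
    ratio = np.zeros(x.size)
    ratio[nlta - 1:] = sta[nlta - nsta:] / np.maximum(lta, 1e-12)
    return ratio

# A trigger is declared where the ratio exceeds a chosen threshold, e.g.
# picks = np.nonzero(sta_lta(trace, fs=100.0) > 4.0)[0]
```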

  13. Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr

    Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are incorporated into the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
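
    One common way to write the unobserved-heterogeneity (frailty) extension the abstract alludes to is a multiplicative random effect on a proportional-hazards baseline; the gamma choice below is the textbook convention and is an assumption here, since the abstract does not state which frailty distribution was used:

```latex
h(t \mid v) = v \, h_0(t) \, \exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{x}\right),
\qquad v \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta}, \theta\right),
\quad \mathbb{E}[v] = 1, \quad \mathrm{Var}(v) = \theta ,
```

    so that θ = 0 recovers the ordinary failure time model and θ > 0 captures excess clustering of short inter-event times not explained by the covariates.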

  14. An interdisciplinary approach to study Pre-Earthquake processes

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hattori, K.; Taylor, P. T.

    2017-12-01

    We will summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, either for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and analyses of seismic records (foreshocks/aftershocks), geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will be subsequently published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical, and historical) involved in this important field of research and will bring this knowledge and awareness to a broader geosciences community.

  15. Earthquake response of storey building in Jakarta using accelerographs data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Julius, Admiral Musa, E-mail: admiralmusajulius@yahoo.com; Jakarta Geophysics Observatory, Indonesia Agency of Meteorology Climatology and Geophysics; Sunardi, Bambang, E-mail: b.sunardi@gmail.com

    Seismotectonically, the city of Jakarta will be strongly affected by earthquakes originating from the subduction zone of the Sunda Strait and south of Java. Earthquakes from these locations are often felt by occupants of the upper floors of multi-storey buildings in Jakarta but not by occupants on the ground floor. This indicates differences in ground-motion parameters with floor height. Analysis of earthquake data recorded by accelerographs on different floors is therefore needed to quantify these differences. The data used in this research are accelerograph records from several floors of the main building of the Meteorology Climatology and Geophysics Agency, with the Kebumen earthquake of January 25th, 2014 as a case study. Parameters analyzed include the Peak Ground Acceleration (PGA), Peak Ground Displacement (PGD), Peak Spectral Acceleration (PSA), Amplification (Ag), and the effective duration of the earthquake (t_e). Research stages include accelerograph data acquisition on three (3) different floors, conversion and data partition for each component, conversion to units of acceleration, determination of PGA, PGD, PSA, Ag and t_e, as well as data analysis. The study shows that the values of PGA on the ground floor, 7th floor and 15th floor are 0.016 g, 0.053 g and 0.116 g, respectively. PGD on the ground floor, 7th floor and 15th floor are 2.15 cm, 2.98 cm and 4.92 cm, respectively. PSA on the ground floor, 7th floor and 15th floor are 0.067 g, 0.308 g and 0.836 g, respectively. Amplification of the peak acceleration value on the ground floor, 7th floor and 15th floor relative to the surface rock is 4.37, 6.07 and 7.30, respectively. The effective duration of the earthquake on the ground floor, 7th floor and 15th floor is 222.28 s, 202.28 s and 91.58 s, respectively. In general, with increasing floor of the building, the value
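
    The PGA/PGD determination described above can be sketched by double integration of a single-component accelerogram; real processing would add proper baseline correction and band-pass filtering, so the function below is only an illustrative assumption of that workflow.

```python
import numpy as np

def pga_pgd(acc, dt):
    """acc: acceleration samples in m/s^2, dt: sample interval in s.
    Returns (PGA in g, PGD in cm) after crude detrending and double integration."""
    a = np.asarray(acc, dtype=float)
    a = a - a.mean()                                          # crude baseline removal
    vel = np.concatenate(([0.0], np.cumsum((a[:-1] + a[1:]) / 2.0) * dt))    # to velocity
    disp = np.concatenate(([0.0], np.cumsum((vel[:-1] + vel[1:]) / 2.0) * dt))  # to displacement
    return np.abs(a).max() / 9.81, np.abs(disp).max() * 100.0
```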

  16. W phase source inversion for moderate to large earthquakes (1990-2010)

    USGS Publications Warehouse

    Duputel, Zacharie; Rivera, Luis; Kanamori, Hiroo; Hayes, Gavin P.

    2012-01-01

    Rapid characterization of the earthquake source and of its effects is a growing field of interest. Until recently, it still took several hours to determine the first-order attributes of a great earthquake (e.g. Mw≥ 7.5), even in a well-instrumented region. The main limiting factors were data saturation, the interference of different phases and the time duration and spatial extent of the source rupture. To accelerate centroid moment tensor (CMT) determinations, we have developed a source inversion algorithm based on modelling of the W phase, a very long period phase (100–1000 s) arriving at the same time as the P wave. The purpose of this work is to finely tune and validate the algorithm for large-to-moderate-sized earthquakes using three components of W phase ground motion at teleseismic distances. To that end, the point source parameters of all Mw≥ 6.5 earthquakes that occurred between 1990 and 2010 (815 events) are determined using Federation of Digital Seismograph Networks, Global Seismographic Network broad-band stations and STS1 global virtual networks of the Incorporated Research Institutions for Seismology Data Management Center. For each event, a preliminary magnitude obtained from W phase amplitudes is used to estimate the initial moment rate function half duration and to define the corner frequencies of the passband filter that will be applied to the waveforms. Starting from these initial parameters, the seismic moment tensor is calculated using a preliminary location as a first approximation of the centroid. A full CMT inversion is then conducted for centroid timing and location determination. Comparisons with Harvard and Global CMT solutions highlight the robustness of W phase CMT solutions at teleseismic distances. The differences in Mw rarely exceed 0.2 and the source mechanisms are very similar to one another. Difficulties arise when a target earthquake is shortly (e.g. within 10 hr) preceded by another large earthquake, which disturbs the

  17. Catalog of earthquake hypocenters at Alaskan volcanoes: January 1 through December 31, 2011

    USGS Publications Warehouse

    Dixon, James P.; Stihler, Scott D.; Power, John A.; Searcy, Cheryl K.

    2012-01-01

    Between January 1 and December 31, 2011, the Alaska Volcano Observatory (AVO) located 4,364 earthquakes, of which 3,651 occurred within 20 kilometers of the 33 volcanoes with seismograph subnetworks. There was no significant seismic activity above background levels in 2011 at these instrumented volcanic centers. This catalog includes locations, magnitudes, and statistics of the earthquakes located in 2011 with the station parameters, velocity models, and other files used to locate these earthquakes.

  18. Limits on great earthquake size at subduction zones

    NASA Astrophysics Data System (ADS)

    McCaffrey, R.

    2012-12-01

    Subduction zones are where the world's greatest earthquakes occur due to the large fault area available to slip. Yet some subduction zones are thought to be immune to these massive events, where quake size is limited by some physical processes or properties. Accordingly, the size of the 2011 Tohoku-oki Mw 9.0 earthquake caught some in the earthquake research community by surprise. The expectations of these massive quakes have been driven in the past by reliance on our short, incomplete history of earthquakes and causal relationships derived from it. The logic applied is that if a great earthquake has not happened in the past, that we know of, one cannot happen in the future. Using the ~100-year global earthquake seismological history, and in some cases extended with geologic observations, relationships between maximum earthquake sizes and other properties of subduction zones are suggested, leading to the notion that some subduction zones, like the Japan Trench, would never produce a magnitude ~9 event. Empirical correlations of earthquake behavior with other subduction parameters can give false-positive results when the data are incomplete or incorrect, when sample sizes are small, and when numerous attributes are examined. Given multi-century return times of the greatest earthquakes, ignorance of those return times and our relatively limited temporal observation span (in most places), I suggest that we cannot yet rule out great earthquakes at any subduction zones. Alternatively, using the length of a subduction zone that is available for slip as the predominant factor in determining maximum earthquake size, we cannot rule out that any subduction zone of a few hundred kilometers or more in length may be capable of producing a magnitude 9 or larger earthquake. Based on this method, the expected maximum size for the Japan Trench was 9.0 (McCaffrey, Geology, p. 263, 2008). The same approach indicates that a M > 9 off Java, with twice the population density of Honshu and much lower

  19. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  20. On relating apparent stress to the stress causing earthquake fault slip

    USGS Publications Warehouse

    McGarr, A.

    1999-01-01

    Apparent stress τa is defined as τa = ητ̄, where τ̄ is the average shear stress loading the fault plane to cause slip and η is the seismic efficiency, defined as Ea/W, where Ea is the energy radiated seismically and W is the total energy released by the earthquake. The results of a recent study in which apparent stresses of mining-induced earthquakes were compared to those measured for laboratory stick-slip friction events led to the hypothesis that τa/τ̄ ≤ 0.06. This hypothesis is tested here against a substantially augmented data set of earthquakes for which τ̄ can be estimated, mostly from in situ stress measurements, for comparison with τa. The expanded data set, which includes earthquakes artificially triggered at a depth of 9 km in the German Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland (KTB) borehole and natural tectonic earthquakes, covers a broad range of hypocentral depths, rock types, pore pressures, and tectonic settings. Nonetheless, over ~14 orders of magnitude in seismic moment, apparent stresses exhibit distinct upper bounds defined by a maximum seismic efficiency of ~0.06, consistent with the hypothesis proposed before. This behavior of τa and η can be expressed in terms of two parameters measured for stick-slip friction events in the laboratory: the ratio of the static to the dynamic coefficient of friction and the fault slip overshoot. Typical values for these two parameters yield seismic efficiencies of ~0.06. In contrast to efficiencies for laboratory events for which η is always near 0.06, those for earthquakes tend to be less than this bounding value because Ea for earthquakes is usually underestimated due to factors such as band-limited recording. Thus upper bounds on τa/τ̄ appear to be controlled by just a few fundamental aspects of frictional stick-slip behavior that are common to shallow earthquakes everywhere. Estimates of τ̄ from measurements of τa for suites of earthquakes, using τa

  1. Radon anomaly in soil gas as an earthquake precursor.

    PubMed

    Miklavcić, I; Radolić, V; Vuković, B; Poje, M; Varga, M; Stanić, D; Planinić, J

    2008-10-01

    The mechanical processes of earthquake preparation are always accompanied by deformations; afterwards, complex short- or long-term precursory phenomena can appear. Anomalies of radon concentrations in soil gas are registered a few weeks or months before many earthquakes. Radon concentrations in soil gas were continuously measured by LR-115 nuclear track detectors at site A (Osijek) during a 4-year period, as well as by the Barasol semiconductor detector at site B (Kasina) during 2 years. We investigated the influence of the meteorological parameters on the temporal radon variations, and we determined the multiple regression equation that enabled the reduction (deconvolution) of the radon variation caused by barometric pressure, rainfall and temperature. The pre-earthquake radon anomalies indicated 46% of the seismic events (criteria M ≥ 3, R < 200 km) at site A, and 21% at site B. Empirical equations relating earthquake magnitude, epicentral distance and precursor time enabled estimation or prediction of an earthquake that will occur at epicentral distance R from the monitoring site within the expected precursor time T.
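
    The deconvolution of meteorological influences can be sketched as an ordinary least-squares multiple regression whose residual series is then screened for pre-earthquake anomalies; variable names and any anomaly thresholds are illustrative assumptions.

```python
import numpy as np

def meteorological_residuals(radon, pressure, rainfall, temperature):
    """OLS fit of radon ~ 1 + pressure + rainfall + temperature; the residual
    series is what would be inspected for pre-earthquake anomalies."""
    radon = np.asarray(radon, dtype=float)
    X = np.column_stack([np.ones_like(radon),
                         np.asarray(pressure, dtype=float),
                         np.asarray(rainfall, dtype=float),
                         np.asarray(temperature, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(X, radon, rcond=None)
    return radon - X @ coeffs, coeffs
```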

  2. A strong correlation between induced peak dynamic Coulomb stress change from the 1992 M7.3 Landers, California, earthquake and the hypocenter of the 1999 M7.1 Hector Mine, California, earthquake

    NASA Astrophysics Data System (ADS)

    Kilb, Debi

    2003-01-01

    The 1992 M7.3 Landers earthquake may have played a role in triggering the 1999 M7.1 Hector Mine earthquake as suggested by their close spatial (˜20 km) proximity. Current investigations of triggering by static stress changes produce differing conclusions when small variations in parameter values are employed. Here I test the hypothesis that large-amplitude dynamic stress changes, induced by the Landers rupture, acted to promote the Hector Mine earthquake. I use a flat layer reflectivity method to model the Landers earthquake displacement seismograms. By requiring agreement between the model seismograms and data, I can constrain the Landers main shock parameters and velocity model. A similar reflectivity method is used to compute the evolution of stress changes. I find a strong positive correlation between the Hector Mine hypocenter and regions of large (>4 MPa) dynamic Coulomb stress changes (peak Δσf(t)) induced by the Landers main shock. A positive correlation is also found with large dynamic normal and shear stress changes. Uncertainties in peak Δσf(t) (1.3 MPa) are only 28% of the median value (4.6 MPa) determined from an extensive set (160) of model parameters. Therefore the correlation with dynamic stresses is robust to a range of Hector Mine main shock parameters, as well as to variations in the friction and Skempton's coefficients used in the calculations. These results imply that dynamic stress changes may be an important part of earthquake triggering, such that large-amplitude stress changes alter the properties of an existing fault in a way that promotes fault failure.

  3. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Engdhal, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product to use for seismic hazard studies. The new catalogue was necessary as improved seismic hazard studies necessitate that earthquake catalogues are homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitation, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having a reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue both for earthquakes that occurred beyond 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year program that aims at including in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundreds more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed both in the ISC-GEM global catalogue and regional catalogues.

  4. Five parameters for the evaluation of the soil nonlinearity during the Ms8.0 Wenchuan Earthquake using the HVSR method

    NASA Astrophysics Data System (ADS)

    Ren, Yefei; Wen, Ruizhi; Yao, Xinxin; Ji, Kun

    2017-08-01

    The consideration of soil nonlinearity is important for the accurate estimation of the site response. To evaluate the soil nonlinearity during the 2008 Ms8.0 Wenchuan Earthquake, 33 strong-motion records obtained from the main shock and 890 records from 157 aftershocks were collected for this study. The horizontal-to-vertical spectral ratio (HVSR) method was used to calculate five parameters: the ratio of predominant frequency (RFp), degree of nonlinearity (DNL), absolute degree of nonlinearity (ADNL), frequency of nonlinearity (fNL), and percentage of nonlinearity (PNL). The purpose of this study was to evaluate the soil nonlinearity level of 33 strong-motion stations and to investigate the characteristics, performance, and effective usage of these five parameters. Their correlations with the peak ground acceleration (PGA), peak ground velocity (PGV), average uppermost 30-m shear-wave velocity (VS30), and maximum amplitude of HVSR (Amax) were investigated. The results showed that all five parameters correlate well with PGA and PGV. The DNL, ADNL, and PNL also show a good correlation with Amax, which means that the degree of soil nonlinearity not only depends on the ground-motion amplitude (e.g., PGA and PGV) but also on the site condition. The fNL correlates with PGA and PGV but shows no correlation with either Amax or VS30, implying that the frequency width affected by the soil nonlinearity predominantly depends on the ground-motion amplitude rather than the site condition. At 16 of the 33 stations analyzed in this study, the site response showed evident (i.e., strong and medium) nonlinearity during the main shock of the Wenchuan Earthquake, where the ground-motion level was almost beyond the threshold of PGA > 200 cm/s² or PGV > 15 cm/s. The site response showed weak and no nonlinearity at the other 14 and 3 stations. These results also confirm that RFp, DNL, ADNL, and PNL are effective in identifying the soil nonlinearity behavior. The identification
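
    A minimal HVSR estimate for a single time window can be sketched as below; window selection, tapering details, and the Konno-Ohmachi smoothing commonly used in practice are omitted, so this only illustrates the spectral ratio itself.

```python
import numpy as np

def hvsr(ns, ew, ud, dt):
    """ns, ew, ud: equal-length N-S, E-W and vertical samples; dt: sample interval in s.
    Returns frequencies and the horizontal-to-vertical spectral ratio."""
    n = len(ud)
    taper = np.hanning(n)

    def amp(x):
        return np.abs(np.fft.rfft(np.asarray(x, dtype=float) * taper))

    freqs = np.fft.rfftfreq(n, dt)
    h = np.sqrt(amp(ns) * amp(ew))          # geometric mean of the horizontals
    v = np.maximum(amp(ud), 1e-12)          # guard against division by zero
    return freqs, h / v
```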

  5. Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling

    NASA Astrophysics Data System (ADS)

    Popescu, Emilia; Otilia Placinta, Anica; Borleasnu, Felix; Radulian, Mircea

    2017-12-01

    The Vrancea seismic nest, located at the South-Eastern Carpathians Arc bend, in Romania, is a well-confined cluster of seismicity at intermediate depth (60-180 km). During the last 100 years four major shocks were recorded in the lithosphere body descending almost vertically beneath the Vrancea region: 10 November 1940 (Mw 7.7, depth 150 km), 4 March 1977 (Mw 7.4, depth 94 km), 30 August 1986 (Mw 7.1, depth 131 km) and a double shock on 30 and 31 May 1990 (Mw 6.9, depth 91 km and Mw 6.4, depth 87 km, respectively). The probability of repeated earthquakes in the Vrancea seismogenic volume is relatively large taking into account the high density of foci. The purpose of the present paper is to investigate source parameters and clustering properties for the repetitive earthquakes (located close to each other) recorded in the Vrancea seismogenic subcrustal region. To this aim, we selected a set of earthquakes as templates for different co-located groups of events covering the entire depth range of active seismicity. For the identified clusters of repetitive earthquakes, we applied the spectral ratio technique and empirical Green’s function deconvolution, in order to constrain the source parameters as much as possible. Seismicity patterns of repeated earthquakes in space, time and size are investigated in order to detect potential interconnections with larger events. Specific scaling properties are analyzed as well. The present analysis represents a first attempt to provide a strategy for detecting and monitoring possible interconnections between different nodes of seismic activity and their role in modelling tectonic processes responsible for generating the major earthquakes in the Vrancea subcrustal seismogenic source.

  6. Earthquakes for Kids

    MedlinePlus

    An educational page for children with sections including Science Fair Projects, Become an Earthquake Scientist, Cool Earthquake Facts, and Today in Earthquake History, covering topics such as studying past earthquakes across a fault and using GPS instruments to measure slow movements of the ground.

  7. Towards Estimating the Magnitude of Earthquakes from EM Data Collected from the Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.

    2016-12-01

    During the past three years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone. Such evidence was presented at the AGU 2015 Fall meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. The process has been extended in time; only pulses associated with the occurrence of earthquakes have been used, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated with those parameters. The results shown, including an animated data video, are a first approximation towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone.
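
    The abstract does not specify the functional form that maps pulse parameters to magnitude, so the sketch below only illustrates the general idea: fit a simple least-squares model relating magnitude to log-transformed pulse features. All feature names and numerical values are synthetic placeholders, not measurements from the Peruvian network.

import numpy as np

# Hypothetical pulse features per event: pulse count, mean amplitude (nT), total duration (s).
# Synthetic placeholder values; a least-squares fit stands in for the (unspecified) function
# of pulse parameters described in the abstract.
features = np.array([[12, 0.8, 340.0],
                     [30, 1.5, 900.0],
                     [55, 2.1, 1600.0],
                     [90, 3.4, 2500.0]])
magnitudes = np.array([4.6, 5.1, 5.6, 6.0])

X = np.column_stack([np.ones(len(features)), np.log10(features)])   # intercept + log features
coeffs, *_ = np.linalg.lstsq(X, magnitudes, rcond=None)

def estimate_magnitude(pulse_count, mean_amplitude, duration):
    x = np.concatenate([[1.0], np.log10([pulse_count, mean_amplitude, duration])])
    return float(x @ coeffs)

print(estimate_magnitude(40, 1.8, 1200.0))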

  8. Earthquake Risk Reduction to Istanbul Natural Gas Distribution Network

    NASA Astrophysics Data System (ADS)

    Zulfikar, Can; Kariptas, Cagatay; Biyikoglu, Hikmet; Ozarpa, Cevat

    2017-04-01

    Istanbul Natural Gas Distribution Corporation (IGDAS) is one of the end users of the Istanbul Earthquake Early Warning (EEW) signal. IGDAS, the primary natural gas provider in Istanbul, operates an extensive system of 9,867 km of gas lines with 750 district regulators and 474,000 service boxes. The natural gas reaches the Istanbul city borders at 70 bar in a 30-inch-diameter steel pipeline. The gas pressure is reduced to 20 bar at RMS stations and distributed to district regulators inside the city. 110 of the 750 district regulators are instrumented with strong-motion accelerometers in order to cut gas flow during an earthquake if ground-motion parameters exceed certain threshold levels. Also, state-of-the-art protection systems automatically cut natural gas flow when breaks in the gas pipelines are detected. IGDAS uses a sophisticated SCADA (supervisory control and data acquisition) system to monitor the state of health of its pipeline network. This system provides real-time information about quantities related to pipeline monitoring, including input-output pressure, drawing information, positions of station and RTU (remote terminal unit) gates, and slam-shut mechanism status at the 750 district regulator sites. The IGDAS real-time earthquake risk reduction algorithm follows 4 stages, as below: 1) Real-time ground motion data are transmitted from 110 IGDAS and 110 KOERI (Kandilli Observatory and Earthquake Research Institute) acceleration stations to the IGDAS SCADA Center and the KOERI data center. 2) During an earthquake event, EEW information is sent from the IGDAS SCADA Center to the IGDAS stations. 3) Automatic shut-off is applied at IGDAS district regulators, and calculated parameters are sent from stations to the IGDAS SCADA Center and KOERI. 4) Integrated building and gas pipeline damage maps are prepared immediately after the earthquake event. Today's technology allows rapid estimation of the
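
    Stage 3 of the algorithm above reduces to a threshold test on ground-motion parameters at each district regulator. The sketch below shows such a test in Python; the threshold values and station identifiers are placeholders, not IGDAS operational settings.

from dataclasses import dataclass

# Assumed (placeholder) thresholds, not IGDAS settings.
PGA_THRESHOLD_CM_S2 = 200.0   # cm/s^2
PGV_THRESHOLD_CM_S = 15.0     # cm/s

@dataclass
class StationReading:
    station_id: str
    pga_cm_s2: float
    pgv_cm_s: float

def should_shut_off(reading: StationReading) -> bool:
    """Return True if either ground-motion parameter exceeds its threshold."""
    return (reading.pga_cm_s2 >= PGA_THRESHOLD_CM_S2 or
            reading.pgv_cm_s >= PGV_THRESHOLD_CM_S)

# Hypothetical readings from two regulators during an event:
for r in [StationReading("DR-017", 35.0, 4.2), StationReading("DR-104", 260.0, 22.5)]:
    action = "SHUT OFF" if should_shut_off(r) else "keep open"
    print(f"{r.station_id}: {action}")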

  9. Preliminary Results of Earthquake-Induced Building Damage Detection with Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Uca Avci, Z. D.; Sunar, F.

    2016-06-01

    Earthquakes are the most destructive natural disasters, resulting in massive loss of life, infrastructure damage and financial losses. Earthquake-induced building damage detection is a very important step after earthquakes, since earthquake-induced building damage is one of the most critical threats to cities and countries in terms of the extent of the damage, the rate of collapsed buildings, the damage grade near the epicenters and also the building damage types for all constructions. The Van-Ercis (Turkey) earthquake (Mw = 7.1) occurred on October 23rd, 2011, at 10:41 UTC (13:41 local time), centered at 38.75 N 43.36 E, which places the epicenter about 30 kilometers north of the city of Van. It is recorded that 604 people died and approximately 4000 buildings collapsed or were seriously damaged by the earthquake. In this study, high-resolution satellite images of Van-Ercis, acquired by Quickbird-2 (Digital Globe Inc.) after the earthquake, were used to detect the debris areas using an object-based image classification. Two different land surfaces, having homogeneous and heterogeneous land covers, were selected as case study areas. As a first step of the object-based image processing, segmentation was applied with a convenient scale parameter and homogeneity criterion parameters. As a next step, condition-based classification was used. In the final step of this preliminary study, outputs were compared with street-view/orthophotos for the verification and evaluation of the classification accuracy.
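
    A minimal sketch of the object-based workflow described above (segmentation followed by condition-based classification) might look like the following, here using scikit-image's SLIC segmentation and simple per-object brightness/texture rules. The thresholds and segment count are illustrative assumptions, not the parameters used with the Quickbird-2 imagery.

import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def classify_debris(image: np.ndarray, n_segments=500, brightness_thr=0.55, texture_thr=0.02):
    """Segment a post-event RGB image into objects, then flag debris-like objects by rule."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    gray = image.mean(axis=2)
    debris_mask = np.zeros(segments.shape, dtype=bool)
    for region in regionprops(segments):
        pixels = gray[segments == region.label]
        bright = pixels.mean() > brightness_thr    # debris tends to appear as bright rubble...
        rough = pixels.var() > texture_thr         # ...with heterogeneous texture
        if bright and rough:
            debris_mask[segments == region.label] = True
    return debris_mask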

  10. Atmospheric Signals Associated with Major Earthquakes. A Multi-Sensor Approach. Chapter 9

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Kafatos, Menas; Taylor, Patrick

    2011-01-01

    We are studying the possibility of a connection between atmospheric observations recorded by several ground- and satellite-based sensors and earthquake precursors. Our main goal is to search for the existence and cause of physical phenomena related to prior earthquake activity and to gain a better understanding of the physics of earthquakes and earthquake cycles. The recent catastrophic earthquake in Japan in March 2011 has provided renewed interest in the important question of the existence of precursory signals preceding strong earthquakes. We will demonstrate our approach based on integration and analysis of several atmospheric and environmental parameters that were found to be associated with earthquakes. These observations include: thermal infrared radiation, radon/ion activities, air temperature and humidity, and the concentration of electrons in the ionosphere. We describe a possible physical link between atmospheric observations and earthquake precursors using the latest Lithosphere-Atmosphere-Ionosphere Coupling model, one of several paradigms used to explain our observations. Initial results for the period of 2003-2009 are presented from our systematic hind-cast validation studies. We present our findings of multi-sensor atmospheric precursory signals for two major earthquakes in Japan, the M6.7 Niigata-ken Chuetsu-oki earthquake of July 16, 2007 and the latest M9.0 great Tohoku earthquake of March 11, 2011.

  11. Validating of Atmospheric Signals Associated with some of the Major Earthquakes in Asia (2003-2009)

    NASA Technical Reports Server (NTRS)

    Ouzounov, D. P.; Pulinets, S.; Liu, J. Y.; Hattori, K.; Oarritm N,; Taylor, P. T.

    2010-01-01

    The recent catastrophic earthquake in Haiti (January 2010) has provided renewed interest in the important question of the existence of precursory signals related to strong earthquakes. Recent studies (VESTO workshop in Japan, 2009) have shown that there were precursory atmospheric signals observed on the ground and in space associated with several recent earthquakes. The major question, still widely debated in the scientific community, is whether such signals systematically precede major earthquakes. To address this problem we have started to validate the anomalous atmospheric signals during the occurrence of large earthquakes. Our approach is based on integrated analysis of several physical and environmental parameters (thermal infrared radiation, electron concentration in the ionosphere, radon/ion activities, air temperature and seismicity) that were found to be associated with earthquakes. We performed hind-cast detection over three different regions with high seismicity (Taiwan, Japan and Kamchatka) for the period of 2003-2009. We are using existing thermal satellite data (Aqua and POES), in situ atmospheric data (NOAA/NCEP), and ionospheric variability data (GPS/TEC and DEMETER). The first part of this validation included 42 major earthquakes (M greater than 5.9): 10 events in Taiwan, 15 events in Japan, 15 events in Kamchatka, and the most recent events, including the M8.0 Wenchuan earthquake (May 2008) in China and the M7.9 Samoa earthquakes (Sep 2009). Our initial results suggest a systematic appearance of atmospheric anomalies near the epicentral area, 1 to 5 days prior to the largest earthquakes, that could be explained by a coupling process between the observed physical parameters and the earthquake preparation processes.

  12. Spatio-Temporal Fluctuations of the Earthquake Magnitude Distribution: Robust Estimation and Predictive Power

    NASA Astrophysics Data System (ADS)

    Olsen, S.; Zaliapin, I.

    2008-12-01

    We establish a positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. In order to accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (the slope of the Gutenberg-Richter exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for further development of earthquake prediction and seismic hazard assessment methods.
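
    For readers unfamiliar with Bayesian b-value estimation, a minimal sketch is given below. It assumes magnitudes above completeness follow the Gutenberg-Richter exponential law and uses a conjugate Gamma prior, which is one simple way to obtain stable estimates from small samples; it is not the authors' exact sequential MCM framework.

import numpy as np

def bvalue_posterior(mags, m_c, prior_shape=1.0, prior_rate=1.0):
    """
    Bayesian update for the Gutenberg-Richter b-value.

    Magnitudes above completeness m_c are assumed to follow p(m) = beta * exp(-beta * (m - m_c))
    with beta = b * ln(10); a Gamma prior on beta is conjugate, so the posterior is
    Gamma(shape + n, rate + sum(m - m_c)). Returns posterior mean and std of b.
    """
    m = np.asarray(mags)
    m = m[m >= m_c]
    shape = prior_shape + m.size
    rate = prior_rate + np.sum(m - m_c)
    beta_mean = shape / rate
    beta_std = np.sqrt(shape) / rate
    return beta_mean / np.log(10), beta_std / np.log(10)

# Example: update window by window to track temporal fluctuations of b:
# b_hat, b_err = bvalue_posterior(window_of_magnitudes, m_c=2.5)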

  13. Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.

    2013-05-01

    Seismicity in southern Kanto was activated by the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at present or in the future? To answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probability calculations, often used for aftershock hazard assessment, which rely on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ˜ 1. Then, there is good agreement with the OU law with p ˜ 0.5, which indicates a notably slow decay. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
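
    The probability calculation described above combines the Gutenberg-Richter and Omori-Utsu laws. A generic sketch of that combination is shown below; the parameter values in the usage comment are placeholders, not the estimates obtained for southern Kanto.

import numpy as np

def expected_number(t1, t2, m_min, k, p, c, b, m_ref):
    """
    Expected number of events with M >= m_min in the window [t1, t2] (days), combining an
    Omori-Utsu decay k / (t + c)**p with Gutenberg-Richter scaling 10**(-b * (m_min - m_ref)).
    Generic parameter names, not those of the study.
    """
    gr = 10.0 ** (-b * (m_min - m_ref))
    if np.isclose(p, 1.0):
        integral = np.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return k * gr * integral

def probability_of_event(t1, t2, m_min, **params):
    """Poisson probability of at least one M >= m_min event in [t1, t2]."""
    n = expected_number(t1, t2, m_min, **params)
    return 1.0 - np.exp(-n)

# e.g. probability_of_event(1.0, 3 * 365.0, 6.0, k=50.0, p=0.5, c=0.1, b=1.0, m_ref=4.0)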

  14. Depth variations of friction rate parameter derived from dynamic modeling of GPS afterslip associated with the 2003 Mw 6.5 Chengkung earthquake in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, J. C.; Liu, Z. Y. C.; Shirzaei, M.

    2016-12-01

    The Chihshang fault lies at the plate suture between the Eurasian and the Philippine Sea plates along the Longitudinal Valley in eastern Taiwan. Here we investigate the depth variation of fault frictional parameters derived from the post-seismic slip model of the 2003 Mw 6.5 Chengkung earthquake. Assuming rate-strengthening friction, we implement an inverse dynamic modeling scheme to estimate the frictional parameter (a-b) and the reference friction coefficient (μ*) at depth, taking into account the pre-seismic stress as well as the co-seismic and post-seismic Coulomb stress changes associated with the 2003 Chengkung earthquake. We investigate two coseismic models, by Hsu et al. (2009) and Thomas et al. (2014). Model parameters, including the stress gradient and depth-dependent a-b and μ*, are determined by fitting the transient post-seismic geodetic signal measured at 12 continuous GPS stations. In our inversion scheme, we apply a non-linear optimization algorithm, the Genetic Algorithm (GA), to search for the optimum frictional parameters. Considering the zone with velocity-strengthening frictional properties along the Chihshang fault, the optimum a-b is 7-8 × 10^-3 along the shallow part of the fault (0-10 km depth) and 1-2 × 10^-2 at 22-28 km depth. The optimum solution for μ* is 0.3-0.4 at 0-10 km depth and reaches 0.8 at 22-28 km depth. The optimized stress gradient is 54 MPa/km. The inferred frictional parameters are consistent with laboratory measurements on clay-rich fault zone gouges comparable to the Lichi Melange, the main rock composition of the Chihshang fault (which is thrust over Holocene alluvial deposits across the fault), at least in the upper few kilometers of the fault. Our results can facilitate further studies, in particular on the seismic cycle and hazard assessment of active faults.

  15. Effect of slip-area scaling on the earthquake frequency-magnitude relationship

    NASA Astrophysics Data System (ADS)

    Senatorski, Piotr

    2017-06-01

    The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture-area versus slip scaling exponent is derived. The relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and for induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: by applied stress increase and by fault strength reduction, respectively.
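
    A minimal sketch of the von Neumann acceptance-rejection idea mentioned above is given below: uniform candidate magnitudes are accepted with probability 10^(-b(m - m_min)), so the accepted sample follows a truncated Gutenberg-Richter distribution. This is a generic illustration, not the paper's full slip-area-constrained model.

import numpy as np

def sample_magnitudes(n, b=1.0, m_min=2.0, m_max=8.0, rng=None):
    """
    Von Neumann acceptance-rejection sampling of earthquake magnitudes.

    Candidates are drawn uniformly in [m_min, m_max] and accepted with probability
    10**(-b * (m - m_min)), so small events are accepted far more often than large ones
    and the accepted sample follows a (truncated) Gutenberg-Richter distribution.
    """
    rng = rng or np.random.default_rng()
    out = []
    while len(out) < n:
        m = rng.uniform(m_min, m_max)
        if rng.uniform() < 10.0 ** (-b * (m - m_min)):
            out.append(m)
    return np.array(out)

# mags = sample_magnitudes(10_000, b=1.0)  # the histogram of mags decays roughly as 10**(-b*m)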

  16. A Multi-parametric Climatological Approach to Study the 2016 Amatrice-Norcia (Central Italy) Earthquake Preparatory Phase

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; De Santis, Angelo; Marchetti, Dedalo; Cianchini, Gianfranco

    2017-10-01

    Based on observations prior to earthquakes, recent theoretical considerations suggest that some geophysical quantities reveal abnormal changes that anticipate moderate and strong earthquakes within a defined spatial area (the so-called Dobrovolsky area), according to a lithosphere-atmosphere-ionosphere coupling model. One of the possible pre-earthquake effects could be the appearance of climatological anomalies in the epicentral region, weeks to months before the major earthquakes. In this paper, the period of 2 months preceding the Amatrice-Norcia (Central Italy) earthquake sequence, which started on 24 August 2016 with an M6 earthquake and a few months later produced two other major shocks (an M5.9 on 26 October and then an M6.5 on 30 October), was analyzed in terms of skin temperature, total column water vapour and total column ozone, compared with the past 37-year trend. The novelty of the method lies in the way the complete time series is reduced, with the possible effect of global warming properly removed. The simultaneous analysis showed the presence of persistent contemporary anomalies in all of the analysed parameters. To validate the technique, a confutation/confirmation analysis was undertaken in which these parameters were successfully analyzed in the same months of a seismically "calm" year, when significant seismicity was not present. We also extended the analysis to all available years to construct a confusion matrix comparing the occurrence of climatological data anomalies with real seismicity. This work confirms the potential of multiple parameters in anticipating the occurrence of large earthquakes in Central Italy, thus reinforcing the idea of considering such behaviour an effective tool for an integrated system of future earthquake prediction.

  17. Source Parameters for Moderate Earthquakes in the Zagros Mountains with Implications for the Depth Extent of Seismicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, A; Brazier, R; Nyblade, A

    2009-02-23

    Six earthquakes within the Zagros Mountains with magnitudes between 4.9 and 5.7 have been studied to determine their source parameters. These events were selected for study because they were reported in open catalogs to have lower crustal or upper mantle source depths and because they occurred within an area of the Zagros Mountains where crustal velocity structure has been constrained by previous studies. Moment tensor inversion of regional broadband waveforms has been combined with forward modeling of depth phases on short-period teleseismic waveforms to constrain source depths and moment tensors. Our results show that all six events nucleated within the upper crust (<11 km depth) and have thrust mechanisms. This finding supports other studies that call into question the existence of lower crustal or mantle events beneath the Zagros Mountains.

  18. Rupture parameters of the 2003 Zemmouri (Mw 6.8), Algeria, earthquake from joint inversion of interferometric synthetic aperture radar, coastal uplift, and GPS

    USGS Publications Warehouse

    Belabbes, S.; Wicks, Charles; Cakir, Z.; Meghraoui, M.

    2009-01-01

    We study the surface deformation associated with the 21 May 2003 (Mw = 6.8) Zemmouri (Algeria) earthquake, the strongest seismic event felt in the Algiers region since 1716. The thrust earthquake mechanism and related surface deformation revealed an average 0.50 m coastal uplift along an approximately 55-km-long coastline. We obtain coseismic interferograms using Envisat advanced synthetic aperture radar (ASAR) (IS2) and RADARSAT standard beam (ST4) data from both the ascending and descending orbits of the Envisat satellite, whereas the RADARSAT data proved useful only in the descending mode. While the two RADARSAT interferograms cover the earthquake area, Envisat data cover only the western half of the rupture zone. Although the interferometric synthetic aperture radar (InSAR) coherence in the epicenter area is poor, deformation fringes are observed along the coast in different patches. In the Boumerdes area, the maximum coseismic deformation is indicated by the high gradient of fringes visible in all interferograms, in agreement with field measurements (tape, differential GPS, leveling, and GPS). To constrain the earthquake rupture parameters, we model the interferograms and uplift measurements using elastic dislocations on triangular fault patches in an elastic and homogeneous half-space. We invert the coseismic slip using first a planar surface and second a curved fault, both constructed from triangular elements using the Poly3Dinv program, which uses a damped least-squares minimization. The best fit of InSAR, coastal uplift, and GPS data corresponds to a 65-km-long fault rupture dipping 40° to 50° SE, located 8 to 13 km offshore, with a change in strike west of Boumerdes from N60°-65° to N95°-105°. The inferred rupture geometry at depth correlates well with the seismological results and may have critical implications for the seismic hazard assessment of the Algiers region. Copyright 2009 by the American Geophysical Union.

  19. Ionospheric anomalies detected by ionosonde and possibly related to crustal earthquakes in Greece

    NASA Astrophysics Data System (ADS)

    Perrone, Loredana; De Santis, Angelo; Abbattista, Cristoforo; Alfonsi, Lucilla; Amoruso, Leonardo; Carbone, Marianna; Cesaroni, Claudio; Cianchini, Gianfranco; De Franceschi, Giorgiana; De Santis, Anna; Di Giovambattista, Rita; Marchetti, Dedalo; Pavòn-Carrasco, Francisco J.; Piscini, Alessandro; Spogli, Luca; Santoro, Francesca

    2018-03-01

    Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003-2015 period were examined to check if the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified on the observed variations of the sporadic E-layer parameters (h'Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and the epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the ionospheric anomalies occurrence may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.

  20. Earthquake Relocation in the Middle East with Geodetically-Calibrated Events

    NASA Astrophysics Data System (ADS)

    Brengman, C.; Barnhart, W. D.

    2017-12-01

    Regional and global earthquake catalogs in tectonically active regions commonly contain mislocated earthquakes that impede efforts to address first order characteristics of seismogenic strain release and to monitor anthropogenic seismic events through the Comprehensive Nuclear-Test-Ban Treaty. Earthquake mislocations are particularly limiting in the plate boundary zone between the Arabia and Eurasia plates of Iran, Pakistan, and Turkey where earthquakes are commonly mislocated by 20+ kilometers and hypocentral depths are virtually unconstrained. Here, we present preliminary efforts to incorporate calibrated earthquake locations derived from Interferometric Synthetic Aperture Radar (InSAR) observations into a relocated catalog of seismicity in the Middle East. We use InSAR observations of co-seismic deformation to determine the locations, geometries, and slip distributions of small to moderate magnitude (M4.8+) crustal earthquakes. We incorporate this catalog of calibrated event locations, along with other seismologically-calibrated earthquake locations, as "priors" into a fully Bayesian multi-event relocation algorithm that relocates all teleseismically and regionally recorded earthquakes over the time span 1970-2017, including calibrated and uncalibrated events. Our relocations are conducted using cataloged phase picks and BayesLoc. We present a suite of sensitivity tests for the time span of 2003-2014 to explore the impacts of our input parameters (i.e., how a point source is defined from a finite fault inversion) on the behavior of the event relocations, potential improvements to depth estimates, the ability of the relocation to recover locations outside of the time span in which there are InSAR observations, and the degree to which our relocations can recover "known" calibrated earthquake locations that are not explicitly included as a-priori constraints. Additionally, we present a systematic comparison of earthquake relocations derived from phase picks of two

  1. Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms

    NASA Astrophysics Data System (ADS)

    Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng

    2013-04-01

    Inverting seismic waveforms for the finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also useful for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using the accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point-source parameters for the mainshock and aftershocks were first obtained by complete waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using a projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with the theoretical patterns, which are functions of the following parameters: focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalogs to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. The preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed from this study can be applied to strong-motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
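
    The duration patterns mentioned above follow the standard unilateral-directivity relation, in which the apparent source duration shortens for rays leaving the source in the rupture direction. A sketch of that relation is given below, with illustrative parameter values.

import numpy as np

def apparent_duration(rupture_length_km, rupture_vel_kms, phase_vel_kms, theta_deg):
    """
    Apparent source time function duration for a unilateral rupture, along a ray leaving the
    source at angle theta from the rupture direction:
        T_app = (L / Vr) * (1 - (Vr / c) * cos(theta))
    Stations "ahead" of the rupture (theta ~ 0) see compressed ASTFs; stations behind it see
    stretched ones. A textbook relation used here only to sketch the pattern-matching idea.
    """
    theta = np.radians(theta_deg)
    return (rupture_length_km / rupture_vel_kms) * (1.0 - (rupture_vel_kms / phase_vel_kms) * np.cos(theta))

# Compare predicted durations for a candidate fault plane against ASTF durations, e.g.:
# predicted = apparent_duration(8.0, 2.5, 3.5, np.array([0.0, 90.0, 180.0]))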

  2. Missing great earthquakes

    USGS Publications Warehouse

    Hough, Susan E.

    2013-01-01

    The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

  3. Historical earthquakes studies in Eastern Siberia: State-of-the-art and plans for future

    NASA Astrophysics Data System (ADS)

    Radziminovich, Ya. B.; Shchetnikov, A. A.

    2013-01-01

    Many problems in investigating the historical seismicity of East Siberia remain unsolved. A list of these problems may refer particularly to the quality and reliability of data sources, the completeness of parametric earthquake catalogues, and the precision and transparency of estimates of the main parameters of historical earthquakes. The main purpose of this paper is to highlight the current status of studies of historical seismicity in Eastern Siberia, as well as to analyse existing macroseismic and parametric earthquake catalogues. We also made an attempt to identify the main shortcomings of existing catalogues and to clarify the reasons for their appearance in the light of the history of seismic observations in Eastern Siberia. Contentious issues in the catalogues of earthquakes are considered by the example of three strong historical earthquakes that are important for assessing seismic hazard in the region. In particular, it was found that due to a technical error the parameters of the large M = 7.7 earthquake of 1742 were transferred from the regional catalogue to the worldwide database with incorrect epicenter coordinates. The way stereotypes concerning active tectonics influence the localization of the epicenter is shown by the example of a strong M = 6.4 earthquake of 1814. The effect of insufficient use of the primary data sources on the completeness of earthquake catalogues is illustrated by the example of a strong M = 7.0 event of 1859. Analysis of the state-of-the-art of historical earthquake studies in Eastern Siberia allows us to propose the following activities for the near future: (1) database compilation including initial descriptions of macroseismic effects with reference to their place and time of occurrence; (2) parameterization of the maximum possible (magnitude-unlimited) number of historical earthquakes on the basis of all the data available; (3) compilation of an improved version of the parametric historical earthquake catalogue for East Siberia with

  4. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com; Afnimar,; Puspito, Nanang T.

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculation. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M0), the moment magnitude (Mw), the rupture duration (T0) and the focal mechanism. These determine the types of tsunamigenic earthquakes and tsunami earthquakes. We calculate these quantities from teleseismic wave signal processing of the initial P-wave phase with a bandpass filter of 0.001 Hz to 5 Hz. A total of 84 broadband seismometers at distances of 30° to 90° were used. The 2 June 1994 Banyuwangi earthquake with Mw = 7.8 and the 17 July 2006 Pangandaran earthquake with Mw = 7.7 meet the criteria for a tsunami earthquake, with a ratio of about Θ = −6.1, long rupture duration T0 > 100 s and a high tsunami (H > 7 m). The 2 September 2009 Tasikmalaya earthquake, with Mw = 7.2, Θ = −5.1 and T0 = 27 s, is characterized as a small tsunamigenic earthquake.
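
    The slowness ratio Θ and moment magnitude used above are straightforward to compute once E and M0 are known. The sketch below uses the standard definitions Θ = log10(E/M0) and Mw = (2/3)(log10 M0 − 9.1); the numerical values are hypothetical and only illustrate the discriminant, not the study's measurements.

import numpy as np

def theta_parameter(radiated_energy_j, seismic_moment_nm):
    """Slowness parameter Theta = log10(E / M0), with E in J and M0 in N·m."""
    return np.log10(radiated_energy_j / seismic_moment_nm)

def moment_magnitude(seismic_moment_nm):
    """Mw from seismic moment in N·m (standard Hanks-Kanamori relation)."""
    return (2.0 / 3.0) * (np.log10(seismic_moment_nm) - 9.1)

# Hypothetical values for illustration only:
E, M0 = 3.0e14, 5.0e20            # radiated energy [J], seismic moment [N·m]
print(theta_parameter(E, M0), moment_magnitude(M0))
# A markedly deficient Theta (around -6) together with a long rupture duration is the
# signature used above to flag a tsunami earthquake.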

  5. Anomalous behavior of the ionosphere before strong earthquakes

    NASA Astrophysics Data System (ADS)

    Peddi Naidu, P.; Madhavi Latha, T.; Madhusudhana Rao, D. N.; Indira Devi, M.

    2017-12-01

    In recent years, seismo-ionospheric coupling has been studied using various ionospheric parameters such as total electron content, critical frequencies, electron density, and the phase and amplitude of very low frequency (VLF) waves. The present study deals with the behavior of the ionosphere in the pre-earthquake period of 3-4 days at various stations, adopting the critical frequencies of the Es and F2 layers. The relative phase measurements of 16 kHz VLF wave transmissions from Rugby (UK), received at Visakhapatnam (India), are utilized to study the D-region during seismically active periods. The results show that foEs increases a few hours before the time of occurrence of the earthquake and daytime foF2 values are found to be high during the sunlit hours in the pre-earthquake period of 2-3 days. Anomalous VLF phase fluctuations are observed during the sunset hours before the earthquake event. The results are discussed in the light of the probable mechanism proposed by previous investigators.

  6. Seismicity and stress transfer studies in eastern California and Nevada: Implications for earthquake sources and tectonics

    NASA Astrophysics Data System (ADS)

    Ichinose, Gene Aaron

    The source parameters for eastern California and western Nevada earthquakes are estimated from regionally recorded seismograms using a moment tensor inversion. We use the point-source approximation and fit the seismograms at long periods. We generated a moment tensor catalog for Mw > 4.0 since 1997 and Mw > 5.0 since 1990. The catalog includes centroid depths, seismic moments, and focal mechanisms. The regions with the most moderate-sized earthquakes in the last decade were the aftershock zones located in Eureka Valley, Double Spring Flat, Coso, Ridgecrest, Fish Lake Valley, and Scotty's Junction. The remaining moderate-sized earthquakes were distributed across the region. The 1993 (Mw 6.0) Eureka Valley earthquake occurred in the Eastern California Shear Zone. Careful aftershock relocations were used to resolve structure from aftershock clusters. The mainshock appears to rupture along the western side of the Last Chance Range along a 30° to 60° west-dipping fault plane, consistent with previous geodetic modeling. We estimate the source parameters for aftershocks at source-receiver distances less than 20 km using waveform modeling. The relocated aftershocks and waveform modeling results do not indicate any significant evidence of low-angle faulting (dips < 30°). The results did reveal deformation along vertical faults within the hanging-wall block, consistent with observed surface rupture along the Saline Range above the dipping fault plane. The 1994 (Mw 5.8) Double Spring Flat earthquake occurred along the eastern Sierra Nevada between overlapping normal faults. Aftershock migration and cross-fault triggering occurred in the following two years, producing seventeen Mw > 4 aftershocks. The source parameters for the largest aftershocks were estimated from regionally recorded seismograms using moment tensor inversion. We estimate the source parameters for two moderate-sized earthquakes which occurred near Reno, Nevada: the 1995 (Mw 4.4) Border Town, and the 1998 (Mw

  7. Analog earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, R.B.

    1995-09-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  8. Fractals and Forecasting in Earthquakes and Finance

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with similar ideas as large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model, involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets, it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)

  9. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    NASA Astrophysics Data System (ADS)

    Plotkin, V. V.

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably within a region of about 1000 km from the epicenter. This result is consistent with those of investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes due to solar wind parameters and the Kp index.

  10. Recent Mega-Thrust Tsunamigenic Earthquakes and PTHA

    NASA Astrophysics Data System (ADS)

    Lorito, S.

    2013-05-01

    , despite different methods like event trees having been used for different applications. I will define a quite general PTHA framework, based on the mixed use of logic and event trees. I will first discuss a particular class of epistemic uncertainties, i.e. those related to the parametric fault characterization in terms of geometry, kinematics, and assessment of activity rates. A systematic classification into six justification levels of the epistemic uncertainty related to the existence and behaviour of fault sources will be presented. Then, a particular branch of the logic tree is chosen in order to discuss just the aleatory variability of earthquake parameters, represented with an event tree. Even so, PTHA based on numerical scenarios is too demanding a computational task, particularly when probabilistic inundation maps are needed. To try to reduce the computational burden without under-representing the source variability, the event tree is first constructed by taking care of densely (over-)sampling the earthquake parameter space, and then the earthquakes are filtered based on their associated tsunami impact offshore, before calculating inundation maps. I will describe this approach by means of a case study in the Mediterranean Sea, namely the PTHA for some locations on the Eastern Sicily coast and the Southern Crete coast due to potential subduction earthquakes occurring on the Hellenic Arc.

  11. The Mw=8.8 Maule earthquake aftershock sequence, event catalog and locations

    NASA Astrophysics Data System (ADS)

    Meltzer, A.; Benz, H.; Brown, L.; Russo, R. M.; Beck, S. L.; Roecker, S. W.

    2011-12-01

    The aftershock sequence of the Mw = 8.8 Maule earthquake off the coast of Chile in February 2010 is one of the most well-recorded aftershock sequences from a great megathrust earthquake. Immediately following the Maule earthquake, teams of geophysicists from Chile, France, Germany, Great Britain and the United States coordinated resources to capture aftershocks and other seismic signals associated with this significant earthquake. In total, 91 broadband, 48 short-period, and 25 accelerometer stations were deployed above the rupture zone of the main shock, from 33-38.5°S and from the coast to the Andean range front. In order to integrate these data into a unified catalog, the USGS National Earthquake Information Center developed procedures to use its real-time seismic monitoring system (Bulletin Hydra) to detect, associate, locate and compute earthquake source parameters from these stations. As a first step in the process, the USGS has built a seismic catalog of all M3.5 or larger earthquakes for the time period of the main aftershock deployment, from March 2010 to October 2010. The catalog includes earthquake locations, magnitudes (Ml, Mb, Mb_BB, Ms, Ms_BB, Ms_VX, Mc), associated phase readings and regional moment tensor solutions for most of the M4 or larger events. Also included in the catalog are teleseismic phases and amplitude measures and body-wave MT and CMT solutions for the larger events, typically M5.5 and larger. Tuning of automated detection and association parameters should allow a complete catalog of events to approximately M2.5 or larger for this dataset of more than 164 stations. We characterize the aftershock sequence in terms of magnitude, frequency, and location over time. Using the catalog locations and travel times as a starting point, we use double-difference techniques to investigate relative locations and earthquake clustering. In addition, phase data from candidate ground-truth events and modeling of surface waves can be used to calibrate the

  12. Electromagnetic Energy Released in the Subduction (Benioff) Zone in Weeks Previous to Earthquake Occurrence in Central Peru and the Estimation of Earthquake Magnitudes.

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Centa, V. A.; Bleier, T.

    2017-12-01

    During the past four years, magnetometers deployed along the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. Additional work has been done and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Collected data from several events and from other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The process has been extended in time and to other geographical places. Only pulses associated with the occurrence of earthquakes are taken into account, in an area which is highly associated with subduction-zone seismic events, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated with those parameters. The results shown, including the animated data video, constitute additional work towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone. The method is providing clearer evidence that electromagnetic precursors do in effect convey physical and useful information prior to the advent of a seismic event.

  13. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    NASA Astrophysics Data System (ADS)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M = 7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses, and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering the incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, the purchase of larger re-insurance covers and the development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model the losses will not be indemnified, but will be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  14. Rapid acceleration leads to rapid weakening in earthquake-like laboratory experiments

    USGS Publications Warehouse

    Chang, Jefferson C.; Lockner, David A.; Reches, Z.

    2012-01-01

    After nucleation, a large earthquake propagates as an expanding rupture front along a fault. This front activates countless fault patches that slip by consuming energy stored in Earth’s crust. We simulated the slip of a fault patch by rapidly loading an experimental fault with energy stored in a spinning flywheel. The spontaneous evolution of strength, acceleration, and velocity indicates that our experiments are proxies of fault-patch behavior during earthquakes of moment magnitude (Mw) = 4 to 8. We show that seismically determined earthquake parameters (e.g., displacement, velocity, magnitude, or fracture energy) can be used to estimate the intensity of the energy release during an earthquake. Our experiments further indicate that high acceleration imposed by the earthquake’s rupture front quickens dynamic weakening by intense wear of the fault zone.

  15. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    NASA Astrophysics Data System (ADS)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with the tremor rate increasing at an exponential rate with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. This suggests that the probability of a tiny rock failure expanding into a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.

  16. Understanding earthquake from the granular physics point of view — Causes of earthquake, earthquake precursors and predictions

    NASA Astrophysics Data System (ADS)

    Lu, Kunquan; Hou, Meiying; Jiang, Zehui; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    We treat the Earth's crust and mantle as large-scale discrete matter, based on the principles of granular physics and existing experimental observations. The main outcomes are: A granular model of the structure and movement of the Earth's crust and mantle is established. The formation mechanism of the tectonic forces that cause earthquakes, and a model of propagation for precursory information, are proposed. Properties of the seismic precursory information and its relevance to earthquake occurrence are illustrated, and the principle of ways to detect effective seismic precursors is elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena which were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Due to the discrete nature of the Earth's crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.

  17. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.

  18. Redefining Earthquakes and the Earthquake Machine

    ERIC Educational Resources Information Center

    Hubenthal, Michael; Braile, Larry; Taber, John

    2008-01-01

    The Earthquake Machine (EML), a mechanical model of stick-slip fault systems, can increase student engagement and facilitate opportunities to participate in the scientific process. This article introduces the EML model and an activity that challenges ninth-grade students' misconceptions about earthquakes. The activity emphasizes the role of models…

  19. Source parameters for small events associated with the 1986 North Palm Springs, California, earthquake determined using empirical Green functions

    USGS Publications Warehouse

    Mori, J.; Frankel, A.

    1990-01-01

    Using small events as empirical Green functions, source parameters were estimated for 25 ML 3.4 to 4.4 events associated with the 1986 North Palm Springs earthquake. The static stress drops ranged from 3 to 80 bars, for moments of 0.7 to 11 × 10^21 dyne-cm. There was a spatial pattern to the stress drops of the aftershocks, which showed increasing values along the fault plane toward the northwest compared to relatively low values near the hypocenter of the mainshock. The highest values were outside the main area of slip, and are believed to reflect a loaded area of the fault that still has a higher level of stress which was not released during the main shock. -from Authors
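
    Stress drops of the kind quoted above are commonly obtained from the seismic moment and a source radius inferred from an EGF-derived corner frequency. The sketch below uses the standard Brune source radius and Eshelby circular-crack relations as a generic illustration; it is not necessarily the exact procedure of the paper, and the numerical values are placeholders.

import numpy as np

def source_radius_brune(corner_freq_hz, beta_m_s=3500.0):
    """Brune (1970) circular-source radius r = 2.34 * beta / (2 * pi * fc), in meters."""
    return 2.34 * beta_m_s / (2.0 * np.pi * corner_freq_hz)

def static_stress_drop(moment_nm, radius_m):
    """Eshelby circular-crack stress drop: delta_sigma = (7/16) * M0 / r**3, in Pa."""
    return (7.0 / 16.0) * moment_nm / radius_m ** 3

# Illustrative numbers only: an ML ~3.8 event with M0 = 5e14 N·m and fc = 4 Hz
m0, fc = 5.0e14, 4.0
r = source_radius_brune(fc)
print(static_stress_drop(m0, r) / 1e5, "bar")   # 1 bar = 1e5 Pa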

  20. Simulate earthquake cycles on the oceanic transform faults in the framework of rate-and-state friction

    NASA Astrophysics Data System (ADS)

    Wei, M.

    2016-12-01

    Progress towards a quantitative and predictive understanding of earthquake behavior can be achieved by an improved understanding of earthquake cycles. However, this is hindered by the long repeat times (100s to 1000s of years) of the largest earthquakes on most faults. At fast-spreading oceanic transform faults, the typical repeat time ranges from 5 to 20 years, making them a unique tectonic environment for studying the earthquake cycle. One important observation on OTFs is the quasi-periodicity and the spatial-temporal clustering of large earthquakes: the same fault segment ruptures repeatedly at a near-constant interval, and nearby segments rupture during a short time period. This has been observed on the Gofar and Discovery faults on the East Pacific Rise. Between 1992 and 2014, five clusters of M6 earthquakes occurred on the Gofar and Discovery fault system with recurrence intervals of 4-6 years. Each cluster consisted of a westward migration of seismicity from the Discovery to the Gofar segment within a 2-year period, providing strong evidence for spatial-temporal clustering of large OTF earthquakes. I simulated earthquake cycles on an oceanic transform fault in the framework of rate-and-state friction, motivated by the observations at the Gofar and Discovery faults. I focus on a model with two seismic segments, each 20 km long and 5 km wide, separated by an aseismic segment 10 km wide. This geometry is set based on aftershock locations of the 2008 M6.0 earthquake on Gofar. The repeating large earthquakes on both segments are reproduced with magnitudes similar to those observed. I set the state parameter differently for the two seismic segments so that initially they are not synchronized. Results also show that synchronization of the two seismic patches can be achieved after several earthquake cycles when the effective normal stress or the a-b parameter is smaller than in the surrounding aseismic areas, both of which reduce the resistance to seismic rupture in the VS segment. These
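
    A single velocity-weakening patch of the kind simulated above can be caricatured by a quasi-dynamic rate-and-state spring-slider with the aging law. The sketch below integrates that system with SciPy; all parameter values are illustrative and are not the values calibrated for the Gofar/Discovery faults.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (placeholders, not the study's calibration).
sigma = 30e6          # effective normal stress, Pa
a, b = 0.004, 0.008   # rate-and-state parameters (velocity weakening since a - b < 0)
dc = 0.01             # characteristic slip distance, m
mu0, v0 = 0.6, 1e-6   # reference friction and slip rate
vpl = 1.0e-9          # plate loading rate, m/s (~3 cm/yr)
k = 1e7               # spring (fault) stiffness, Pa/m, well below the critical stiffness
eta = 3e6             # radiation-damping coefficient ~ G / (2 * cs), Pa·s/m

def rhs(t, y):
    v, theta = y
    dtheta = 1.0 - v * theta / dc                                      # aging law
    dv = (k * (vpl - v) - sigma * b * dtheta / theta) / (sigma * a / v + eta)
    return [dv, dtheta]

# Start slightly off steady state so the (unstable) fixed point evolves into stick-slip cycles.
sol = solve_ivp(rhs, (0.0, 200 * 3.15e7), [vpl, 0.9 * dc / vpl],
                method="LSODA", rtol=1e-8, atol=1e-12, max_step=3.15e6)
# Peaks in sol.y[0] mark repeated seismic slip events; their spacing is the recurrence interval.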

  1. Coulomb Failure Stress Accumulation in Nepal After the 2015 Mw 7.8 Gorkha Earthquake: Testing Earthquake Triggering Hypothesis and Evaluating Seismic Hazards

    NASA Astrophysics Data System (ADS)

    Xiong, N.; Niu, F.

    2017-12-01

    A Mw 7.8 earthquake struck Gorkha, Nepal, on April 5, 2015, resulting in more than 8000 deaths and 3.5 million homeless. The earthquake initiated 70km west of Kathmandu and propagated eastward, rupturing an area of approximately 150km by 60km in size. However, the earthquake failed to fully rupture the locked fault beneath the Himalaya, suggesting that the region south of Kathmandu and west of the current rupture are still locked and a much more powerful earthquake might occur in future. Therefore, the seismic hazard of the unruptured region is of great concern. In this study, we investigated the Coulomb failure stress (CFS) accumulation on the unruptured fault transferred by the Gorkha earthquake and some nearby historical great earthquakes. First, we calculated the co-seismic CFS changes of the Gorkha earthquake on the nodal planes of 16 large aftershocks to quantitatively examine whether they were brought closer to failure by the mainshock. It is shown that at least 12 of the 16 aftershocks were encouraged by an increase of CFS of 0.1-3 MPa. The correspondence between the distribution of off-fault aftershocks and the increased CFS pattern also validates the applicability of the earthquake triggering hypothesis in the thrust regime of Nepal. With the validation as confidence, we calculated the co-seismic CFS change on the locked region imparted by the Gorkha earthquake and historical great earthquakes. A newly proposed ramp-flat-ramp-flat fault geometry model was employed, and the source parameters of historical earthquakes were computed with the empirical scaling relationship. A broad region south of the Kathmandu and west of the current rupture were shown to be positively stressed with CFS change roughly ranging between 0.01 and 0.5 MPa. The maximum of CFS increase (>1MPa) was found in the updip segment south of the current rupture, implying a high seismic hazard. Since the locked region may be additionally stressed by the post-seismic relaxation of the lower

  2. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis; preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaboration analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database etc.) The dynamical model is described by a set of ordinary differential equations (ODE's) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The Metadata of these records is also stored in the GRID federated database. This Metadata contains both relevant information about the earthquake (as it is usual in a seismic repository) and also the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODE's over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding

  3. OMG Earthquake! Can Twitter improve earthquake response?

    NASA Astrophysics Data System (ADS)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information

  4. Tests of remote aftershock triggering by small mainshocks using Taiwan's earthquake catalog

    NASA Astrophysics Data System (ADS)

    Peng, W.; Toda, S.

    2014-12-01

    To understand earthquake interaction and forecast time-dependent seismic hazard, it is essential to evaluate which stress transfer, static or dynamic, plays a major role to trigger aftershocks and subsequent mainshocks. Felzer and Brodsky focused on small mainshocks (2≤M<3) and their aftershocks, and then argued that only dynamic stress change brings earthquake-to-earthquake triggering, whereas Richards-Dingers et al. (2010) claimed that those selected small mainshock-aftershock pairs were not earthquake-to-earthquake triggering but simultaneous occurrence of independent aftershocks following a larger earthquake or during a significant swarm sequence. We test those hypotheses using Taiwan's earthquake catalog by taking the advantage of lacking any larger event and the absence of significant seismic swarm typically seen with active volcano. Using Felzer and Brodsky's method and their standard parameters, we only found 14 mainshock-aftershock pairs occurred within 20 km distance in Taiwan's catalog from 1994 to 2010. Although Taiwan's catalog has similar number of earthquakes as California's, the number of pairs is about 10% of the California catalog. It may indicate the effect of no large earthquakes and no significant seismic swarm in the catalog. To fully understand the properties in the Taiwan's catalog, we loosened the screening parameters to earn more pairs and then found a linear aftershock density with a power law decay of -1.12±0.38 that is very similar to the one in Felzer and Brodsky. However, none of those mainshock-aftershock pairs were associated with a M7 rupture event or M6 events. To find what mechanism controlled the aftershock density triggered by small mainshocks in Taiwan, we randomized earthquake magnitude and location. We then found that those density decay in a short time period is more like a randomized behavior than mainshock-aftershock triggering. Moreover, 5 out of 6 pairs were found in a swarm-like temporal seismicity rate increase

  5. Synthetic Earthquake Statistics From Physical Fault Models for the Lower Rhine Embayment

    NASA Astrophysics Data System (ADS)

    Brietzke, G. B.; Hainzl, S.; Zöller, G.

    2012-04-01

    As of today, seismic risk and hazard estimates mostly use pure empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates they fail to provide a link between the observed seismicity and the underlying physical processes. Solving a state-of-the-art fully dynamic description set of all relevant physical processes related to earthquake fault systems is likely not useful since it comes with a large number of degrees of freedom, poor constraints on its model parameters and a huge computational effort. Here, quasi-static and quasi-dynamic physical fault simulators provide a compromise between physical completeness and computational affordability and aim at providing a link between basic physical concepts and statistics of seismicity. Within the framework of quasi-static and quasi-dynamic earthquake simulators we investigate a model of the Lower Rhine Embayment (LRE) that is based upon seismological and geological data. We present and discuss statistics of the spatio-temporal behavior of generated synthetic earthquake catalogs with respect to simplification (e.g. simple two-fault cases) as well as to complication (e.g. hidden faults, geometric complexity, heterogeneities of constitutive parameters).

  6. Teleseismically recorded seismicity before and after the May 7, 1986, Andreanof Islands, Alaska, earthquake

    USGS Publications Warehouse

    Engdahl, E.R.; Billington, S.; Kisslinger, C.

    1989-01-01

    The Andreanof Islands earthquake (Mw 8.0) is the largest event to have occurred in that section of the Aleutian arc since the March 9, 1957, Aleutian Islands earthquake (Mw 8.6). Teleseismically well-recorded earthquakes in the region of the 1986 earthquake are relocated with a plate model and with careful attention to the focal depths. The data set is nearly complete for mb???4.7 between longitudes 172??W and 179??W for the period 1964 through April 1987 and provides a detailed description of the space-time history of moderate-size earthquakes in the region for that period. Additional insight is provided by source parameters which have been systematically determined for Mw???5 earthquakes that occurred in the region since 1977 and by a modeling study of the spatial distribution of moment release on the mainshock fault plane. -from Authors

  7. Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi

    2018-05-01

    Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.

  8. Sandpile-based model for capturing magnitude distributions and spatiotemporal clustering and separation in regional earthquakes

    NASA Astrophysics Data System (ADS)

    Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.

    2017-04-01

    We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1. 6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which is not observed previously for the original sandpile. For this critical range of values for the probability, model statistics show remarkable comparison with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is the fact that it simultaneously captures the energy, space, and time statistics of earthquakes by just introducing a single parameter, while introducing minimal parameters in the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.

  9. Multi-Sensor Observations of Earthquake Related Atmospheric Signals over Major Geohazard Validation Sites

    NASA Technical Reports Server (NTRS)

    Ouzounov, D.; Pulinets, S.; Davindenko, D.; Hattori, K.; Kafatos, M.; Taylor, P.

    2012-01-01

    We are conducting a scientific validation study involving multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several atmospheric and environmental parameters, which we found, are associated with the earthquakes, namely: thermal infrared radiation, outgoing long-wavelength radiation, ionospheric electron density, and atmospheric temperature and humidity. For first time we applied this approach to selected GEOSS sites prone to earthquakes or volcanoes. This provides a new opportunity to cross validate our results with the dense networks of in-situ and space measurements. We investigated two different seismic aspects, first the sites with recent large earthquakes, viz.- Tohoku-oki (M9, 2011, Japan) and Emilia region (M5.9, 2012,N. Italy). Our retrospective analysis of satellite data has shown the presence of anomalies in the atmosphere. Second, we did a retrospective analysis to check the re-occurrence of similar anomalous behavior in atmosphere/ionosphere over three regions with distinct geological settings and high seismicity: Taiwan, Japan and Kamchatka, which include 40 major earthquakes (M>5.9) for the period of 2005-2009. We found anomalous behavior before all of these events with no false negatives; false positives were less then 10%. Our initial results suggest that multi-instrument space-borne and ground observations show a systematic appearance of atmospheric anomalies near the epicentral area that could be explained by a coupling between the observed physical parameters and earthquake preparation processes.

  10. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.

  11. A New Network-Based Approach for the Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.

    2017-12-01

    Here we propose a new method which allows for issuing an early warning based upon the real-time mapping of the Potential Damage Zone (PDZ), e.g. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels with no assumption about the earthquake rupture extent and spatial variability of ground motion. The system includes the techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimate of these parameters at stations around the source allow to predict the geometry and extent of PDZ, but also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitude at a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending of the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by a retrospective analysis of three earthquakes: 2016 Italy 6.5 Mw, 2008 Iwate-Miyagi 6.9 Mw and 2011 Tohoku 9.0 Mw. Source parameters characterization are stable and reliable, also the intensity map shows extended source effects consistent with kinematic fracture models of

  12. The August 2011 Virginia and Colorado Earthquake Sequences: Does Stress Drop Depend on Strain Rate?

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.; Viegas, G.

    2011-12-01

    Our preliminary analysis of the August 2011 Virginia earthquake sequence finds the earthquakes to have high stress drops, similar to those of recent earthquakes in NE USA, while those of the August 2011 Trinidad, Colorado, earthquakes are moderate - in between those typical of interplate (California) and the east coast. These earthquakes provide an unprecedented opportunity to study such source differences in detail, and hence improve our estimates of seismic hazard. Previously, the lack of well-recorded earthquakes in the eastern USA severely limited our resolution of the source processes and hence the expected ground accelerations. Our preliminary findings are consistent with the idea that earthquake faults strengthen during longer recurrence times and intraplate faults fail at higher stress (and produce higher ground accelerations) than their interplate counterparts. We use the empirical Green's function (EGF) method to calculate source parameters for the Virginia mainshock and three larger aftershocks, and for the Trinidad mainshock and two larger foreshocks using IRIS-available stations. We select time windows around the direct P and S waves at the closest stations and calculate spectral ratios and source time functions using the multi-taper spectral approach (eg. Viegas et al., JGR 2010). Our preliminary results show that the Virginia sequence has high stress drops (~100-200 MPa, using Madariaga (1976) model), and the Colorado sequence has moderate stress drops (~20 MPa). These numbers are consistent with previous work in the regions, for example the Au Sable Forks (2002) earthquake, and the 2010 Germantown (MD) earthquake. We also calculate the radiated seismic energy and find the energy/moment ratio to be high for the Virginia earthquakes, and moderate for the Colorado sequence. We observe no evidence of a breakdown in constant stress drop scaling in this limited number of earthquakes. We extend our analysis to a larger number of earthquakes and stations

  13. Using structures of the August 24, 2016 Amatrice earthquake affected area as seismoscopes for assessing ground motion characteristics and parameters of the main shock and its largest aftershocks

    NASA Astrophysics Data System (ADS)

    Carydis, Panayotis; Lekkas, Efthymios; Mavroulis, Spyridon

    2017-04-01

    induced by the studied earthquakes indicated the predominant effect of the vertical ground motion on buildings based on already reported building damage induced by recent destructive events in the Mediterranean region, (c) the conventional dynamic parameters of buildings did not play a significant role in their seismic response against the vertical component, due to its impact type of loading, (d) structures and materials presented similar response to ground motions almost independent from type and existing quality, and carried memories from previous large shocks of this sequence, (e) the main shock and its largest aftershocks caused building damage including spatial homothetic motions that reached statistically significant levels, it is concluded that the main shock and its largest aftershocks had similar focal mechanism parameters (normal faulting), were shallow events and were near-field earthquakes with short duration but high amplitude and the vertical component of the earthquakes' ground motion has prevailed. The aforementioned approach based solely on macroseismic observations was applied in the case of the 1755 Great Lisbon earthquake in order to determine its mechanism and epicenter location. Thus, it is suggested that the aforementioned methodology can be applied either in past historic earthquakes or complementarily in cases when the available seismological data are insufficient.

  14. Twitter earthquake detection: Earthquake monitoring in a social world

    USGS Publications Warehouse

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) with in tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.

  15. Earthquake source parameters determined using the SAFOD Pilot Hole vertical seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.; Prejean, S. G.

    2003-12-01

    model of Sato and Hirasawa (1973). Estimated values of static stress drop were roughly 1 MPa and do not vary with seismic moment. Q values from all earthquakes were averaged at each level of the array. Average Qp and Qs range from 250 to 350 and from 300 to 400 between the top and bottom of the array, respectively. Increasing Q values as a function of depth explain well the observed decrease in high-frequency content as waves propagate toward the surface. Thus, by jointly analyzing the entire vertical array we can both accurately determine source parameters of microearthquakes and make reliable Q estimates while suppressing the trade-off between fc and Q.

  16. Earthquake Loss Scenarios: Warnings about the Extent of Disasters

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Tolis, S.; Rosset, P.

    2016-12-01

    It is imperative that losses expected due to future earthquakes be estimated. Officials and the public need to be aware of what disaster is likely in store for them in order to reduce the fatalities and efficiently help the injured. Scenarios for earthquake parameters can be constructed to a reasonable accuracy in highly active earthquake belts, based on knowledge of seismotectonics and history. Because of the inherent uncertainties of loss estimates however, it would be desirable that more than one group calculate an estimate for the same area. By discussing these estimates, one may find a consensus of the range of the potential disasters and persuade officials and residents of the reality of the earthquake threat. To model a scenario and estimate earthquake losses requires data sets that are sufficiently accurate of the number of people present, the built environment, and if possible the transmission of seismic waves. As examples we use loss estimates for possible repeats of historic earthquakes in Greece that occurred between -464 and 700. We model future large Greek earthquakes as having M6.8 and rupture lengths of 60 km. In four locations where historic earthquakes with serious losses have occurred, we estimate that 1,000 to 1,500 people might perish, with an additional factor of four people injured. Defining the area of influence of these earthquakes as that with shaking intensities larger and equal to V, we estimate that 1.0 to 2.2 million people in about 2,000 settlements may be affected. We calibrate the QLARM tool for calculating intensities and losses in Greece, using the M6, 1999 Athens earthquake and matching the isoseismal information for six earthquakes, which occurred in Greece during the last 140 years. Comparing fatality numbers that would occur theoretically today with the numbers reported, and correcting for the increase in population, we estimate that the improvement of the building stock has reduced the mortality and injury rate in Greek

  17. Computer simulation of earthquakes

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1976-01-01

    Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. In the first model elastic forces were employed and time independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeniety in the elastic and friction parameters of the fault region. Periodically reoccurring similar events were frequently observed in simulations with near homogeneous parameters along the fault, whereas, seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time and the aftershock region was confined to that which moved in the main event.

  18. Prompt identification of tsunamigenic earthquakes from 3-component seismic data

    NASA Astrophysics Data System (ADS)

    Kundu, Ajit; Bhadauria, Y. S.; Basu, S.; Mukhopadhyay, S.

    2016-10-01

    An Artificial Neural Network (ANN) based algorithm for prompt identification of shallow focus (depth < 70 km) tsunamigenic earthquakes at a regional distance is proposed in the paper. The promptness here refers to decision making as fast as 5 min after the arrival of LR phase in the seismogram. The root mean square amplitudes of seismic phases recorded by a single 3-component station have been considered as inputs besides location and magnitude. The trained ANN has been found to categorize 100% of the new earthquakes successfully as tsunamigenic or non-tsunamigenic. The proposed method has been corroborated by an alternate mapping technique of earthquake category estimation. The second method involves computation of focal parameters, estimation of water volume displaced at the source and eventually deciding category of the earthquake. The method has been found to identify 95% of the new earthquakes successfully. Both the methods have been tested using three component broad band seismic data recorded at PALK (Pallekele, Sri Lanka) station provided by IRIS for earthquakes originating from Sumatra region of magnitude 6 and above. The fair agreement between the methods ensures that a prompt alert system could be developed based on proposed method. The method would prove to be extremely useful for the regions that are not adequately instrumented for azimuthal coverage.

  19. Long-term predictability of regions and dates of strong earthquakes

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that dates of earthquakes with M>5.5 could be determined in advance of several months before the event. The magnitude and the region of approaching earthquake could be specified in the time-frame of a month before the event. Determination of number of M6+ earthquakes, which are expected to occur during the analyzed year, is performed using the special sequence diagram of seismic activity for the century time frame. Date analysis could be performed with advance of 15-20 years. Data is verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having a different prediction horizon. Determination of days of potential earthquakes with M5.5+ is performed using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line). At that, the strongest earthquakes occur under the location of vector "Sun-Solar System barycenter" in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but it's practical significant is confirmed by practice. Another one empirical indicator of approaching earthquake M6+ is a synchronous variation of meteorological parameters: abrupt decreasing of minimal daily temperature, increasing of relative humidity, abrupt change of atmospheric pressure (RAMES method). Time difference of predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it was called as Harmonic 104 or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period so this fact gives insight on the correlation between the anomalies of Earth orientation

  20. Crustal earthquake triggering by pre-historic great earthquakes on subduction zone thrusts

    USGS Publications Warehouse

    Sherrod, Brian; Gomberg, Joan

    2014-01-01

    Triggering of earthquakes on upper plate faults during and shortly after recent great (M>8.0) subduction thrust earthquakes raises concerns about earthquake triggering following Cascadia subduction zone earthquakes. Of particular regard to Cascadia was the previously noted, but only qualitatively identified, clustering of M>~6.5 crustal earthquakes in the Puget Sound region between about 1200–900 cal yr B.P. and the possibility that this was triggered by a great Cascadia thrust subduction thrust earthquake, and therefore portends future such clusters. We confirm quantitatively the extraordinary nature of the Puget Sound region crustal earthquake clustering between 1200–900 cal yr B.P., at least over the last 16,000. We conclude that this cluster was not triggered by the penultimate, and possibly full-margin, great Cascadia subduction thrust earthquake. However, we also show that the paleoseismic record for Cascadia is consistent with conclusions of our companion study of the global modern record outside Cascadia, that M>8.6 subduction thrust events have a high probability of triggering at least one or more M>~6.5 crustal earthquakes.

  1. Hazard assessment of long-period ground motions for the Nankai Trough earthquakes

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.

    2013-12-01

    We evaluate a seismic hazard for long-period ground motions associated with the Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damages due to strong ground motions and tsunami; most recent events were in 1944 and 1946. Such large interplate earthquake potentially causes damages to high-rise and large-scale structures due to long-period ground motions (e.g., 1985 Michoacan earthquake in Mexico, 2003 Tokachi-oki earthquake in Japan). The long-period ground motions are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is therefore important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and the 3-D underground structure model. The 'characterized source model' refers to a source model including the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) giving the various case of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined by 'the long-term evaluation of earthquakes in the Nankai Trough' published by ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by the rupture directivity. We apply the system called GMS (Ground Motion Simulator) for simulating the seismic wave propagation based on 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999) to our study. The grid spacing for the shallow region is 200 m and

  2. A Poisson method application to the assessment of the earthquake hazard in the North Anatolian Fault Zone, Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Türker, Tuğba, E-mail: tturker@ktu.edu.tr; Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr

    North Anatolian Fault (NAF) is one from the most important strike-slip fault zones in the world and located among regions in the highest seismic activity. The NAFZ observed very large earthquakes from the past to present. The aim of this study; the important parameters of Gutenberg-Richter relationship (a and b values) estimated and this parameters taking into account, earthquakes were examined in the between years 1900-2015 for 10 different seismic source regions in the NAFZ. After that estimated occurrence probabilities and return periods of occurring earthquakes in fault zone in the next years, and is being assessed with Poisson methodmore » the earthquake hazard of the NAFZ. The Region 2 were observed the largest earthquakes for the only historical period and hasn’t been observed large earthquake for the instrumental period in this region. Two historical earthquakes (1766, M{sub S}=7.3 and 1897, M{sub S}=7.0) are included for Region 2 (Marmara Region) where a large earthquake is expected in the next years. The 10 different seismic source regions are determined the relationships between the cumulative number-magnitude which estimated a and b parameters with the equation of LogN=a-bM in the Gutenberg-Richter. A homogenous earthquake catalog for M{sub S} magnitude which is equal or larger than 4.0 is used for the time period between 1900 and 2015. The database of catalog used in the study has been created from International Seismological Center (ISC) and Boğazici University Kandilli observation and earthquake research institute (KOERI). The earthquake data were obtained until from 1900 to 1974 from KOERI and ISC until from 1974 to 2015 from KOERI. The probabilities of the earthquake occurring are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in the 10 different seismic source regions. The highest earthquake occur probabilities in 10 different seismic source regions in the next years estimated that the region Tokat-Erzincan (Region 9

  3. Crustal deformation in Great California Earthquake cycles

    NASA Technical Reports Server (NTRS)

    Li, Victor C.; Rice, James R.

    1987-01-01

    A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.

  4. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ('seismicity-based models") to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1o. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests. We show how the Reliability and

  5. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2016-05-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing to compare historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage"; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In this case and in unpopulated areas, ESI offers a unique way for assessing a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  6. Seismogeodesy and Rapid Earthquake and Tsunami Source Assessment

    NASA Astrophysics Data System (ADS)

    Melgar Moctezuma, Diego

    This dissertation presents an optimal combination algorithm for strong motion seismograms and regional high rate GPS recordings. This seismogeodetic solution produces estimates of ground motion that recover the whole seismic spectrum, from the permanent deformation to the Nyquist frequency of the accelerometer. This algorithm will be demonstrated and evaluated through outdoor shake table tests and recordings of large earthquakes, notably the 2010 Mw 7.2 El Mayor-Cucapah earthquake and the 2011 Mw 9.0 Tohoku-oki events. This dissertations will also show that strong motion velocity and displacement data obtained from the seismogeodetic solution can be instrumental to quickly determine basic parameters of the earthquake source. We will show how GPS and seismogeodetic data can produce rapid estimates of centroid moment tensors, static slip inversions, and most importantly, kinematic slip inversions. Throughout the dissertation special emphasis will be placed on how to compute these source models with minimal interaction from a network operator. Finally we will show that the incorporation of off-shore data such as ocean-bottom pressure and RTK-GPS buoys can better-constrain the shallow slip of large subduction events. We will demonstrate through numerical simulations of tsunami propagation that the earthquake sources derived from the seismogeodetic and ocean-based sensors is detailed enough to provide a timely and accurate assessment of expected tsunami intensity immediately following a large earthquake.

  7. Monitoring the Dead Sea Region by Multi-Parameter Stations

    NASA Astrophysics Data System (ADS)

    Mohsen, A.; Weber, M. H.; Kottmeier, C.; Asch, G.

    2015-12-01

    The Dead Sea Region is an exceptional ecosystem whose seismic activity has influenced all facets of the development, from ground water availability to human evolution. Israelis, Palestinians and Jordanians living in the Dead Sea region are exposed to severe earthquake hazard. Repeatedly large earthquakes (e.g. 1927, magnitude 6.0; (Ambraseys, 2009)) shook the whole Dead Sea region proving that earthquake hazard knows no borders and damaging seismic events can strike anytime. Combined with the high vulnerability of cities in the region and with the enormous concentration of historical values this natural hazard results in an extreme earthquake risk. Thus, an integration of earthquake parameters at all scales (size and time) and their combination with data of infrastructure are needed with the specific aim of providing a state-of-the-art seismic hazard assessment for the Dead Sea region as well as a first quantitative estimate of vulnerability and risk. A strong motivation for our research is the lack of reliable multi-parameter ground-based geophysical information on earthquakes in the Dead Sea region. The proposed set up of a number of observatories with on-line data access will enable to derive the present-day seismicity and deformation pattern in the Dead Sea region. The first multi-parameter stations were installed in Jordan, Israel and Palestine for long-time monitoring. All partners will jointly use these locations. All stations will have an open data policy, with the Deutsches GeoForschungsZentrum (GFZ, Potsdam, Germany) providing the hard and software for real-time data transmission via satellite to Germany, where all partners can access the data via standard data protocols.

  8. Evaluation of the real-time earthquake information system in Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, Hiromitsu; Horiuchi, Shigeki; Wu, Changjiang; Yamamoto, Shunroku; Rydelek, Paul A.

    2009-01-01

    The real-time earthquake information system (REIS) of the Japanese seismic network is developed for automatically determining earthquake parameters within a few seconds after the P-waves arrive at the closest stations using both the P-wave arrival times and the timing data that P-waves have not yet arrived at other stations. REIS results play a fundamental role in the real-time information for earthquake early warning in Japan. We show the rapidity and accuracy of REIS from the analysis of 4,050 earthquakes in three years since 2005; 44 percent of the first reports are issued within 5 seconds after the first P-wave arrival and 80 percent of the events have a difference in epicenter distance less than 20 km relative to manually determined locations. We compared the formal catalog to the estimated magnitude from the real-time analysis and found that 94 percent of the events had a magnitude difference of +/-1.0 unit.

  9. Impact of earthquakes on sex ratio at birth: Eastern Marmara earthquakes

    PubMed Central

    Doğer, Emek; Çakıroğlu, Yiğit; Köpük, Şule Yıldırım; Ceylan, Yasin; Şimşek, Hayal Uzelli; Çalışkan, Eray

    2013-01-01

    Objective: Previous reports suggest that maternal exposure to acute stress related to earthquakes affects the sex ratio at birth. Our aim was to examine the change in sex ratio at birth after Eastern Marmara earthquake disasters. Material and Methods: This study was performed using the official birth statistics from January 1997 to December 2002 – before and after 17 August 1999, the date of the Golcuk Earthquake – supplied from the Turkey Statistics Institute. The secondary sex ratio was expressed as the male proportion at birth, and the ratio of both affected and unaffected areas were calculated and compared on a monthly basis using data from gender with using the Chi-square test. Results: We observed significant decreases in the secondary sex ratio in the 4th and 8th months following an earthquake in the affected region compared to the unaffected region (p= 0.001 and p= 0.024). In the earthquake region, the decrease observed in the secondary sex ratio during the 8th month after an earthquake was specific to the period after the earthquake. Conclusion: Our study indicated a significant reduction in the secondary sex ratio after an earthquake. With these findings, events that cause sudden intense stress such as earthquakes can have an effect on the sex ratio at birth. PMID:24592082

  10. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warnings statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence.more » A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster some time in June 1979. An extremely dense observation network has been constructed over the area.« less

  11. Seismological investigation of earthquakes in the New Madrid Seismic Zone. Final report, September 1986--December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrmann, R.B.; Nguyen, B.

    Earthquake activity in the New Madrid Seismic Zone had been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35{degrees}--39{degrees}N and longitudes 87{degrees}--92{degrees}W. Most of these earthquakes occur within a 1.5{degrees} x 2{degrees} zone centered on the Missouri Bootheel. Source parameters of larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth and seismic moment. Waveform modeling of broadband data is shown to be a powerful toolmore » in defining these source parameters when used complementary with regional seismic network data, and in addition, in verifying the correctness of previously published focal mechanism solutions.« less

  12. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    USGS Publications Warehouse

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes its less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.

  13. From integrated observation of pre-earthquake signals towards physical-based forecasting: A prospective test experiment

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Tramutoli, V.; Lee, L.; Liu, J. G.; Hattori, K.; Kafatos, M.

    2013-12-01

    We are conducting an integrated study involving multi-parameter observations over different seismo- tectonics regions in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several selected parameters namely: gas discharge; thermal infrared radiation; ionospheric electron concentration; and atmospheric temperature and humidity, which we suppose are associated with earthquake preparation phase. We intended to test in prospective mode the set of geophysical measurements for different regions of active earthquakes and volcanoes. In 2012-13 we established a collaborative framework with the leading projects PRE-EARTHQUAKE (EU) and iSTEP3 (Taiwan) for coordinate measurements and prospective validation over seven test regions: Southern California (USA), Eastern Honshu (Japan), Italy, Turkey, Greece, Taiwan (ROC), Kamchatka and Sakhalin (Russia). The current experiment provided a 'stress test' opportunity to validate the physical based approach in teal -time over regions of high seismicity. Our initial results are: (1) Prospective tests have shown the presence in real time of anomalies in the atmosphere before most of the significant (M>5.5) earthquakes in all regions; (2) False positive rate alarm is different for each region and varying between 50% (Italy, Kamchatka and California) to 25% (Taiwan and Japan) with a significant reduction of false positives when at least two parameters are contemporary used; (3) One of most complex problem, which is still open, was the systematic collection and real-time integration of pre-earthquake observations. Our findings suggest that the physical based short-term forecast is feasible and more tests are needed. We discus the physical concept we used, the future integration of data observations and related developments.

  14. Sun-earth environment study to understand earthquake prediction

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.

    2007-05-01

    Earthquake prediction is possible by looking into the location of active sunspots before it harbours energy towards earth. Earth is a restless planet the restlessness turns deadly occasionally. Of all natural hazards, earthquakes are the most feared. For centuries scientists working in seismically active regions have noted premonitory signals. Changes in thermosphere, Ionosphere, atmosphere and hydrosphere are noted before the changes in geosphere. The historical records talk of changes of the water level in wells, of strange weather, of ground-hugging fog, of unusual behaviour of animals (due to change in magnetic field of the earth) that seem to feel the approach of a major earthquake. With the advent of modern science and technology the understanding of these pre-earthquake signals has become stronger enough to develop a methodology of earthquake prediction. A correlation of earth directed coronal mass ejection (CME) from the active sunspots has been possible to develop as a precursor of the earthquake. Occasional local magnetic field and planetary indices (Kp values) changes in the lower atmosphere that is accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90 - 120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves and a radio black out has been observed due to CME. Another heliophysical parameter Electron flux (Eflux) has been monitored before the occurrence of the earthquakes. More than hundreds of case studies show that before the occurrence of the earthquakes the atmospheric temperature increases and suddenly drops before the occurrence of the earthquakes. These changes are being monitored by using Sun Observatory Heliospheric observatory

  15. Results of meteorological monitoring in Gorny Altai before and after the Chuya earthquake in 2003

    NASA Astrophysics Data System (ADS)

    Aptikaeva, O. I.; Shitov, A. V.

    2014-12-01

    We consider the dynamics of some meteorological parameters in Gorny Altai from 2000 to 2011. We analyzed the variations in the meteorological parameters related to the strong Chuya earthquake (September 27, 2003). A number of anomalies were revealed in the time series. Before this strong earthquake, winter temperatures at the meteorological station nearest to the earthquake source increased by 8-10°C (returning to the mean values by 2009), while the winter air humidity decreased. In the winter of 2002, we observed a long negative anomaly in the time series of the atmospheric pressure. At the same time, the decrease in the released seismic energy was replaced by a tendency towards its increase. Using wavelet analysis, we revealed synchronism in the dynamics of the atmospheric parameters, variations in solar and geomagnetic activity, and geodynamic processes. We also discuss the relationship between the atmospheric and geodynamic processes and the living comfort of the population under the climate conditions analyzed here.

  16. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part B, historical earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.

  17. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a
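
    The stochastic, self-affine slip model described above can be illustrated with a short sketch: filter white noise in the wavenumber domain so that the slip spectrum falls off roughly as k^-2, then rescale the field to a target seismic moment. This is a minimal, assumed implementation for illustration, not the code used in the study; the fault dimensions, moment and rigidity below are arbitrary.

    ```python
    # Minimal sketch (assumed, not the study's code): self-affine random slip
    # generated by shaping white noise with a ~k^-2 amplitude spectrum, then
    # rescaled so that mu * area * mean(slip) equals a target seismic moment.
    import numpy as np

    def stochastic_slip(nx=64, nz=32, length_km=100.0, width_km=50.0,
                        m0=5.0e20, mu=3.0e10, seed=0):
        """Return a (nz, nx) slip array [m] consistent with moment m0 [N m]."""
        rng = np.random.default_rng(seed)
        dx, dz = length_km / nx, width_km / nz
        kx = np.fft.fftfreq(nx, d=dx)                      # wavenumbers along strike, 1/km
        kz = np.fft.fftfreq(nz, d=dz)                      # wavenumbers down dip, 1/km
        KX, KZ = np.meshgrid(kx, kz)
        k = np.sqrt(KX**2 + KZ**2)
        amp = 1.0 / (1.0 + (k * length_km)**2)             # ~k^-2 falloff beyond the corner
        phase = np.exp(2j * np.pi * rng.random((nz, nx)))  # random phases
        slip = np.real(np.fft.ifft2(amp * phase))          # real part: random slip field
        slip -= slip.min()                                 # keep slip non-negative
        area_m2 = (length_km * 1e3) * (width_km * 1e3)
        slip *= m0 / (mu * slip.mean() * area_m2)          # rescale to the target moment
        return slip

    slip = stochastic_slip()
    print(f"mean slip {slip.mean():.2f} m, max slip {slip.max():.2f} m")
    ```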

  18. Earthquake and tsunami forecasts: Relation of slow slip events to subsequent earthquake rupture

    PubMed Central

    Dixon, Timothy H.; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-01-01

    The 5 September 2012 Mw 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr–Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential. PMID:25404327

  19. Earthquake and tsunami forecasts: relation of slow slip events to subsequent earthquake rupture.

    PubMed

    Dixon, Timothy H; Jiang, Yan; Malservisi, Rocco; McCaffrey, Robert; Voss, Nicholas; Protti, Marino; Gonzalez, Victor

    2014-12-02

    The 5 September 2012 M(w) 7.6 earthquake on the Costa Rica subduction plate boundary followed a 62-y interseismic period. High-precision GPS recorded numerous slow slip events (SSEs) in the decade leading up to the earthquake, both up-dip and down-dip of seismic rupture. Deeper SSEs were larger than shallower ones and, if characteristic of the interseismic period, release most locking down-dip of the earthquake, limiting down-dip rupture and earthquake magnitude. Shallower SSEs were smaller, accounting for some but not all interseismic locking. One SSE occurred several months before the earthquake, but changes in Mohr-Coulomb failure stress were probably too small to trigger the earthquake. Because many SSEs have occurred without subsequent rupture, their individual predictive value is limited, but taken together they released a significant amount of accumulated interseismic strain before the earthquake, effectively defining the area of subsequent seismic rupture (rupture did not occur where slow slip was common). Because earthquake magnitude depends on rupture area, this has important implications for earthquake hazard assessment. Specifically, if this behavior is representative of future earthquake cycles and other subduction zones, it implies that monitoring SSEs, including shallow up-dip events that lie offshore, could lead to accurate forecasts of earthquake magnitude and tsunami potential.

  20. Understanding earthquake hazards in urban areas - Evansville Area Earthquake Hazards Mapping Project

    USGS Publications Warehouse

    Boyd, Oliver S.

    2012-01-01

    The region surrounding Evansville, Indiana, has experienced minor damage from earthquakes several times in the past 200 years. Because of this history and the proximity of Evansville to the Wabash Valley and New Madrid seismic zones, there is concern among nearby communities about hazards from earthquakes. Earthquakes currently cannot be predicted, but scientists can estimate how strongly the ground is likely to shake as a result of an earthquake and are able to design structures to withstand this estimated ground shaking. Earthquake-hazard maps provide one way of conveying such information and can help the region of Evansville prepare for future earthquakes and reduce earthquake-caused loss of life and financial and structural loss. The Evansville Area Earthquake Hazards Mapping Project (EAEHMP) has produced three types of hazard maps for the Evansville area: (1) probabilistic seismic-hazard maps show the ground motion that is expected to be exceeded with a given probability within a given period of time; (2) scenario ground-shaking maps show the expected shaking from two specific scenario earthquakes; (3) liquefaction-potential maps show how likely the strong ground shaking from the scenario earthquakes is to produce liquefaction. These maps complement the U.S. Geological Survey's National Seismic Hazard Maps but are more detailed regionally and take into account surficial geology, soil thickness, and soil stiffness; these elements greatly affect ground shaking.

  1. Earthquake forecasting studies using radon time series data in Taiwan

    NASA Astrophysics Data System (ADS)

    Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong

    2017-04-01

    For a few decades, a growing number of studies have shown the usefulness of seismo-geochemical data interpreted as geochemical precursory signals for impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous radon time series data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate the data with our working procedure, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP) to create a website that effectively displays and helps us manage the real-time database.
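
    A first step in screening such radon time series for precursory signals is to flag departures from a rolling baseline before testing them against environmental factors. The sketch below is a hypothetical Python illustration of that screening step (the group's actual system uses R with an Apache/MySQL/PHP stack); the synthetic data, 30-day window and 2-sigma threshold are assumptions.

    ```python
    # Minimal sketch (assumed, not the group's R/MySQL pipeline): flag radon
    # readings that deviate from a 30-day rolling baseline by more than two
    # standard deviations, a simple first-pass anomaly screen.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    t = pd.date_range("2016-01-01", periods=24 * 90, freq=pd.Timedelta(hours=1))
    radon = pd.Series(50 + 5 * rng.standard_normal(len(t)), index=t)  # synthetic hourly data
    radon.iloc[1000:1010] += 40                                       # injected synthetic anomaly

    baseline = radon.rolling("30D").mean()
    sigma = radon.rolling("30D").std()
    anomalous = (radon - baseline).abs() > 2 * sigma

    print(anomalous.sum(), "hours flagged as anomalous")
    ```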

  2. The 23 April 1909 Benavente earthquake (Portugal): macroseismic field revision

    NASA Astrophysics Data System (ADS)

    Teves-Costa, Paula; Batlló, Josep

    2011-01-01

    The 23 April 1909 earthquake, with epicentre near Benavente (Portugal), was the largest crustal earthquake in the Iberian Peninsula during the twentieth century (Mw = 6.0). Due to its importance, several studies were developed soon after its occurrence, in Portugal and in Spain. A perusal of the different studies on the macroseismic field of this earthquake showed some discrepancies, in particular on the abnormal patterns of the isoseismal curves in Spain. Besides, a complete list of intensity data points for the event is unavailable at present. Seismic moment, focal mechanism and other earthquake parameters obtained from the instrumental records have been recently reviewed and recalculated. Revision of the macroseismic field of this earthquake poses a unique opportunity to study macroseismic propagation and local effects in the central Iberian Peninsula. For these reasons, a search to collect new macroseismic data for this earthquake has been carried out, and a re-evaluation of the whole dataset has been performed and is presented here. Special attention is paid to the observed low attenuation of the macroseismic effects, heterogeneous propagation and the distortion introduced by local amplifications. Results of this study indicate, in general, an overestimation of the intensity degrees previously assigned to this earthquake in Spain; they also illustrate how difficult it is to assign an intensity degree to a large town, where local effects play an important role, and confirm the low attenuation of seismic propagation inside the Iberian Peninsula from west and southwest to east and northeast.

  3. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
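
    Each MHS sub-event is a Haskell source: uniform slip with constant unilateral rupture velocity. To first order, its far-field apparent source time function is the convolution of two boxcars, one of width equal to the rise time and one of width equal to the directivity-dependent apparent rupture duration. The sketch below illustrates this with assumed fault length, velocities and rise time; it is not the authors' inversion code.

    ```python
    # Minimal sketch (assumed parameters): far-field apparent source time function
    # of a Haskell source as the convolution of two unit-area boxcars, one for the
    # rise time and one for the apparent rupture duration
    #   T_app = L/v_r - (L/c) * cos(theta),
    # where theta is the angle between the rupture direction and the ray to the station.
    import numpy as np

    def haskell_astf(L_km=40.0, vr_kms=2.8, c_kms=3.5, theta_deg=30.0,
                     rise_time_s=2.0, dt=0.05):
        t_app = L_km / vr_kms - (L_km / c_kms) * np.cos(np.radians(theta_deg))
        n_rup = max(1, int(round(t_app / dt)))
        n_rise = max(1, int(round(rise_time_s / dt)))
        box_rup = np.ones(n_rup) / n_rup            # unit-area boxcar (rupture duration)
        box_rise = np.ones(n_rise) / n_rise         # unit-area boxcar (rise time)
        stf = np.convolve(box_rup, box_rise) / dt   # normalized so the time integral is 1
        return np.arange(len(stf)) * dt, stf

    t_fwd, stf_fwd = haskell_astf(theta_deg=0.0)     # forward directivity: short, impulsive
    t_bwd, stf_bwd = haskell_astf(theta_deg=180.0)   # backward directivity: long, subdued
    print(f"forward duration ~{t_fwd[-1]:.1f} s, backward ~{t_bwd[-1]:.1f} s")
    ```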

  4. Near Real-Time Earthquake Exposure and Damage Assessment: An Example from Turkey

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Çomoǧlu, Mustafa; Erdik, Mustafa

    2014-05-01

    Bounded by the infamous strike-slip North Anatolian Fault to the north and by the Hellenic subduction trench to the south, Turkey is one of the most seismically active countries in Europe. Due to this exposure and the fragility of the building stock, Turkey is among the countries most exposed to earthquake hazard in terms of mortality and economic losses. In this study we focus on recent and ongoing efforts to mitigate earthquake risk in near real time. We present actual results for recent earthquakes, such as the M6 event offshore Antalya which occurred on 28 December 2013. Starting at the moment of detection, we obtain a preliminary ground motion intensity distribution based on epicenter and magnitude. Our real-time application is further enhanced by the integration of the SeisComp3 ground motion parameter estimation tool with the Earthquake Loss Estimation Routine (ELER). SeisComp3 provides the online station parameters, which are then automatically incorporated into the ShakeMaps produced by ELER. The resulting ground motion distributions are used together with the building inventory to calculate the expected number of buildings in various damage states. All these analyses are conducted in an automated fashion and are communicated within a few minutes of a triggering event. In our efforts to disseminate earthquake information to the general public, we make extensive use of social networks such as Twitter and collaborate with mobile phone operators.

  5. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and simple earthquake nucleation processes such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observation. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant for describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
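
    Fault interaction in such simulators is commonly handled through the change in Coulomb failure stress, dCFF = d_tau + mu' * d_sigma_n; a minimal illustration with assumed stress changes follows.

    ```python
    # Minimal sketch: static Coulomb failure stress change on a receiver fault,
    #   dCFF = d_tau + mu_eff * d_sigma_n,
    # with the shear stress change resolved in the slip direction and the normal
    # stress change positive in extension (unclamping). Values are illustrative.
    def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=0.4):
        """Positive result brings the receiver fault closer to failure."""
        return d_tau_mpa + mu_eff * d_sigma_n_mpa

    # e.g. 0.05 MPa of shear loading plus 0.1 MPa of unclamping:
    print(f"dCFF = {coulomb_stress_change(0.05, 0.1):.3f} MPa")
    ```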

  6. Latitude and pH driven trends in the molecular composition of DOM across a north south transect along the Yenisei River

    NASA Astrophysics Data System (ADS)

    Roth, Vanessa-Nina; Dittmar, Thorsten; Gaupp, Reinhard; Gleixner, Gerd

    2013-12-01

    We used electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI-FT-ICR-MS) to identify the molecular composition of dissolved organic matter (DOM) collected from different ecosystems along a transect crossing Siberia's northern and middle Taiga. This information is urgently needed to help elucidate global carbon cycling and export through Russian rivers. In total, we analyzed DOM samples from eleven Yenisei tributaries and seven bogs. Freeze-dried and re-dissolved DOM was desalted via solid phase extraction (SPE) and eluted in methanol for ESI-FT-ICR-MS measurements. We recorded 15209 different masses and identified 7382 molecular formulae in the mass range between m/z = 150 and 800. We utilized the relative FT-ICR-MS signal intensities of 3384 molecular formulae above a conservatively set limit of detection and summarized the molecular characteristics for each measurement using ten magnitude-weighted parameters ((O/C)w, (H/C)w, (N/C)w, (DBE)w, (DBE/C)w, (DBE/O)w, (DBE-O)w, (C#)w, (MW)w and (AI)w) for redundancy analysis. Consequently, we revealed that the molecular composition of DOM depends mainly on pH and geographical latitude. After applying variation partitioning to the peak data, we isolated molecular formulae that were strongly positively or negatively correlated with latitude and pH. We used the chemical information from 13 parameters (C#, H#, N#, O#, O/C, H/C, DBE, DBE/C, DBE/O, AI, N/C, DBE-O and MW) to characterize the extracted molecular formulae. Using latitude along the gradient representing climatic variation, we found a higher abundance of smaller molecules, nitrogen-containing compounds and unsaturated C=C functionalities at higher latitudes. As a possible reason for the different molecular characteristics occurring along this gradient, we suggest that decomposition is temperature dependent, resulting in a higher abundance of non-degraded lignin-derived phenolic substances. We demonstrated that bog samples
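
    For readers unfamiliar with these molecular parameters, the double-bond equivalents of a CcHhNnOo formula are DBE = C - H/2 + N/2 + 1, and the magnitude-weighted parameters are signal-intensity-weighted means over all assigned formulae. The sketch below illustrates both with made-up formulas and intensities, not data from the study.

    ```python
    # Minimal sketch: double-bond equivalents (DBE) for a CcHhNnOo formula and
    # signal-intensity-weighted ("magnitude-weighted") averages of molecular
    # parameters. Formulas and intensities below are illustrative only.
    def dbe(c, h, n=0):
        """DBE = C - H/2 + N/2 + 1 (O and S do not contribute)."""
        return c - h / 2 + n / 2 + 1

    def weighted_mean(values, intensities):
        return sum(v * i for v, i in zip(values, intensities)) / sum(intensities)

    formulas = [  # (C, H, N, O), relative signal intensity
        ((18, 18, 0, 9), 1.0),   # e.g. a lignin-like CHO compound
        ((15, 22, 0, 7), 0.6),
        ((12, 17, 1, 6), 0.3),   # an N-containing compound
    ]
    dbes = [dbe(c, h, n) for (c, h, n, o), _ in formulas]
    o_to_c = [o / c for (c, h, n, o), _ in formulas]
    weights = [w for _, w in formulas]
    print(f"(DBE)w = {weighted_mean(dbes, weights):.2f}, "
          f"(O/C)w = {weighted_mean(o_to_c, weights):.2f}")
    ```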

  7. Study on moderate earthquake risk of the Xinfengjiang reservoir from 3D P-wave velocity structure and current seismicity parameters

    NASA Astrophysics Data System (ADS)

    Ye, Xiuwei; Wang, Xiaona; Huang, Yuanmin; Liu, Jiping; Tan, Zhengguang

    2017-06-01

    In this paper, we determined the locations of the Xinfengjiang Reservoir earthquake sequence from June 2007 to July 2014 and the 3D P-wave velocity structure by a simultaneous inversion method. On that basis, we mapped the 3D distribution of the b-value. The results show that the low b-value region coincides with the high-velocity zone (HVZ) and that most earthquakes occurred around the HVZ. Under the reservoir dam there is a strong tectonic deformation zone, centred on the Renzishi fault F2, the Nanshan-Aotou fault F4, the Heyuan fault F1 and the Shijiao-Xingang-Baitian fault F5. The M6.1 Xinfengjiang earthquake of 19 March 1962 occurred in this strong tectonic deformation zone, where the b-value is now ≥0.7, so a repeat of the original earthquake in the short term is unlikely. The b-value of the HVZ under Xichang (in the northwest corner of the Xinfengjiang Reservoir) ranges between 0.4 and 0.7, suggesting that the rate of stress accumulation has been greater than the rate of seismic energy release since 2012. We do not exclude the possibility that this HVZ becomes a seismogenic asperity and that an M≥5 earthquake will occur there.
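
    b-values such as those mapped here are usually obtained with the Aki (1965) maximum-likelihood estimator, b = log10(e) / (mean(M) - Mc), with Utsu's dM/2 correction when the catalog is binned; a minimal sketch on a synthetic catalog follows.

    ```python
    # Minimal sketch: maximum-likelihood b-value (Aki, 1965),
    #   b = log10(e) / (mean(M) - (Mc - dM/2)),
    # where dM is the magnitude binning (use dM = 0 for continuous magnitudes).
    # The catalog below is synthetic, for illustration only.
    import numpy as np

    def b_value(mags, mc, dm=0.0):
        m = np.asarray(mags, dtype=float)
        m = m[m >= mc]                              # keep events above completeness
        return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

    rng = np.random.default_rng(0)
    b_true, mc = 1.0, 1.0
    mags = mc + rng.exponential(1.0 / (b_true * np.log(10)), size=5000)
    print(f"b ≈ {b_value(mags, mc):.2f}")           # recovers b ≈ 1 for this synthetic catalog
    ```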

  8. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the last 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.

  9. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake

    USGS Publications Warehouse

    Hayes, Gavin P.; Herman, Matthew W.; Barnhart, William D.; Furlong, Kevin P.; Riquelme, Sebástian; Benz, Harley M.; Bergman, Eric; Barrientos, Sergio; Earle, Paul S.; Samsonov, Sergey

    2014-01-01

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile which had not ruptured in a megathrust earthquake since a M ~8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March–April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  10. Continuing megathrust earthquake potential in Chile after the 2014 Iquique earthquake.

    PubMed

    Hayes, Gavin P; Herman, Matthew W; Barnhart, William D; Furlong, Kevin P; Riquelme, Sebástian; Benz, Harley M; Bergman, Eric; Barrientos, Sergio; Earle, Paul S; Samsonov, Sergey

    2014-08-21

    The seismic gap theory identifies regions of elevated hazard based on a lack of recent seismicity in comparison with other portions of a fault. It has successfully explained past earthquakes (see, for example, ref. 2) and is useful for qualitatively describing where large earthquakes might occur. A large earthquake had been expected in the subduction zone adjacent to northern Chile, which had not ruptured in a megathrust earthquake since a M ∼8.8 event in 1877. On 1 April 2014 a M 8.2 earthquake occurred within this seismic gap. Here we present an assessment of the seismotectonics of the March-April 2014 Iquique sequence, including analyses of earthquake relocations, moment tensors, finite fault models, moment deficit calculations and cumulative Coulomb stress transfer. This ensemble of information allows us to place the sequence within the context of regional seismicity and to identify areas of remaining and/or elevated hazard. Our results constrain the size and spatial extent of rupture, and indicate that this was not the earthquake that had been anticipated. Significant sections of the northern Chile subduction zone have not ruptured in almost 150 years, so it is likely that future megathrust earthquakes will occur to the south and potentially to the north of the 2014 Iquique sequence.

  11. Temporal variation characteristics of shear-wave splitting for the Rushan earthquake swarm of Shandong Province

    NASA Astrophysics Data System (ADS)

    Miao, Qingjie; Liu, Xiqiang

    2017-03-01

    The seismicity in the Rushan region of Shandong Province has been characterized by small swarms since the ML3.8 Rushan earthquake on October 1, 2013, and this situation continues up to now. Four earthquakes of ML4.7, ML4.5, ML4.1 and ML5.0 that occurred from January 2014 to May 2015 caused great social impact. Based on the seismic records from the Rushan station, this paper calculates the shear-wave splitting parameters of 224 small earthquakes of the Rushan earthquake swarm. The results show that the polarization direction of the fast shear wave is consistent with the principal compressive stress direction of the Shandong peninsula; on the other hand, the time delay changes markedly before and after the four earthquakes, that is, it rose for about one month and then declined for about twelve days before each earthquake. These characteristics can be taken as precursory indicators for stress-based earthquake prediction.

  12. Earthquakes: Predicting the unpredictable?

    USGS Publications Warehouse

    Hough, Susan E.

    2005-01-01

    The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: A moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" in not only polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

  13. A study of Guptkashi, Uttarakhand earthquake of 6 February 2017 (Mw 5.3) in the Himalayan arc and implications for ground motion estimation

    NASA Astrophysics Data System (ADS)

    Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati

    2018-05-01

    The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), seismic moment (M0 = 1.12×10^17 Nm, Mw 5.3), focal mechanism (φ = 280°, δ = 14°, λ = 84°), source radius (a = 1.3 km), and static stress drop (Δσs = 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500 f^0.9, κ = 0.04 s, and fmax = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.

  14. A study of Guptkashi, Uttarakhand earthquake of 6 February 2017 (Mw 5.3) in the Himalayan arc and implications for ground motion estimation

    NASA Astrophysics Data System (ADS)

    Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati

    2018-02-01

    The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), seismic moment (M0 = 1.12×10^17 Nm, Mw 5.3), focal mechanism (φ = 280°, δ = 14°, λ = 84°), source radius (a = 1.3 km), and static stress drop (Δσs = 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500 f^0.9, κ = 0.04 s, and fmax = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
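
    The static stress drop quoted in this abstract follows from the circular-crack (Eshelby) relation, Δσ = 7 M0 / (16 a^3); the short check below reproduces the reported ~22 MPa from the quoted moment and source radius.

    ```python
    # Minimal check: static stress drop from the circular-crack (Eshelby) relation
    #   delta_sigma = 7 * M0 / (16 * a**3),
    # using the seismic moment and source radius quoted in the abstract.
    M0 = 1.12e17        # seismic moment, N m
    a = 1.3e3           # source radius, m
    delta_sigma = 7.0 * M0 / (16.0 * a**3)
    print(f"static stress drop ≈ {delta_sigma / 1e6:.0f} MPa")   # ≈ 22 MPa, as reported
    ```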

  15. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying the earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after a largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto
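
    The comparison against a Poisson model rests on the memoryless exceedance probability P(at least one event in T) = 1 - exp(-λT); a minimal sketch with an illustrative rate follows.

    ```python
    # Minimal sketch: hazard from a memoryless (Poisson) occurrence model,
    #   P(at least one event in T years) = 1 - exp(-rate * T).
    # The rate below is illustrative, not taken from the dissertation.
    import math

    def poisson_exceedance(rate_per_year, t_years):
        return 1.0 - math.exp(-rate_per_year * t_years)

    rate = 1.0 / 150.0          # e.g. one characteristic event per ~150 years
    print(f"P(>=1 event in 50 yr) = {poisson_exceedance(rate, 50):.2f}")
    ```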

  16. Observing pre-earthquake features in the Earth atmosphere-ionosphere environment associated with 2017 Tehuantepec and Puebla earthquakes in Mexico

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Guiliani, G.; Hernandez-Pajares, M.; Garcia-Rigo, A.; Petrov, L.; Taylor, P. T.; Hatzopoulos, N.; Kafatos, M.

    2017-12-01

    We present a multi-parameter study of lithosphere/atmosphere/ionosphere transient phenomena observed in advance of the M8.2 Tehuantepec and M7.1 Puebla earthquakes, the largest and most damaging earthquakes ever recorded in Mexico. We collected data from four instruments that recorded hourly and daily: (1) ground radon variations (Gamma network in Southern California); (2) outgoing long-wavelength radiation (OLR, obtained from NPOES) at the top of the atmosphere (TOA); (3) atmospheric chemical potential (ACP), obtained from NASA assimilation models; and (4) electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). The September M8.2 earthquake was situated about 3200 kilometers south of two radon-monitoring stations in Orange, Southern California. Real-time hourly data show a sharp increase on both sensors (160 kilometers apart) on Sept 2 (6 days prior to the M8.2 event of 09.08.2017), and a second anomaly appeared on Sept 11 (7 days prior to the M7.1 event of 09.19.2017). Those increases in radon coincide (with some delay) with increases in the atmospheric chemical potential (on Sept 3 and 10, respectively) measured near the epicentral area from satellite data. Subsequently, at the end of August, an increase of infrared radiation was observed, associated with an acceleration of OLR at the TOA seen from NOAA polar-orbiting satellites and reaching a maximum near the epicenter on Sept 5 and Sept 17. The GPS/TEC data indicated an increase of electron concentration in the ionosphere on Sept 7 and Sept 18, 1-2 days before both earthquakes. Before the earthquakes, ground and satellite data both show a synergetic anomalous trend, a week before the M8.2 Tehuantepec earthquake of 09.08.2017 and continuing up to the Puebla earthquake (M7.1 of 09.19.2017), although the radon variations were observed far from both epicentral areas. We examined the possible correlation between different pre-earthquake signals in the frame of a

  17. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  18. VS30 – A site-characterization parameter for use in building Codes, simplified earthquake resistant design, GMPEs, and ShakeMaps

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2012-01-01

    VS30, defined as the average seismic shear-wave velocity from the surface to a depth of 30 meters, has found widespread use as a parameter to characterize site response for simplified earthquake-resistant design as implemented in building codes worldwide. VS30, as initially introduced by the author for the US 1994 NEHRP Building Code, provides unambiguous definitions of site classes and site coefficients for site-dependent response spectra based on correlations derived from extensive borehole logging and comparative ground-motion measurement programs in California. Subsequent use of VS30 for the development of strong ground motion prediction equations (GMPEs) and measurement of extensive sets of VS borehole data have confirmed the previous empirical correlations and established correlations of VS30 with VSZ at other depths. These correlations provide closed-form expressions to predict VS30 at a large number of additional sites and further justify VS30 as a parameter to characterize site response for simplified building codes, GMPEs, ShakeMap, and seismic hazard mapping.
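
    By definition, VS30 is the time-averaged velocity of the upper 30 m, VS30 = 30 / Σ(h_i / v_i) over the layers above 30 m depth; a minimal sketch for a hypothetical velocity profile follows.

    ```python
    # Minimal sketch: time-averaged shear-wave velocity of the upper 30 m,
    #   VS30 = 30 / sum(h_i / v_i),
    # computed for a hypothetical layered profile (thickness in m, Vs in m/s).
    def vs30(layers):
        depth, travel_time = 0.0, 0.0
        for thickness, vs in layers:
            h = min(thickness, 30.0 - depth)        # only count material above 30 m
            travel_time += h / vs
            depth += h
            if depth >= 30.0:
                break
        return 30.0 / travel_time

    profile = [(5.0, 180.0), (10.0, 300.0), (25.0, 600.0)]   # hypothetical site
    print(f"VS30 ≈ {vs30(profile):.0f} m/s")
    ```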

  19. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
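
    As a generic illustration of the first of these techniques (not the authors' implementation), the sketch below applies simulated annealing with a Metropolis acceptance rule and a simple cooling schedule to a toy two-parameter misfit surface with several local minima.

    ```python
    # Minimal sketch (assumed, not the authors' code): simulated annealing on a toy
    # two-parameter misfit function, showing how a temperature-controlled random
    # search can escape local minima of a non-linear inverse problem.
    import math
    import random

    def misfit(x, y):
        # toy surface with several local minima; global minimum near x ≈ 3.1, y = -2
        return (x - 3) ** 2 + (y + 2) ** 2 + 5 * math.sin(3 * x) ** 2

    def anneal(n_iter=20000, t0=10.0, step=0.5, seed=0):
        rng = random.Random(seed)
        x, y = 0.0, 0.0
        best = (misfit(x, y), x, y)
        for i in range(n_iter):
            temp = t0 * (1.0 - i / n_iter) + 1e-6                  # linear cooling schedule
            xn, yn = x + rng.gauss(0, step), y + rng.gauss(0, step)
            d_e = misfit(xn, yn) - misfit(x, y)
            if d_e < 0 or rng.random() < math.exp(-d_e / temp):    # Metropolis acceptance
                x, y = xn, yn
                if misfit(x, y) < best[0]:
                    best = (misfit(x, y), x, y)
        return best

    f_best, x_best, y_best = anneal()
    print(f"best misfit {f_best:.3f} at x={x_best:.2f}, y={y_best:.2f}")
    ```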

  20. Composite Earthquake Catalog of the Yellow Sea for Seismic Hazard Studies

    NASA Astrophysics Data System (ADS)

    Kang, S. Y.; Kim, K. H.; LI, Z.; Hao, T.

    2017-12-01

    The Yellow Sea (a.k.a. the West Sea in Korea) is an epicontinental, semi-closed sea located between Korea and China. Recent earthquakes in the Yellow Sea, including, but not limited to, the Seogyuckryulbi-do (1 April 2014, magnitude 5.1), Heuksan-do (21 April 2013, magnitude 4.9) and Baekryung-do (18 May 2013, magnitude 4.9) earthquakes, and the earthquake swarm in the Boryung offshore region in 2013, remind us of the seismic hazards affecting east Asia. This series of earthquakes in the Yellow Sea raised numerous questions. Unfortunately, both governments have trouble monitoring seismicity in the Yellow Sea because earthquakes occur beyond their seismic networks. For example, the epicenters of the magnitude 5.1 earthquake in the Seogyuckryulbi-do region in 2014 reported by the Korea Meteorological Administration and the China Earthquake Administration differed by approximately 20 km. This illustrates the difficulty of monitoring and locating earthquakes in the region, despite the huge effort made by both governments. A joint effort is required not only to overcome the limits posed by political boundaries and geographical location but also to study the seismicity and the underground structures responsible for it. Although the well-established and developing seismic networks in Korea and China have provided an unprecedented amount and quality of seismic data, the high-quality catalog is limited to the most recent few decades, which is far shorter than a major earthquake cycle. It is also noted that the earthquake catalog from either country is biased toward its own territory and cannot provide a complete picture of seismicity in the Yellow Sea. In order to understand seismic hazard and tectonics in the Yellow Sea, a composite earthquake catalog has been developed. We gathered earthquake information covering the last 5,000 years from various sources. There are good reasons to believe that some listings refer to the same earthquake but with different source parameters. We established criteria in order to provide consistent

  1. Stochastic ground-motion simulation of two Himalayan earthquakes: seismic hazard assessment perspective

    NASA Astrophysics Data System (ADS)

    Harbindu, Ashish; Sharma, Mukat Lal; Kamal

    2012-04-01

    The earthquakes in Uttarkashi (October 20, 1991, Mw 6.8) and Chamoli (March 8, 1999, Mw 6.4) are among the recent well-documented earthquakes that occurred in the Garhwal region of India and that caused extensive damage as well as loss of life. Using strong-motion data of these two earthquakes, we estimate their source, path, and site parameters. The quality factor (Qβ) as a function of frequency is derived as Qβ(f) = 140 f^1.018. The site amplification functions are evaluated using the horizontal-to-vertical spectral ratio technique. The ground motions of the Uttarkashi and Chamoli earthquakes are simulated using the stochastic method of Boore (Bull Seismol Soc Am 73:1865-1894, 1983). The estimated source, path, and site parameters are used as input for the simulation. The simulated time histories are generated for a few stations and compared with the observed data. The simulated response spectra at 5% damping are in fair agreement with the observed response spectra for most of the stations over a wide range of frequencies. Residual trends closely match the observed and simulated response spectra. The synthetic data are in rough agreement with the ground-motion attenuation equation available for the Himalayas (Sharma, Bull Seismol Soc Am 98:1063-1069, 1998).
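
    The stochastic (Boore-type) point-source method shapes windowed Gaussian noise with a Brune omega-squared source spectrum, whole-path attenuation Q(f) and a near-site kappa filter. The sketch below is a compact, assumed illustration of that workflow; the window, constants and parameter values are simplifications, not the authors' implementation.

    ```python
    # Minimal sketch of a stochastic point-source simulation (Boore-style):
    # windowed Gaussian noise is given the amplitude spectrum of a Brune
    # omega-squared source filtered by path attenuation Q(f) and a kappa operator.
    # All parameter values are illustrative.
    import numpy as np

    def stochastic_acceleration(m0=1e18, stress_mpa=10.0, r_km=100.0,
                                beta_kms=3.5, rho=2800.0, kappa=0.04,
                                q0=140.0, q_exp=1.018, dur_s=20.0, dt=0.005, seed=0):
        rng = np.random.default_rng(seed)
        n = int(dur_s / dt)
        t = np.arange(n) * dt
        noise = rng.standard_normal(n) * np.exp(-t / (0.3 * dur_s))   # simple decaying window
        spec = np.fft.rfft(noise)
        f = np.fft.rfftfreq(n, d=dt)
        f[0] = 1e-3                                                   # avoid division by zero

        beta = beta_kms * 1e3
        # Brune corner frequency (cgs form: stress in bar, moment in dyn cm)
        fc = 4.9e6 * beta_kms * (stress_mpa * 10.0 / (m0 * 1e7)) ** (1.0 / 3.0)
        source = (2 * np.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)    # acceleration source spectrum
        geom = 1.0 / (r_km * 1e3)                                     # 1/R geometric spreading
        path = np.exp(-np.pi * f * r_km / (q0 * f ** q_exp * beta_kms))
        site = np.exp(-np.pi * kappa * f)
        const = 0.55 * 2.0 / (4 * np.pi * rho * beta ** 3)            # radiation pattern, free surface

        target = const * source * geom * path * site
        shaped = np.fft.irfft(spec / np.abs(spec) * target, n=n) / dt # impose target amplitude spectrum
        return t, shaped

    t, acc = stochastic_acceleration()
    print(f"PGA ≈ {np.abs(acc).max():.3f} m/s^2")
    ```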

  2. Modeling of the strong ground motion of 25th April 2015 Nepal earthquake using modified semi-empirical technique

    NASA Astrophysics Data System (ADS)

    Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.

    2018-05-01

    On 25 April 2015, a hazardous earthquake of moment magnitude 7.9 occurred in Nepal. The Nepal earthquake was recorded by accelerographs installed in the Kumaon region in the Himalayan state of Uttarakhand. The distance of the recording stations in the Kumaon region from the epicenter of the earthquake is about 420-515 km. A modified semi-empirical technique for modeling finite faults has been used in this paper to simulate strong ground motion at these stations. Source parameters of a Nepal aftershock have also been calculated using the Brune model in the present study and are used in the modeling of the Nepal main shock. The values of seismic moment and stress drop obtained for the aftershock from the Brune model are 8.26 × 10^25 dyn cm and 10.48 bar, respectively. The simulated earthquake time series were compared with the observed records of the earthquake. A comparison of the full waveforms and their response spectra has been made to finalize the rupture parameters and rupture location. Based on the compared results, the rupture propagated in the NE-SW direction from the hypocenter with a rupture velocity of 3.0 km/s, starting at a distance of 80 km from Kathmandu in the NW direction at a depth of 12 km.

  3. Earthquake-induced ground failures in Italy from a reviewed database

    NASA Astrophysics Data System (ADS)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2013-05-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground-level changes triggered by earthquakes of Mercalli intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (URL: http://www.ceri.uniroma1.it/cn/index.do?id=230&page=55) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the "Sapienza" University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground-level changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  4. Earthquake-induced ground failures in Italy from a reviewed database

    NASA Astrophysics Data System (ADS)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2014-04-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground changes triggered by earthquakes of Mercalli epicentral intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (http://www.ceri.uniroma1.it/cn/gis.jsp ) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the Sapienza University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  5. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating the matrix based on the residual errors of consecutive inversions. Then, a covariance matrix for the parameters is computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
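
    The regularization step can be illustrated compactly: solve the Tikhonov-damped normal equations for a grid of regularization parameters and keep the largest one whose residual norm does not exceed the estimated noise level (the discrepancy principle). The sketch below uses a synthetic over-parametrized problem, not W-phase data.

    ```python
    # Minimal sketch (not the authors' code): Tikhonov-regularized least squares
    #   m_lambda = argmin ||G m - d||^2 + lambda^2 ||m||^2,
    # with lambda chosen by the discrepancy principle on a grid.
    import numpy as np

    def tikhonov(G, d, lam):
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)

    def discrepancy_lambda(G, d, noise_norm, lambdas):
        """Largest lambda whose residual norm stays at or below the noise level."""
        best = None
        for lam in sorted(lambdas):
            m = tikhonov(G, d, lam)
            if np.linalg.norm(G @ m - d) <= noise_norm:
                best = lam
            else:
                break
        return best

    # synthetic over-parametrized problem with noisy data (illustrative only)
    rng = np.random.default_rng(0)
    G = rng.standard_normal((50, 80))
    m_true = np.zeros(80); m_true[:5] = 1.0
    noise = 0.05 * rng.standard_normal(50)
    d = G @ m_true + noise

    lam = discrepancy_lambda(G, d, np.linalg.norm(noise), np.logspace(-3, 1, 30))
    print("chosen lambda:", lam)
    ```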

  6. Landslides triggered by the January 12, 2010 Port-au-Prince, Haiti Mw 7.0 earthquake

    NASA Astrophysics Data System (ADS)

    Xu, Chong

    2014-05-01

    The January 12, 2010 Port-au-Prince, Haiti earthquake (Mw 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate correlations of the occurrence of landslides and their erosion thickness with topographic factors, seismic parameters, and distance from roads. A total of 30,828 landslides triggered by the earthquake cover a total area of 15.736 km2; the volume of landslide accumulation material is estimated to be about 30,000,000 m3, distributed over an area of more than 3,000 km2. These landslides are of various types, mainly shallow disrupted landslides and rock falls, but also include coherent deep-seated landslides and rock slides. The landslides were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed to reveal the spatial distribution patterns of the co-seismic landslides. Statistics of the size distribution and morphometric parameters of the co-seismic landslides were compiled and compared with those of other earthquake events. Four proxies of co-seismic landslide abundance, namely landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate the co-seismic landslides with various landslide controlling parameters. These controlling parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, stratum/lithology, distance from the epicenter, distance from the Enriquillo-Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). Comparison of the controlling parameters shows that slope angle has the strongest influence on co-seismic landslides

  7. Injection-induced earthquakes

    USGS Publications Warehouse

    Ellsworth, William L.

    2013-01-01

    Earthquakes in unusual locations have become an important topic of discussion in both North America and Europe, owing to the concern that industrial activity could cause damaging earthquakes. It has long been understood that earthquakes can be induced by impoundment of reservoirs, surface and underground mining, withdrawal of fluids and gas from the subsurface, and injection of fluids into underground formations. Injection-induced earthquakes have, in particular, become a focus of discussion as the application of hydraulic fracturing to tight shale formations is enabling the production of oil and gas from previously unproductive formations. Earthquakes can be induced as part of the process to stimulate the production from tight shale formations, or by disposal of wastewater associated with stimulation and production. Here, I review recent seismic activity that may be associated with industrial activity, with a focus on the disposal of wastewater by injection in deep wells; assess the scientific understanding of induced earthquakes; and discuss the key scientific challenges to be met for assessing this hazard.

  8. What is the earthquake fracture energy?

    NASA Astrophysics Data System (ADS)

    Di Toro, G.; Nielsen, S. B.; Passelegue, F. X.; Spagnuolo, E.; Bistacchi, A.; Fondriest, M.; Murphy, S.; Aretusini, S.; Demurtas, M.

    2016-12-01

    The energy budget of an earthquake is one of the main open questions in earthquake physics. During seismic rupture propagation, the elastic strain energy stored in the rock volume that bounds the fault is converted into (1) gravitational work (relative movement of the wall rocks bounding the fault), (2) in- and off-fault damage of the fault zone rocks (due to rupture propagation and frictional sliding), (3) frictional heating and, of course, (4) seismic radiated energy. The difficulty in the budget determination arises from the measurement of some parameters (e.g., the temperature increase in the slipping zone, which constrains the frictional heat), from the poorly constrained size of the energy sinks (e.g., how large is the rock volume involved in off-fault damage?) and from the continuous exchange of energy between different sinks (for instance, fragmentation and grain size reduction may result from both the passage of the rupture front and frictional heating). Field geology studies, microstructural investigations, experiments and modelling may yield some hints. Here we discuss (1) the discrepancies arising from the comparison of the fracture energy measured in experiments reproducing seismic slip with the one estimated from seismic inversion for natural earthquakes and (2) the off-fault damage induced by the diffusion of frictional heat during simulated seismic slip in the laboratory. Our analysis suggests, for instance, that the so-called earthquake fracture energy (1) is mainly frictional heat for small slips and (2), with increasing slip, is controlled by the geometrical complexity and other plastic processes occurring in the damage zone. As a consequence, because faults are rapidly and efficiently lubricated upon fast slip initiation, the dominant dissipation mechanism in large earthquakes may not be friction but rather the off-fault damage due to fault segmentation and stress concentrations in a growing region around the fracture tip.

  9. Earthquakes; January-February 1982

    USGS Publications Warehouse

    Person, W.J.

    1982-01-01

    In the United States, a number of earthquakes occurred, but only minor damage was reported. Arkansas experienced a swarm of earthquakes beginning on January 12. Canada experienced one of its strongest earthquakes in a number of years on January 9; this earthquake caused slight damage in Maine. 

  10. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  11. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355

  12. Induced earthquake during the 2016 Kumamoto earthquake (Mw7.0): Importance of real-time shake monitoring for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Hoshiba, M.; Ogiso, M.

    2016-12-01

    The sequence of the 2016 Kumamoto earthquakes (Mw6.2 on April 14, Mw7.0 on April 16, and many aftershocks) caused devastating damage in Kumamoto and Oita prefectures, Japan. During the Mw7.0 event, just after the direct S waves passed central Oita, another M6-class event occurred there, more than 80 km away from the Mw7.0 rupture. The M6 event is interpreted as an induced earthquake, but it brought stronger shaking to central Oita than the Mw7.0 event itself. We discuss this induced earthquake from the viewpoint of Earthquake Early Warning (EEW). In terms of ground shaking such as PGA and PGV, the Mw7.0 event produced much weaker motions at central Oita than the M6 induced earthquake (for example, about 1/8 of the PGA at station OIT009), so the two events are easy to discriminate. However, the PGD of the Mw7.0 event is larger than that of the induced earthquake, and it appears just before the occurrence of the induced earthquake. It is therefore quite difficult to recognize the induced earthquake from displacement waveforms alone, because the displacement is strongly contaminated by that of the preceding Mw7.0 event. In many EEW methods (including the current JMA EEW system), magnitude is used to predict ground shaking through a Ground Motion Prediction Equation (GMPE), and the magnitude is often estimated from displacement. However, displacement magnitude is not necessarily the best quantity for predicting ground shaking such as PGA and PGV. In the case of the induced earthquake during the Kumamoto sequence, a displacement magnitude could not be estimated because of the strong contamination, and the JMA EEW system did not recognize the induced earthquake. One of the important lessons learned from eight years of EEW operation is the issue of multiple simultaneous earthquakes, such as the aftershocks of the 2011 Mw9.0 Tohoku earthquake. Based on this lesson, we have proposed enhancing real-time monitoring of ground shaking itself instead of rapid estimation of

  13. Wavelet maxima curves of surface latent heat flux associated with two recent Greek earthquakes

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Kafatos, M.; Napoletani, D.; Singh, R. P.

    2004-05-01

    Multi-sensor data available through remote sensing satellites provide information about changes in the state of the oceans, land and atmosphere. Recent studies have shown anomalous changes in ocean, land, atmospheric and ionospheric parameters prior to earthquake events. This paper introduces an innovative data mining technique to identify precursory signals associated with earthquakes. The proposed methodology is a multi-strategy approach which employs one-dimensional wavelet transformations to identify singularities in the data, and an analysis of the continuity of the wavelet maxima in time and space to identify the singularities associated with earthquakes. The methodology has been applied to Surface Latent Heat Flux (SLHF) data to study the earthquakes which occurred on 14 August 2003 and on 1 March 2004 in Greece. A single prominent SLHF anomaly was found about two weeks prior to each of the earthquakes.
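
    For readers unfamiliar with wavelet maxima, the following Python sketch shows one way to flag candidate singularities in a daily SLHF time series as local maxima of the continuous wavelet transform across several scales. It assumes the PyWavelets package and is only illustrative of the general idea, not the authors' implementation.

      # Illustrative sketch only: wavelet-maxima detection in a 1-D series.
      import numpy as np
      import pywt

      def wavelet_maxima(series, scales=(2, 4, 8, 16), wavelet="gaus1"):
          scales = np.asarray(scales, dtype=float)
          coefs, _ = pywt.cwt(np.asarray(series, dtype=float), scales, wavelet)
          mag = np.abs(coefs)
          maxima = []
          for s, row in zip(scales, mag):
              # interior points larger than both neighbours
              idx = np.where((row[1:-1] > row[:-2]) & (row[1:-1] > row[2:]))[0] + 1
              maxima.append((s, idx))
          return maxima  # list of (scale, indices of local maxima)

      # A maxima line that persists across scales at nearly the same index is the
      # kind of singularity that is then tracked in time and space.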

  14. Earthquake hazard analysis for the different regions in and around Ağrı

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    We investigated earthquake hazard parameters for the eastern part of Turkey by determining the a and b parameters in the Gutenberg–Richter magnitude–frequency relationship. For this purpose, the study area is divided into seven different source zones based on their tectonic and seismotectonic regimes. The database used in this work was taken from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK), for the instrumental period. We calculated the a value and the b value, which is the slope of the Gutenberg–Richter frequency–magnitude relationship, using the maximum likelihood method (ML). We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of an earthquake of magnitude ≥ M occurring during a time span of t years. We used the Zmap software to calculate these parameters. The lowest b value was calculated in Region 1, covering the Cobandede Fault Zone. We obtained the highest a value in Region 2, covering the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which shows the largest value (87%) for an earthquake with magnitude greater than or equal to 6.0. The mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and we determined the highest value around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most hazardous area in the eastern part of Turkey.

  15. Earthquake hazard analysis for the different regions in and around Aǧrı

    NASA Astrophysics Data System (ADS)

    Bayrak, Erdem; Yilmaz, Şeyda; Bayrak, Yusuf

    2016-04-01

    We investigated earthquake hazard parameters for the eastern part of Turkey by determining the a and b parameters in the Gutenberg-Richter magnitude-frequency relationship. For this purpose, the study area is divided into seven different source zones based on their tectonic and seismotectonic regimes. The database used in this work was taken from different sources and catalogues, such as TURKNET, the International Seismological Centre (ISC), the Incorporated Research Institutions for Seismology (IRIS) and The Scientific and Technological Research Council of Turkey (TUBITAK), for the instrumental period. We calculated the a value and the b value, which is the slope of the Gutenberg-Richter frequency-magnitude relationship, using the maximum likelihood method (ML). We also estimated the mean return periods, the most probable maximum magnitude in a time period of t years, and the probability of an earthquake of magnitude ≥ M occurring during a time span of t years. We used the Zmap software to calculate these parameters. The lowest b value was calculated in Region 1, covering the Cobandede Fault Zone. We obtained the highest a value in Region 2, covering the Kagizman Fault Zone. This conclusion is strongly supported by the probability value, which shows the largest value (87%) for an earthquake with magnitude greater than or equal to 6.0. The mean return period for such a magnitude is the lowest in this region (49 years). The most probable magnitude in the next 100 years was calculated, and we determined the highest value around the Cobandede Fault Zone. According to these parameters, Region 1, covering the Cobandede Fault Zone, is the most hazardous area in the eastern part of Turkey.
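
    The maximum-likelihood b value mentioned in these two records is commonly computed with the Aki/Utsu estimator. The Python sketch below shows that standard formula; the catalogue magnitudes, completeness magnitude Mc, and binning width are hypothetical inputs, and the sketch is not the Zmap implementation.

      # Sketch of the Aki/Utsu maximum-likelihood b value and a companion a value.
      import numpy as np

      def gutenberg_richter_ab(mags, mc, dm=0.1):
          m = np.asarray(mags, dtype=float)
          m = m[m >= mc]                                     # events above completeness
          b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))  # Aki/Utsu estimator
          a = np.log10(m.size) + b * mc                      # so log10 N(>=Mc) = a - b*Mc
          return a, b

      # Example: a, b = gutenberg_richter_ab(catalogue_magnitudes, mc=2.9)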

  16. Earthquakes, September-October 1986

    USGS Publications Warehouse

    Person, W.J.

    1987-01-01

    There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.

  17. Performance of Real-time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, H.; Horiuchi, S.; Wu, C.; Yamamoto, S.

    2008-12-01

    Horiuchi et al. (2005) developed a real-time earthquake information system (REIS) using Hi-net, a densely deployed nationwide seismic network consisting of about 800 stations operated by NIED, Japan. REIS determines hypocenter locations and earthquake magnitudes automatically within a few seconds after P waves arrive at the closest station and calculates focal mechanisms within about 15 seconds. The hypocenter parameters are transferred immediately in XML format to a computer at the Japan Meteorological Agency (JMA), which started an EEW service for special users in June 2005. JMA also developed an EEW system using 200 stations, and the results of the two systems are merged. Among all first-issued EEW reports by the two systems, REIS information accounts for about 80 percent. This study examines the rapidity and reliability of REIS by analyzing the 4,050 earthquakes with magnitude larger than 3.0 that have occurred around the Japanese Islands since 2005. REIS re-determines hypocenter parameters every second as waveform data are revised; here we discuss only the first reports. Regarding rapidity, our results show that about 44 percent of the first reports are issued within 5 seconds after the P waves arrive at the closest station. Note that this 5-second window includes a delay of about 2 seconds due to data packaging and transmission. REIS waits until two stations detect P waves for events inside the network, but four stations for events outside the network, so as to obtain reliable solutions. For earthquakes with hypocentral distances less than 100 km, 55 percent of earthquakes are warned within 5 seconds and 87 percent within 10 seconds. Most events with long delays are small and were triggered by S-wave arrivals. About 80 percent of events have epicenter differences of less than 20 km relative to JMA's manually determined locations. Because of the existence of large lateral heterogeneity in seismic velocity, the difference depends

  18. Post earthquake recovery in natural gas systems--1971 San Fernando Earthquake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, W.T. Jr.

    1983-01-01

    In this paper a concise summary of the post-earthquake investigations for the 1971 San Fernando Earthquake is presented. The effects of the earthquake upon buildings and other above-ground structures are briefly discussed. Then the damage and subsequent repairs in the natural gas systems are reported.

  19. Seismic hazard along a crude oil pipeline in the event of an 1811-1812 type New Madrid earthquake. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H.H.M.; Chen, C.H.S.

    1990-04-16

    An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is presented in this report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration, are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
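
    The 27 time histories simply come from the 3 x 3 x 3 combinations of the three typical values per parameter. The short Python sketch below shows that enumeration; the numerical values are placeholders, not the report's values.

      # Minimal sketch of the parameter combinations (placeholder values).
      from itertools import product

      stress_parameter = [50.0, 100.0, 200.0]   # bars (hypothetical)
      cutoff_frequency = [20.0, 35.0, 50.0]     # Hz (hypothetical)
      duration = [10.0, 20.0, 40.0]             # s (hypothetical)

      combinations = list(product(stress_parameter, cutoff_frequency, duration))
      print(len(combinations))  # 27 -> one synthetic time history per combination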

  20. Earthquakes; July-August, 1978

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    Earthquake activity during this period was about normal. Deaths from earthquakes were reported from Greece and Guatemala. Three major earthquakes (magnitude 7.0-7.9) occurred in Taiwan, Chile, and Costa Rica. In the United States, the most significant earthquake was a magnitude 5.6 on August 13 in southern California. 

  1. Analysis of the enhanced negative correlation between electron density and electron temperature related to earthquakes

    NASA Astrophysics Data System (ADS)

    Shen, X. H.; Zhang, X.; Liu, J.; Zhao, S. F.; Yuan, G. P.

    2015-04-01

    Ionospheric perturbations in plasma parameters have been observed before large earthquakes, but the correlation between different parameters has received little attention in previous research. The present study focuses on the relationship between electron density (Ne) and electron temperature (Te) observed by the DEMETER (Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions) satellite during local nighttime, for which a positive correlation is revealed near the equator and a weak correlation at mid- and low latitudes over both hemispheres. Against this normal background, the negative correlation, the least frequent relation among all Ne and Te points, is studied before and after large earthquakes at mid- and low latitudes. The multiparameter observations exhibited typical synchronous disturbances before the 2010 Chile M8.8 earthquake and the 2007 Pu'er M6.4 earthquake, with Te varying inversely with Ne over the epicentral areas. Moreover, a statistical analysis was carried out by selecting orbits within 1000 km and ±7 days before and after global earthquakes. Enhanced negative correlation coefficients (lower than -0.5) between Ne and Te are found to be connected with earthquakes in 42% of the points. The correlation median values at different seismic levels show a clear decrease for earthquakes larger than magnitude 7. Finally, the electric-field-coupling model is discussed, and a numerical simulation has been carried out with SAMI2 (Sami2 is Another Model of the Ionosphere), which illustrates that an external electric field in the ionosphere can strengthen the negative correlation between Ne and Te at lower latitude relative to the disturbed source, due to the effects of the geomagnetic field. Although seismic activity is not the only source of inverse Ne-Te variations, the present results demonstrate one possibly useful tool for seismo-electromagnetic anomaly differentiation, and a comprehensive analysis with multiple parameters helps to
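
    The core quantity in this record is a correlation coefficient between Ne and Te samples along an orbit segment. The sketch below shows the standard Pearson computation and the -0.5 flagging threshold mentioned above; the input arrays are hypothetical, and the sketch is not the authors' processing chain.

      # Sketch: Pearson correlation between Ne and Te along one orbit segment.
      import numpy as np

      def ne_te_correlation(ne, te):
          ne = np.asarray(ne, dtype=float)
          te = np.asarray(te, dtype=float)
          return np.corrcoef(ne, te)[0, 1]

      # flagged = ne_te_correlation(ne_segment, te_segment) < -0.5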

  2. Biological Indicators in Studies of Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Sidorin, A. Ya.; Deshcherevskii, A. V.

    2012-04-01

    Time series of data on variations in the electric activity (EA) of four weakly electric fish (Gnathonemus leopoldianus) and the moving activity (MA) of two catfish (Hoplosternum thoracatum) and two groups of Columbian cockroaches (Blaberus craniifer) were analyzed. The observations were carried out in the Garm region of Tajikistan within the framework of experiments aimed at searching for earthquake precursors. An automatic recording system continuously recorded EA and MA over a period of several years. Hourly mean EA and MA values were processed. Approximately 100 different parameters were calculated on the basis of the six initial EA and MA time series, characterizing different variations in the EA and MA structure: amplitude of the signal and fluctuations of activity, parameters of diurnal rhythms, correlated changes in the activity of various biological indicators, and others. A detailed analysis of the statistical structure of the total array of parametric time series obtained in the experiment showed that the behavior of all the animals exhibits strong temporal variability. All calculated parameters are unstable and subject to frequent changes. A comparison of the data with seismicity allows us to draw the following conclusions: (1) The structure of variations in the studied parameters is represented by flicker noise or an even more complex process with permanently changing characteristics. Significant statistics are required to prove a cause-and-effect relationship between specific features of such time series and seismicity. (2) The calculation of the reconstruction statistics in the EA and MA series structure demonstrated an increase in their frequency in the last hours or few days before an earthquake if the hypocentral distance is comparable to the source size. Sufficiently dramatic anomalies in the behavior of catfishes and cockroaches (changes in the amplitude of activity variation, distortions of diurnal rhythms, increase in the

  3. Improvements of the offshore earthquake locations in the Earthquake Early Warning System

    NASA Astrophysics Data System (ADS)

    Chen, Ta-Yi; Hsu, Hsin-Chih

    2017-04-01

    Since 2014 the Earthworm Based Earthquake Alarm Reporting (eBEAR) system has been in operation and has been used to issue warnings to schools. In 2015 the system started to provide warnings to the public in Taiwan via television and cell phones. Online performance of the eBEAR system indicates that the average reporting times afforded by the system are approximately 15 and 28 s for inland and offshore earthquakes, respectively. The eBEAR system can on average provide more warning time than the current EEW system (3.2 s and 5.5 s for inland and offshore earthquakes, respectively). However, offshore earthquakes are usually located poorly because only P-wave arrivals are used in the eBEAR system. Additionally, in the early stage of an earthquake only a few stations are available to the warning system; the poor station coverage may explain why offshore earthquakes are difficult to locate accurately. In Geiger's inversion procedure for earthquake location, an initial hypocenter and origin time must be supplied to the location program. For the initial hypocenter, we defined a set of trial locations in the offshore area instead of using the average of the locations of the triggered stations. We ran 20 instances of Geiger's method concurrently, each with a different pre-defined initial position, to locate earthquakes. We assume that if a pre-defined initial position is close to the true earthquake location, that instance converges in less processing time than the others during the iteration procedure. The results show that using pre-defined trial hypocenters in the inversion procedure improves the location accuracy for offshore earthquakes. This is especially relevant for an EEW system: in its initial stage, using only three or five stations to locate earthquakes may lead to poor results because of limited station coverage. In this study, the pre-defined trial locations provide a feasible way to improve the estimations of
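
    Geiger's method is a linearized least-squares iteration on the hypocenter and origin time. The Python sketch below shows the basic iteration under a uniform-velocity assumption; the velocity, station geometry, and convergence settings are illustrative assumptions, not the eBEAR configuration.

      # Illustrative Geiger-style location sketch (uniform velocity v in km/s).
      # stations: (N, 3) array of x, y, z in km; t_obs: P arrival times in s;
      # x0: one pre-defined trial hypocenter and origin time (x, y, z, t0).
      import numpy as np

      def geiger_locate(stations, t_obs, x0, v=6.0, n_iter=20, tol=1e-4):
          x = np.asarray(x0, dtype=float)
          for _ in range(n_iter):
              d = np.linalg.norm(stations - x[:3], axis=1)     # source-station distances
              t_pred = x[3] + d / v
              r = t_obs - t_pred                               # travel-time residuals
              G = np.empty((len(t_obs), 4))
              G[:, :3] = (x[:3] - stations) / (v * d[:, None]) # d t / d(x, y, z)
              G[:, 3] = 1.0                                    # d t / d t0
              dx, *_ = np.linalg.lstsq(G, r, rcond=None)       # least-squares update
              x += dx
              if np.linalg.norm(dx[:3]) < tol:
                  break
          return x  # (x, y, z, origin_time)

      # Running several instances with different x0 and keeping the one that
      # converges fastest mirrors the trial-hypocenter idea described above.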

  4. Megathrust earthquakes in Central Chile: What is next after the Maule 2010 earthquake?

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2013-05-01

    The 27 February 2010 Maule earthquake occurred in a well-identified gap in the Chilean subduction zone. The event has now been studied in detail using far-field and near-field seismic and geodetic data; we review the information gathered so far. The event broke a region that was much longer along strike than the gap left over from the 1835 Concepcion earthquake, sometimes called the Darwin earthquake because Darwin was in the area when it occurred and made many observations. Recent studies of contemporary documents by Udias et al. indicate that the area broken by the Maule earthquake in 2010 had previously broken in a similar earthquake in 1751, although several events in the magnitude 8 range also occurred in the area: principally the 1835 event already mentioned and, more recently, events on 1 December 1928 to the north and on 21 May 1960 (a day and a half before the great Chilean earthquake of 1960). Currently the area of the 2010 earthquake and the region immediately to the north are undergoing a very large increase in seismicity, with numerous clusters of seismicity that move along the plate interface. Examination of the seismicity of Chile in the 18th and 19th centuries shows that the region immediately to the north of the 2010 earthquake broke in a very large megathrust event in July 1730; this is the largest known earthquake in central Chile. The region where this event occurred has broken on many occasions with M 8 range earthquakes, in 1822, 1880, 1906, 1971 and 1985. Is it preparing for a new very large megathrust event? The 1906 earthquake of Mw 8.3 filled the central part of the gap, but that part has broken again on several occasions, in 1971, 1973 and 1985. The main question is whether the 1906 earthquake relieved enough of the stress accumulated since the 1730 rupture. Geodetic data show that most of the region that broke in 1730 is currently almost fully locked, from the northern end of the Maule earthquake at 34.5°S to 30°S, near the southern end of the Mw 8.5 Atacama earthquake of 11

  5. Comparision of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue covering the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault Zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using these three distributions. We used the Easyfit and Matlab software to calculate the distribution parameters and to plot the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the three for this region.
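
    As a rough illustration of this workflow, the Python sketch below fits a two-parameter Weibull distribution to hypothetical interevent times, checks the fit with a K-S test, and evaluates a conditional probability of occurrence for a given elapsed time. The data and parameter choices are placeholders; the study used Easyfit and Matlab rather than this code.

      # Sketch (assumed workflow): Weibull fit, K-S test, conditional probability.
      import numpy as np
      from scipy import stats

      interevent_times = np.array([2.1, 3.4, 1.2, 5.6, 4.3, 2.8, 6.1, 3.0])  # placeholder data (years)

      shape, loc, scale = stats.weibull_min.fit(interevent_times, floc=0.0)
      ks_stat, p_value = stats.kstest(interevent_times, "weibull_min", args=(shape, loc, scale))

      def conditional_prob(t0, dt, c=shape, s=scale):
          # Probability of an event within dt years given t0 years elapsed since the last one.
          surv = lambda t: np.exp(-(t / s) ** c)
          return 1.0 - surv(t0 + dt) / surv(t0)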

  6. Multi-instrument observations of pre-earthquake transient signatures associated with 2015 M8.3 Chile earthquake

    NASA Astrophysics Data System (ADS)

    Ouzounov, D.; Pulinets, S. A.; Hernandez-Pajares, M.; Garcia-Rigo, A.; De Santis, A.; Pavón, J.; Liu, J. Y. G.; Chen, C. H.; Cheng, K. C.; Hattori, K.; Stepanova, M. V.; Romanova, N.; Hatzopoulos, N.; Kafatos, M.

    2016-12-01

    We are conducting a multi-parameter validation study of lithosphere/atmosphere/ionosphere transient phenomena preceding major earthquakes, particularly for the case of the M8.3 earthquake of Sept 16th, 2015 in Chile. Our approach is based on simultaneously monitoring a series of different physical parameters from space: 1/ outgoing long-wavelength radiation (OLR, obtained from NOAA/AVHRR); 2/ electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC); 3/ geomagnetic field and plasma density variations (Swarm); and from the ground: 4/ GPS crustal deformation and 5/ ground-based magnetometers. The time and location of the main shock were prospectively alerted in advance using the Multi Sensor Networking Approach (MSNA-LAIC). We analyzed retrospectively several physical observations characterizing the state of the lithosphere, atmosphere and ionosphere several days before, during and after the M8.3 earthquake in Illapel. Our continuous satellite monitoring of long-wave (LW) data over Chile shows a rapid increase of emitted radiation at the end of August 2015, and an anomaly in the atmosphere was detected at 19 LT on Sept 1st, 2015, over the water near the epicenter. On Sept 2nd, Swarm magnetic measurements showed an anomalous signature over the epicentral region. GPS/TEC analysis revealed an anomaly on Sept 14th, and on the same day the degradation of the Equatorial Ionospheric Anomaly (EIA) and the disappearance of its crests, characteristic of pre-dawn and early morning hours (11 LT), were observed. On Sept 16th, co-seismic ionospheric signatures consistent with a defined circular acoustic-gravity wave and different shock-acoustic waves were also observed. GPS/TEC and deformation studies were computed from 48 GPS stations (2013-2015) of the National Seismological Center of Chile (CSN) GPS network. A transient deformation signal was observed a week in advance, correlated with ground-based magnetometer ULF signal fluctuations from the closest

  7. A test to evaluate the earthquake prediction algorithm, M8

    USGS Publications Warehouse

    Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.

    1992-01-01

    A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction:  1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by  investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm

  8. Pre-earthquake signatures in atmosphere/ionosphere and their potential for short-term earthquake forecasting. Case studies for 2015

    NASA Astrophysics Data System (ADS)

    Ouzounov, Dimitar; Pulinets, Sergey; Davidenko, Dmitry; Hernández-Pajares, Manuel; García-Rigo, Alberto; Petrrov, Leonid; Hatzopoulos, Nikolaos; Kafatos, Menas

    2016-04-01

    We are conducting validation studies of the temporal-spatial patterns of pre-earthquake signatures in the atmosphere and ionosphere associated with M>7 earthquakes in 2015. Our approach is based on the Lithosphere Atmosphere Ionosphere Coupling (LAIC) physical concept integrated with Multi-sensor-networking analysis (MSNA) of several non-correlated observations that can potentially yield predictive information. In this study we present two types of results: 1/ prospective testing of MSNA-LAIC for M7+ earthquakes in 2015, and 2/ retrospective analysis of temporal-spatial variations in the atmosphere and ionosphere several days before the M7.8 and M7.3 Nepal earthquakes and the M8.3 Chile earthquake. During the prospective test, 18 earthquakes of M>7 occurred worldwide, of which 15 were alerted in advance, with lead times between 2 and 30 days and with different levels of accuracy. The retrospective analysis included different physical parameters from space: outgoing long-wavelength radiation (OLR, obtained from NPOES and NASA/AQUA) at the top of the atmosphere, atmospheric chemical potential (ACP, obtained from NASA assimilation models), and electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). Concerning the M7.8 Nepal earthquake of April 25, a rapid increase of OLR reached its maximum on April 21-22. GPS/TEC data indicate a maximum value during the April 22-24 period. A strong negative TEC anomaly was detected in the crest of the EIA (Equatorial Ionospheric Anomaly) on April 21st and a strong positive one on April 24th, 2015. For the May 12 M7.3 aftershock, similar pre-earthquake patterns in OLR and GPS/TEC were observed. Concerning the M8.3 Chile earthquake of Sept 16, the strongest OLR transient feature was observed on Sept 12. GPS/TEC analysis data confirm abnormal values on Sept 14. Also on the same day the degradation of the EIA and the disappearance of its crests, characteristic of pre-dawn and early morning hours (11 LT), were observed. On Sept 16, co-seismic ionospheric signatures consistent with a defined circular

  9. Preliminary Estimation of Kappa Parameter in Croatia

    NASA Astrophysics Data System (ADS)

    Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Mario, Gazdek; Gülerce, Zeynep

    2017-12-01

    The spectral parameter kappa (κ) is used to describe the high-frequency decay ("crash syndrome") of the spectral amplitude. The purpose of this research is to estimate kappa for the first time in Croatia, based on small and moderate earthquakes. Recordings from seismological stations in Croatia of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, from the slope of the high-frequency part where the spectrum starts to decay rapidly towards the noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined by extrapolating the regression line to zero distance. The preliminary results for site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. the shear wave velocity in the upper 30 m from geophysical measurements, and with existing global shear wave velocity versus site kappa values. The spatial distribution of individual kappa values is compared with the azimuthal distribution of earthquake epicentres. These results are significant for two reasons: they extend knowledge of the attenuation of the near-surface crustal layers of the Dinarides, and they provide additional information on local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of the attenuation of peak horizontal and/or vertical acceleration in the Dinarides area, since information on local site conditions was not included in previous studies.
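
    The per-record kappa described here is usually obtained by fitting a straight line to the log-linear high-frequency decay of the S-wave acceleration spectrum, since A(f) is proportional to exp(-π κ f) in that band. The sketch below shows that fit; the frequency band limits are analyst choices and placeholders here, and this is not the study's code.

      # Minimal sketch of a spectral-decay kappa estimate:
      # fit ln A(f) = const - pi * kappa * f over the chosen high-frequency band.
      import numpy as np

      def estimate_kappa(freq, amp_spectrum, f1=10.0, f2=25.0):
          band = (freq >= f1) & (freq <= f2)
          slope, _ = np.polyfit(freq[band], np.log(amp_spectrum[band]), 1)
          return -slope / np.pi   # kappa in seconds

      # Site kappa is then the zero-distance intercept of a linear regression of
      # these per-record kappa values against distance.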

  10. Earthquakes; March-April 1975

    USGS Publications Warehouse

    Person, W.J.

    1975-01-01

    There were no major earthquakes (magnitude 7.0-7.9) in March or April; however, there were earthquake fatalities in Chile, Iran, and Venezuela, and approximately 35 earthquake-related injuries were reported around the world. In the United States a magnitude 6.0 earthquake struck the Idaho-Utah border region. Damage was estimated at about a million dollars. The shock was felt over a wide area and was the largest to hit the continental United States since the San Fernando earthquake of February 1971.

  11. Validation of Atmosphere/Ionosphere Signals Associated with Major Earthquakes by Multi-Instrument Space-Borne and Ground Observations

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Parrot, Michel; Liu, J. Y.; Yang, T. F.; Arellano-Baeza, Alonso; Kafatos, M.; Taylor, Patrick

    2012-01-01

    The latest catastrophic earthquake in Japan (March 2011) has renewed interest in the important question of the existence of pre-earthquake anomalous signals related to strong earthquakes. Recent studies have shown that there were precursory atmospheric/ionospheric signals observed in space associated with major earthquakes. The critical question, still widely debated in the scientific community, is whether such ionospheric/atmospheric signals systematically precede large earthquakes. To address this problem we have started to investigate anomalous ionospheric/atmospheric signals occurring prior to large earthquakes. We are studying the Earth's atmospheric electromagnetic environment by developing a multisensor model for monitoring the signals related to active tectonic faulting and earthquake processes. The integrated satellite and terrestrial framework (ISTF) is our method for validation and is based on a joint analysis of several physical and environmental parameters (thermal infrared radiation, electron concentration in the ionosphere, lineament analysis, radon/ion activities, air temperature and seismicity) that have been found to be associated with earthquakes. A physical link between these parameters and earthquake processes is provided by the recent version of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. Our experimental measurements have supported the new theoretical estimates of the LAIC hypothesis for an increase in the surface latent heat flux, integrated variability of outgoing long wave radiation (OLR) and anomalous variations of the total electron content (TEC) registered over the epicenters. Some major earthquakes are accompanied by an intensification of gas migration to the surface, by thermodynamic and hydrodynamic processes transforming latent heat into thermal energy, and by vertical transport of charged aerosols in the lower atmosphere. These processes lead to the generation of external electric currents in specific

  12. Protecting your family from earthquakes: The seven steps to earthquake safety

    USGS Publications Warehouse

    Developed by American Red Cross, Asian Pacific Fund

    2007-01-01

    This book is provided here because of the importance of preparing for earthquakes before they happen. Experts say it is very likely there will be a damaging San Francisco Bay Area earthquake in the next 30 years and that it will strike without warning. It may be hard to find the supplies and services we need after this earthquake. For example, hospitals may have more patients than they can treat, and grocery stores may be closed for weeks. You will need to provide for your family until help arrives. To keep our loved ones and our community safe, we must prepare now. Some of us come from places where earthquakes are also common. However, the dangers of earthquakes in our homelands may be very different than in the Bay Area. For example, many people in Asian countries die in major earthquakes when buildings collapse or from big sea waves called tsunami. In the Bay Area, the main danger is from objects inside buildings falling on people. Take action now to make sure your family will be safe in an earthquake. The first step is to read this book carefully and follow its advice. By making your home safer, you help make our community safer. Preparing for earthquakes is important, and together we can make sure our families and community are ready. English version p. 3-13 Chinese version p. 14-24 Vietnamese version p. 25-36 Korean version p. 37-48

  13. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico, to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Gulf of Tehuantepec and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric synthetic aperture radar (InSAR) analysis and time-series analysis of GPS station data constrain finite-fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than the slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show that the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  14. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy to seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
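
    For orientation, the sketch below evaluates one common double-corner source-spectrum parameterization that reproduces the behaviour described above (flat below f1, roughly f^-1 between f1 and f2, roughly f^-2 above f2). It is not necessarily the exact functional form fitted in this study; the numbers in the usage comment are placeholders.

      # Hedged sketch of a double-corner-frequency source spectrum.
      import numpy as np

      def double_corner_spectrum(f, m0, f1, f2):
          f = np.asarray(f, dtype=float)
          return m0 / (np.sqrt(1.0 + (f / f1) ** 2) * np.sqrt(1.0 + (f / f2) ** 2))

      # Example: freq = np.logspace(-3, 1, 200); spec = double_corner_spectrum(freq, 1e19, 0.02, 0.2)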

  15. Earthquake catalog for estimation of maximum earthquake magnitude, Central and Eastern United States: Part A, Prehistoric earthquakes

    USGS Publications Warehouse

    Wheeler, Russell L.

    2014-01-01

    Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.

  16. Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey

    USGS Publications Warehouse

    Parsons, T.

    2004-01-01

    New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M≥7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ≥ 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
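
    The Poisson figures quoted in this record follow the standard relation P = 1 - exp(-r t) for an annual rate r and exposure time t. The sketch below shows that relation; the rate in the comment is back-calculated from the quoted 38% / 30-year value purely for illustration and is not a parameter reported by the paper.

      # Sketch of the standard Poisson probability of at least one event in t years.
      import numpy as np

      def poisson_prob(rate_per_year, t_years):
          return 1.0 - np.exp(-rate_per_year * t_years)

      # Example: a ~38% 30-year probability corresponds to r = -ln(1 - 0.38) / 30 ≈ 0.016 per year.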

  17. Earthquake likelihood model testing

    USGS Publications Warehouse

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    A wide range of possible testing procedures exists. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event, the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests. In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses. To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanisms. Focal mechanisms should either be described as the inclination of the P-axis, the declination of the P-axis, and the inclination of the T-axis, or as strike, dip, and rake angles. Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models. Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account
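
    The gridded likelihood tests mentioned here typically score a forecast of expected rates per space-magnitude bin against observed counts with a Poisson joint log-likelihood. The Python sketch below shows that scoring idea in its simplest form (rates assumed strictly positive); it is a generic illustration, not the RELM testing-center code.

      # Minimal sketch: Poisson log-likelihood of observed counts given forecast rates.
      import numpy as np
      from scipy.special import gammaln

      def poisson_log_likelihood(forecast_rates, observed_counts):
          lam = np.asarray(forecast_rates, dtype=float)   # expected events per bin (> 0)
          n = np.asarray(observed_counts, dtype=float)    # observed events per bin
          return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

      # Competing forecasts defined on the same grid can then be ranked by this score
      # (higher is better), which is the spirit of the RELM comparisons.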

  18. The initial subevent of the 1994 Northridge, California, earthquake: Is earthquake size predictable?

    USGS Publications Warehouse

    Kilb, Debi; Gomberg, J.

    1999-01-01

    We examine the initial subevent (ISE) of the Mw 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

  19. Systematic detection and classification of earthquake clusters in Italy

    NASA Astrophysics Data System (ADS)

    Poli, P.; Ben-Zion, Y.; Zaliapin, I. V.

    2017-12-01

    We perform a systematic analysis of the spatio-temporal clustering of 2007-2017 earthquakes in Italy with magnitudes m>3. The study employs the nearest-neighbor approach of Zaliapin and Ben-Zion [2013a, 2013b] with basic data-driven parameters. The results indicate that seismicity in Italy (an extensional tectonic regime) is dominated by clustered events, with a smaller proportion of background events than in California. Evaluation of internal cluster properties allows the separation of swarm-like from burst-like seismicity. This classification highlights a strong geographical coherence of cluster properties. Swarm-like seismicity is dominant in regions characterized by relatively slow deformation with possibly elevated temperature and/or fluids (e.g. Alto Tiberina, Pollino), while burst-like seismicity is observed in crystalline tectonic regions (Alps and Calabrian Arc) and in central Italy, where moderate to large earthquakes are frequent (e.g. L'Aquila, Amatrice). To better assess the variation of seismicity style across Italy, we also perform a clustering analysis with region-specific parameters. This analysis highlights clear spatial changes in the threshold separating background and clustered seismicity, and permits better resolution of different clusters in specific geological regions. For example, a large proportion of repeaters is found in the Etna region, as expected for volcanically induced seismicity. Similar behavior is observed in the northern Apennines, where high pore pressure is associated with mantle degassing. The observed variations in earthquake properties highlight shortcomings of practices using large-scale average seismic properties, and point to connections between seismicity and local properties of the lithosphere. These observations help to improve understanding of the physics governing the occurrence of earthquakes in different regions.
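
    The nearest-neighbor approach referenced here assigns each event a proximity to its most likely parent, combining interevent time, epicentral distance, and parent magnitude. The sketch below computes that proximity in its commonly quoted form, eta = t * r^df * 10^(-b m); the b value, fractal dimension df, and minimum-distance floor are typical placeholder values, not the calibrated parameters of this study.

      # Sketch of a nearest-neighbor parent search (events sorted by time).
      import numpy as np

      def nearest_neighbor_parent(times, x_km, y_km, mags, b=1.0, df=1.6):
          t = np.asarray(times, float); x = np.asarray(x_km, float)
          y = np.asarray(y_km, float); m = np.asarray(mags, float)
          n = t.size
          parent = np.full(n, -1, dtype=int)
          eta_min = np.full(n, np.inf)
          for j in range(1, n):
              dt = t[j] - t[:j]                                  # interevent times (e.g. years)
              r = np.hypot(x[j] - x[:j], y[j] - y[:j])           # epicentral distances (km)
              eta = dt * np.maximum(r, 0.1) ** df * 10.0 ** (-b * m[:j])
              i = int(np.argmin(eta))
              parent[j], eta_min[j] = i, eta[i]
          return parent, eta_min

      # A threshold on eta_min separates clustered events from background events,
      # and linking each event to its parent builds the cluster trees analyzed above.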

  20. Improving the RST Approach for Earthquake Prone Areas Monitoring: Results of Correlation Analysis among Significant Sequences of TIR Anomalies and Earthquakes (M>4) occurred in Italy during 2004-2014

    NASA Astrophysics Data System (ADS)

    Tramutoli, V.; Coviello, I.; Filizzola, C.; Genzano, N.; Lisi, M.; Paciello, R.; Pergola, N.

    2015-12-01

    Looking toward the assessment of a multi-parametric system for dynamically updating seismic hazard estimates and short-term (days to weeks) earthquake forecasts, a preliminary step is to identify those parameters (chemical, physical, biological, etc.) whose anomalous variations can, to some extent, be associated with the complex process of preparation of a big earthquake. Among the different parameters, fluctuations of the Earth's thermally emitted radiation, as measured by satellite sensors operating in the Thermal Infra-Red (TIR) spectral range, have long been proposed as potential earthquake precursors. Since 2001, a general approach called Robust Satellite Techniques (RST) has been used to discriminate anomalous thermal signals possibly associated with seismic activity from normal fluctuations of the Earth's thermal emission related to other causes (e.g. meteorological) independent of earthquake occurrence. Thanks to its full exportability to different satellite packages, RST has been implemented on TIR images acquired by polar (e.g. NOAA-AVHRR, EOS-MODIS) and geostationary (e.g. MSG-SEVIRI, NOAA-GOES/W, GMS-5/VISSR) satellite sensors, in order to verify the presence (or absence) of TIR anomalies in the presence (or absence) of earthquakes (with M>4) in different seismogenic areas around the world (e.g. Italy, Turkey, Greece, California, Taiwan, etc.). In this paper, a refined RST data analysis approach and the RETIRA (Robust Estimator of TIR Anomalies) index were used to identify Significant Sequences of TIR Anomalies (SSTAs) during eleven years (May 2004 to December 2014) of TIR satellite records collected over Italy by the geostationary satellite sensor MSG-SEVIRI. On the basis of specific validation rules (mainly based on physical models and on results obtained by applying the RST approach to several earthquakes around the world), the level of space-time correlation among SSTAs and earthquakes (with M≥4
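
    Schematically, a RETIRA-style index normalizes the current scene-relative TIR signal at each pixel by the mean and standard deviation of a multi-year reference built from images acquired under homogeneous observation conditions. The sketch below is a simplified illustration of that normalization under stated assumptions, not the operational RST/RETIRA implementation.

      # Schematic RETIRA-style anomaly index (simplified, illustrative only).
      import numpy as np

      def retira_index(current_image, reference_stack):
          # reference_stack: (n_years, ny, nx) historical TIR scenes for the same
          # month/time slot; current_image: (ny, nx) scene to be tested.
          dT = current_image - np.nanmean(current_image)   # scene-relative signal
          ref_dT = reference_stack - np.nanmean(reference_stack, axis=(1, 2), keepdims=True)
          mu = np.nanmean(ref_dT, axis=0)                  # pixel-wise reference mean
          sigma = np.nanstd(ref_dT, axis=0)                # pixel-wise reference spread
          return (dT - mu) / sigma                         # large positive values are candidate anomalies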

  1. Tweeting Earthquakes using TensorFlow

    NASA Astrophysics Data System (ADS)

    Casarotti, E.; Comunello, F.; Magnoni, F.

    2016-12-01

    The use of social media is emerging as a powerful tool for disseminating trusted information about earthquakes. Since 2009, the Twitter account @INGVterremoti has provided constant and timely details about M2+ seismic events detected by the Italian National Seismic Network, directly connected with the seismologists on duty at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). It currently updates more than 150,000 followers. Nevertheless, since it publishes only manually revised seismic parameters, its timing (approximately 10 to 20 minutes after an event) has come under evaluation. Undeniably, the mobile internet, social network sites and Twitter in particular require a more rapid and "real-time" reaction. During the last 36 months, INGV tested tweeting the automatic detections of M3+ earthquakes, studying the reliability of the information both in terms of seismological accuracy and from the point of view of communication and social research. A set of quality parameters (i.e. number of seismic stations, gap, relative error of the location) was identified to reduce false alarms and the uncertainty of the automatic detections. We present an experiment to further improve the reliability of this process using TensorFlow™ (an open source software library originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization).
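
    Purely as an illustration of the kind of experiment described, the sketch below builds a small TensorFlow/Keras classifier that maps the listed quality parameters (number of stations, gap, relative location error) to a publish / do-not-publish decision. The feature set, architecture, and labels are assumptions for illustration, not INGV's actual model.

      # Illustrative sketch (assumed setup, not the INGV experiment).
      import numpy as np
      import tensorflow as tf

      model = tf.keras.Sequential([
          # input features: [n_stations, gap_deg, relative_location_error]
          tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),
          tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the detection is reliable
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

      # X: quality parameters of past automatic detections;
      # y: 1 if the later manual revision confirmed the detection, else 0.
      # model.fit(X, y, epochs=50, validation_split=0.2)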

  2. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica

  3. Identification of Deep Earthquakes

    DTIC Science & Technology

    2010-09-01

    discriminants that will reliably separate small, crustal earthquakes (magnitudes less than about 4 and depths less than about 40 to 50 km) from small...characteristics on discrimination plots designed to separate nuclear explosions from crustal earthquakes. Thus, reliably flagging these small, deep events is...Further, reliably identifying subcrustal earthquakes will allow us to eliminate deep events (previously misidentified as crustal earthquakes) from

  4. Some facts about aftershocks to large earthquakes in California

    USGS Publications Warehouse

    Jones, Lucile M.; Reasenberg, Paul A.

    1996-01-01

    Earthquakes occur in clusters. After one earthquake happens, we usually see others at nearby (or identical) locations. To talk about this phenomenon, seismologists coined three terms: foreshock, mainshock, and aftershock. In any cluster of earthquakes, the one with the largest magnitude is called the mainshock; earthquakes that occur before the mainshock are called foreshocks, while those that occur after the mainshock are called aftershocks. A mainshock will be redefined as a foreshock if a subsequent event in the cluster has a larger magnitude. Aftershock sequences follow predictable patterns. That is, a sequence of aftershocks follows certain global patterns as a group, but the individual earthquakes comprising the group are random and unpredictable. This relationship between the pattern of a group and the randomness (stochastic nature) of the individuals has a close parallel in actuarial statistics. We can describe the pattern that aftershock sequences tend to follow with well-constrained equations. However, we must keep in mind that the actual aftershocks are only probabilistically described by these equations. Once the parameters in these equations have been estimated, we can determine the probability of aftershocks occurring in various space, time and magnitude ranges as described below. Clustering of earthquakes usually occurs near the location of the mainshock. The stress on the mainshock's fault changes drastically during the mainshock and that fault produces most of the aftershocks. This causes a change in the regional stress, the size of which decreases rapidly with distance from the mainshock. Sometimes the change in stress caused by the mainshock is great enough to trigger aftershocks on other, nearby faults. While there is no hard "cutoff" distance beyond which an earthquake is totally incapable of triggering an aftershock, the vast majority of aftershocks are located close to the mainshock. As a rule of thumb, we consider earthquakes to be
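
    A minimal sketch of the kind of rate equations referred to above, in the Reasenberg-Jones form that combines the modified Omori law with the Gutenberg-Richter relation; the parameter values below are illustrative only, not fitted to any particular sequence:

      import numpy as np

      # Reasenberg-Jones style aftershock model (illustrative parameters):
      #   rate(t, M >= m) = 10 ** (a + b * (M_main - m)) * (t + c) ** (-p)
      a, b, c, p = -1.7, 0.9, 0.05, 1.1   # illustrative values only
      M_main = 6.0

      def aftershock_rate(t_days, m):
          """Expected number of aftershocks per day with magnitude >= m at time t (days)."""
          return 10.0 ** (a + b * (M_main - m)) * (t_days + c) ** (-p)

      def expected_count(m, t1, t2, n_steps=10000):
          """Expected number of M >= m aftershocks between t1 and t2 days (numerical integral)."""
          t = np.linspace(t1, t2, n_steps)
          return np.trapz(aftershock_rate(t, m), t)

      n = expected_count(m=5.0, t1=0.0, t2=7.0)
      print(f"expected M>=5 aftershocks in first week: {n:.2f}")
      print(f"probability of at least one: {1.0 - np.exp(-n):.2f}")  # Poisson assumption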

  5. Stress Drop and Depth Controls on Ground Motion From Induced Earthquakes

    NASA Astrophysics Data System (ADS)

    Baltay, A.; Rubinstein, J. L.; Terra, F. M.; Hanks, T. C.; Herrmann, R. B.

    2015-12-01

    Induced earthquakes in the central United States pose a risk to local populations, but there is not yet agreement on how to portray their hazard. A large source of uncertainty in the hazard arises from ground motion prediction, which depends on the magnitude and distance of the causative earthquake. However, ground motion models for induced earthquakes may be very different from models previously developed for either the eastern or western United States. A key question is whether ground motions from induced earthquakes are similar to those from natural earthquakes, yet there is little history of natural events in the same region with which to compare the induced ground motions. To address these problems, we explore how earthquake source properties, such as stress drop or depth, affect the recorded ground motion of induced earthquakes. Because stress drop typically increases with depth, ground motion prediction equations model shallower events as having smaller ground motions at the same absolute hypocentral distance to the station. Induced earthquakes tend to occur at shallower depths than natural eastern US earthquakes, and may also exhibit lower stress drops, which raises the question of how these two parameters interact to control ground motion. Can the ground motions of induced earthquakes simply be understood by scaling our known source-ground motion relations to account for the shallow depth or potentially smaller stress drops of these induced earthquakes, or is there an inherently different mechanism in play for these induced earthquakes? We study peak ground-motion velocity (PGV) and acceleration (PGA) from induced earthquakes in Oklahoma and Kansas, recorded by USGS networks at source-station distances of less than 20 km, in order to model the source effects. We compare these records to those in both the NGA-West2 database (primarily from California) as well as NGA-East, which covers the central and eastern United States and Canada
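
    One way to reason about the stress-drop effect is through the Brune omega-squared source model, in which the corner frequency scales with stress drop; the sketch below (illustrative values for shear-wave speed and magnitude, not tied to this study) shows how a lower stress drop lowers the corner frequency and reduces high-frequency spectral amplitude for the same seismic moment:

      import numpy as np

      # Brune (1970) omega-squared source model (SI units):
      #   fc = 0.49 * beta * (delta_sigma / M0) ** (1/3)
      #   |A(f)| proportional to M0 / (1 + (f / fc) ** 2)
      beta = 3500.0                     # shear-wave speed near the source (m/s), assumed
      M0 = 10.0 ** (1.5 * 4.0 + 9.1)    # seismic moment of an Mw 4.0 event (N*m)

      def corner_frequency(delta_sigma_pa):
          return 0.49 * beta * (delta_sigma_pa / M0) ** (1.0 / 3.0)

      def source_spectrum(f, delta_sigma_pa):
          fc = corner_frequency(delta_sigma_pa)
          return M0 / (1.0 + (f / fc) ** 2)

      f = 10.0  # Hz, roughly the band controlling PGA
      for dsigma in (1e6, 10e6):        # 1 MPa vs 10 MPa stress drop
          fc = corner_frequency(dsigma)
          print(f"stress drop {dsigma/1e6:4.0f} MPa: fc = {fc:5.2f} Hz, "
                f"|A(10 Hz)| relative = {source_spectrum(f, dsigma)/M0:.3f}")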

  6. Dynamics of folding: Impact of fault bend folds on earthquake cycles

    NASA Astrophysics Data System (ADS)

    Sathiakumar, S.; Barbot, S.; Hubbard, J.

    2017-12-01

    Earthquakes in subduction zones and subaerial convergent margins are some of the largest in the world. So far, forecasts of future earthquakes have primarily relied on assessing past earthquakes to look for seismic gaps and slip deficits. However, the roles of fault geometry and off-fault plasticity are typically overlooked. We use structural geology (fault-bend folding theory) to inform fault modeling in order to better understand how deformation is accommodated on the geological time scale and through the earthquake cycle. Fault bends in megathrusts, like those proposed for the Nepal Himalaya, will induce folding of the upper plate. This introduces changes in the slip rate on different fault segments, and therefore in the loading rate at the plate interface, profoundly affecting the pattern of earthquake cycles. We develop numerical simulations of slip evolution under rate-and-state friction and show that this effect introduces segmentation of the earthquake cycle. In crustal dynamics, it is challenging to describe the dynamics of fault-bend folds, because the deformation is accommodated by small amounts of slip parallel to bedding planes ("flexural slip"), localized on axial surfaces, i.e. folding axes pinned to fault bends. We use dislocation theory to describe the dynamics of folding along these axial surfaces, using analytic solutions that provide displacement and stress kernels to simulate the temporal evolution of folding and assess the effects of folding on earthquake cycles. Studies of the 2015 Gorkha earthquake, Nepal, have shown that fault geometry can affect earthquake segmentation. Here, we show that in addition to the fault geometry, the actual geology of the rocks in the hanging wall of the fault also affects critical parameters, including the loading rate on parts of the fault, based on fault-bend folding theory. Because loading velocity controls the recurrence time of earthquakes, these two effects together are likely to have a strong impact on the
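
    The rate-and-state framework invoked above can be written compactly; the sketch below integrates the Dieterich aging law for a velocity step on a single slider, with illustrative parameter values, simply to show how the state variable and friction coefficient evolve (it is not the simulation scheme used in the study):

      import numpy as np

      # Rate-and-state friction (Dieterich aging law), illustrative parameters:
      #   mu = mu0 + a * ln(V / V0) + b * ln(V0 * theta / Dc)
      #   d(theta)/dt = 1 - V * theta / Dc
      mu0, a, b = 0.6, 0.010, 0.015
      V0, Dc = 1e-6, 1e-5              # reference slip rate (m/s) and characteristic slip (m)

      def friction(V, theta):
          return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

      # Velocity step from V0 to 10*V0, starting at steady state (theta = Dc / V0).
      V = 10 * V0
      theta = Dc / V0
      dt = 1e-2
      for step in range(5000):
          theta += dt * (1.0 - V * theta / Dc)   # explicit Euler update of the state variable
      print(f"friction just after the step : {friction(V, Dc / V0):.4f}")
      print(f"friction at new steady state : {friction(V, theta):.4f}")  # ~ mu0 + (a-b)*ln(10)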

  7. Modeling earthquake rate changes in Oklahoma and Arkansas: possible signatures of induced seismicity

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2013-01-01

    The rate of ML≥3 earthquakes in the central and eastern United States increased beginning in 2009, particularly in Oklahoma and central Arkansas, where fluid injection has occurred. We find evidence that suggests these rate increases are man‐made by examining the rate changes in a catalog of ML≥3 earthquakes in Oklahoma, which had a low background seismicity rate before 2009, as well as rate changes in a catalog of ML≥2.2 earthquakes in central Arkansas, which had a history of earthquake swarms prior to the start of injection in 2009. In both cases, stochastic epidemic‐type aftershock sequence models and statistical tests demonstrate that the earthquake rate change is statistically significant, and both the background rate of independent earthquakes and the aftershock productivity must increase in 2009 to explain the observed increase in seismicity. This suggests that a significant change in the underlying triggering process occurred. Both parameters vary, even when comparing natural to potentially induced swarms in Arkansas, which suggests that changes in both the background rate and the aftershock productivity may provide a way to distinguish man‐made from natural earthquake rate changes. In Arkansas we also compare earthquake and injection well locations, finding that earthquakes within 6 km of an active injection well tend to occur closer together than those that occur before, after, or far from active injection. Thus, like a change in productivity, a change in interevent distance distribution may also be an indicator of induced seismicity.
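
    For reference, a minimal sketch of the ETAS conditional intensity that underlies such modeling; the parameter values are illustrative assumptions, whereas the study fits them to the Oklahoma and Arkansas catalogs:

      import numpy as np

      # ETAS conditional intensity (illustrative parameters):
      #   lambda(t) = mu + sum over t_i < t of K * exp(alpha*(M_i - Mc)) / (t - t_i + c)**p
      mu, K, alpha, c, p, Mc = 0.05, 0.02, 1.0, 0.01, 1.1, 3.0   # assumed values

      def etas_rate(t, event_times, event_mags):
          """Expected earthquake rate (events/day) at time t (days), given past events."""
          past = event_times < t
          dt = t - event_times[past]
          trig = K * np.exp(alpha * (event_mags[past] - Mc)) / (dt + c) ** p
          return mu + trig.sum()

      # Toy catalog: three M>=3 events in the first ten days.
      times = np.array([1.0, 4.5, 9.0])
      mags = np.array([3.2, 4.1, 3.6])
      print(f"rate at day 10: {etas_rate(10.0, times, mags):.3f} events/day")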

  8. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    PubMed

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  9. Low frequency (<1Hz) Large Magnitude Earthquake Simulations in Central Mexico: the 1985 Michoacan Earthquake and Hypothetical Rupture in the Guerrero Gap

    NASA Astrophysics Data System (ADS)

    Ramirez Guzman, L.; Contreras Ruíz Esparza, M.; Aguirre Gonzalez, J. J.; Alcántara Noasco, L.; Quiroz Ramírez, A.

    2012-12-01

    kinematic rupture for a Mw=8.4 earthquake with the method of Liu et al. (2006) for the Guerrero Gap and computed the ground motion. We summarized our results by presenting ground motion parameter maps and the potential population and infrastructure exposure to large Modified Mercalli Intensities (>VI).

  10. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation for the entire globe using a spectral-element method. In order to ensure the inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on intelligent ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute the global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
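
    A toy sketch of the window-classification idea, using logistic regression on a few hypothetical window features (cross-correlation coefficient, time shift, amplitude ratio); the real framework would use features and labels derived from the adjoint-tomography workflow itself:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Toy classifier separating "usable" from "unusable" misfit windows based on
      # hypothetical features: [cross-correlation, |time shift| (s), log amplitude ratio].
      rng = np.random.default_rng(1)

      def make_windows(n, usable):
          cc = rng.uniform(0.8, 1.0, n) if usable else rng.uniform(0.0, 0.7, n)
          shift = rng.uniform(0.0, 3.0, n) if usable else rng.uniform(2.0, 15.0, n)
          amp = rng.normal(0.0, 0.3, n) if usable else rng.normal(0.0, 1.5, n)
          return np.column_stack([cc, shift, amp])

      X = np.vstack([make_windows(500, True), make_windows(500, False)])
      y = np.array([1] * 500 + [0] * 500)          # 1 = usable, 0 = unusable

      clf = LogisticRegression(max_iter=1000).fit(X, y)
      new_window = np.array([[0.92, 1.4, 0.1]])    # high correlation, small shift
      print("usable probability:", clf.predict_proba(new_window)[0, 1])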

  11. In-situ investigation of relations between slow slip events, repeaters and earthquake nucleation

    NASA Astrophysics Data System (ADS)

    Marty, S. B.; Schubnel, A.; Gardonio, B.; Bhat, H. S.; Fukuyama, E.

    2017-12-01

    Recent observations have shown that, in subduction zones, imperceptible slip, known as "slow slip events", can trigger powerful earthquakes and may be linked to the onset of swarms of repeaters. With the aim of investigating the relations between repeaters, slow slip events and earthquake nucleation, we have conducted stick-slip experiments on saw-cut Indian gabbro under upper crustal stress conditions (up to 180 MPa confining pressure). Over the past decades, the reproduction of micro-earthquakes in the laboratory has made it possible to better understand and constrain the physical parameters at the origin of the seismic source. Using a new set of calibrated piezoelectric acoustic emission sensors and high-frequency dynamic strain gages, we are now able to measure a large number of physical parameters during stick-slip motion, such as the rupture velocity, the slip velocity, the dynamic stress drop and the absolute magnitudes and sizes of foreshock acoustic emissions. Preliminary observations systematically show quasi-static slip accelerations, the onset of repeaters, and an increase in the acoustic emission rate before failure. In the near future, we will further investigate the links between slow slip events, repeaters, stress build-up and earthquakes, using our high-frequency acoustic and strain recordings and applying template matching analysis.
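
    A minimal sketch of the template-matching analysis mentioned at the end, via sliding-window normalized cross-correlation on a synthetic trace; real applications would operate on the high-frequency acoustic-emission or seismic recordings:

      import numpy as np

      def normalized_xcorr(trace, template):
          """Sliding normalized cross-correlation of a template against a longer trace."""
          n = len(template)
          t = (template - template.mean()) / template.std()
          out = np.empty(len(trace) - n + 1)
          for i in range(len(out)):
              w = trace[i:i + n]
              out[i] = np.dot(t, (w - w.mean()) / (w.std() + 1e-12)) / n
          return out

      # Synthetic example: hide a small copy of the template inside noise.
      rng = np.random.default_rng(2)
      template = np.sin(2 * np.pi * np.linspace(0, 4, 200)) * np.hanning(200)
      trace = rng.normal(0, 1.0, 5000)
      trace[3000:3200] += 1.5 * template          # buried "repeater"

      cc = normalized_xcorr(trace, template)
      print("best match index:", int(np.argmax(cc)), "cc =", round(float(cc.max()), 2))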

  12. A Cooperative Test of the Load/Unload Response Ratio Proposed Method of Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Trotta, J. E.; Tullis, T. E.

    2004-12-01

    The Load/Unload Response Ratio (LURR) method is a proposed technique to predict earthquakes that was first put forward by Yin in 1984 (Yin, 1987). LURR is based on the idea that when a region is near failure, there is an increase in the rate of seismic activity during loading of the tidal cycle relative to the rate of seismic activity during unloading of the tidal cycle. Typically, the numerator of the LURR ratio is the number, or the sum of some measure of the size (e.g. Benioff strain), of small earthquakes that occur during loading of the tidal cycle, whereas the denominator is the same as the numerator except that it is calculated during unloading. The LURR method suggests that this ratio should increase in the months to a year preceding a large earthquake. Regions near failure have tectonic stresses nearly high enough for a large earthquake to occur; thus it seems more likely that smaller earthquakes in the region would be triggered when the tidal stresses add to the tectonic ones. However, until recently even the most careful studies suggested that the effect of tidal stresses on earthquake occurrence is very small and difficult to detect. New studies have shown that there is a tidal triggering effect on shallow thrust faults in areas with strong tides from ocean loading (Tanaka et al., 2002; Cochran et al., 2004). We have been conducting an independent test of the LURR method, since there would be important scientific and social implications if the LURR method were proven to be a robust method of earthquake prediction. Smith and Sammis (2003) also undertook a similar study. Following both the parameters of Yin et al. (2000) and the somewhat different ones of Smith and Sammis (2003), we have repeated calculations of LURR for the Northridge and Loma Prieta earthquakes in California. Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results. A general agreement was made at the recent ACES Workshop in China between research
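
    A sketch of how the LURR ratio might be computed from a catalog, assuming each event has already been assigned to the loading or unloading half of the tidal cycle and using Benioff strain (the square root of radiated energy from the Gutenberg-Richter energy-magnitude relation) as the size measure; the phase assignment itself requires a tidal stress model and is not shown:

      import numpy as np

      def benioff_strain(magnitude):
          """Square root of radiated energy (J), using log10 E = 1.5*M + 4.8."""
          return np.sqrt(10.0 ** (1.5 * magnitude + 4.8))

      def lurr(mags, loading_flags):
          """Load/Unload Response Ratio: Benioff strain released during tidal loading
          divided by that released during unloading. loading_flags[i] is True if event i
          occurred during the loading half of the tidal cycle (assignment not shown)."""
          mags = np.asarray(mags, dtype=float)
          loading_flags = np.asarray(loading_flags, dtype=bool)
          s = benioff_strain(mags)
          return s[loading_flags].sum() / s[~loading_flags].sum()

      # Toy catalog: slightly more / larger events during loading -> LURR > 1.
      mags = [2.1, 2.4, 3.0, 2.2, 2.8, 2.0, 2.3]
      loading = [True, True, True, False, True, False, False]
      print(f"LURR = {lurr(mags, loading):.2f}")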

  13. Fractal analysis of earthquake swarms of Vogtland/NW-Bohemia intraplate seismicity

    NASA Astrophysics Data System (ADS)

    Mittag, Reinhard J.

    2003-03-01

    The special type of intraplate microseismicity with swarm-like occurrence of earthquakes within the Vogtland/NW-Bohemian region is analysed to reveal the nature and the origin of the seismogenic regime. The long-term data set of continuous seismic monitoring since 1962, including more than 26,000 events spanning a range of about 5 units of local magnitude, provides a unique database for statistical investigations. Most earthquakes occur in narrow hypocentral volumes (clusters) within the lower part of the upper crust, but single events outside these spatial clusters are also observed. The temporal distribution of events is concentrated in clusters (swarms), which last from a few days up to a few months, depending on intensity. Since 1962, three strong swarms have occurred (1962, 1985/86, 2000), including two seismic cycles. Spatial clusters are distributed along a fault system of regional extent (Leipzig-Regensburger Störung), which is supposed to act as the joint tectonic fracture zone for the whole seismogenic region. Seismicity is analysed by fractal analysis, suggesting a unifractal behaviour of seismicity and a uniform character of the seismotectonic regime for the whole region. A tendency towards decreasing fractal dimension values is observed for the temporal distribution of earthquakes, indicating an increasing degree of temporal clustering from swarm to swarm. Following the idea of earthquake triggering by magma intrusions and related fluid and gas release into the tectonically pre-stressed parts of the crust, a steadily increasing intensity of intrusion and/or fluid and gas release might account for that observation. Additionally, seismic parameters for Vogtland/NW-Bohemia intraplate seismicity are compared with a comparable data set of mining-induced seismicity from a nearby mine at Lubin/Poland and with synthetic data sets to evaluate parameter estimation. Due to the different seismogenic regimes of tectonic and induced seismicity, significant differences between b-values and temporal
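
    A minimal sketch of the correlation-dimension estimate commonly used in such fractal analyses (the Grassberger-Procaccia correlation integral), applied here to synthetic epicenters; the distance range and point cloud are illustrative assumptions:

      import numpy as np
      from scipy.spatial.distance import pdist

      def correlation_dimension(points, r_values):
          """Estimate the correlation dimension D2 as the slope of log C(r) vs log r,
          where C(r) is the fraction of point pairs separated by less than r."""
          d = pdist(points)                            # all pairwise distances
          C = np.array([(d < r).mean() for r in r_values])
          slope, _ = np.polyfit(np.log(r_values), np.log(C), 1)
          return slope

      # Synthetic epicenters scattered along a narrow band (roughly 1-D at large scale).
      rng = np.random.default_rng(3)
      x = rng.uniform(0, 100, 2000)
      y = rng.normal(0, 1.0, 2000)
      pts = np.column_stack([x, y])                    # km
      r = np.logspace(0.5, 1.5, 10)                    # 3 to 30 km
      print(f"D2 ~ {correlation_dimension(pts, r):.2f}")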

  14. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses in insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the current building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys which have been carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include: · Basic earthquake data and its sources · Details of the survey location and intensity and other ground motion observations or assignments at that location · Building and damage level classification, and tabulated damage survey results · Photos showing typical examples of damage. In future planned extensions of the database information on human

  15. The 2007 Boso Slow Slip Event and the associated earthquake swarm

    NASA Astrophysics Data System (ADS)

    Sekine, S.; Hirose, H.; Kimura, H.; Obara, K.

    2007-12-01

    In the Boso Peninsula, located in the southeast of the Japanese mainland, slow slip events (SSEs) have been observed every 6-7 years by the GEONET GPS array operated by the Geographical Survey Institute of Japan and by the NIED tiltmeter network (Ozawa et al., 2003; NIED 2003). A unique characteristic of the Boso SSEs is that earthquake swarm activity also occurs in association with the slow slip. The latest SSE and earthquake swarm took place in August 2007. On 13 August, an earthquake swarm began east off the Boso Peninsula and slow tilt deformation also started. The earthquake sources migrated in the NNE direction, which is the direction of the relative plate motion of the subducting Philippine Sea Plate with respect to the overriding plate. The largest earthquake in this episode (Mw 5.3) occurred on the 16th and the second largest (Mw 5.2) on the 18th. Most of the larger earthquakes show low-angle thrust-type focal mechanisms that are consistent with the plate motion and the geometry of the subducting plate interface. The tilt changes appear to stop on the 17th and the swarm activity rapidly decreases after the 19th. A maximum tilt change of 0.8 microradians, with northwest-down tilting, was observed at KT2H, the station nearest to the source region. Based on the tilt records around the Boso Peninsula, we estimate a fault model for the SSE using a genetic algorithm inversion for the non-linear parameters and the weighted least squares method for the linear parameters. As a result, the estimated moment magnitude and amount of slip are 6.4 and 10 cm, respectively. The size and location of the SSE are similar to those of the previous episodes. The estimated fault plane is very consistent with the configuration of the plate interface (Kimura et al., 2006). Most of the earthquakes are located on the deeper edge of the estimated SSE fault area. The coincidence of the swarm and the SSE suggests a causal relation between them and may help us to

  16. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified catalogue compiled by the Japan Meteorological Agency (JMA) as the authorized catalogue. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the official CSEP suite of tests of forecast performance. In this presentation, we show the results of the experiment for the 3-month testing class over 5 rounds. The HIST-ETAS7pa, MARFS and RI10K models, corresponding to the All Japan, Mainland and Kanto regions respectively, showed the best scores based on the total log-likelihood. It is also found that time dependency of model parameters is not an effective factor for passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to forecast well enough to pass the consistency test, owing to multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations during all rounds, which resulted in rejections of the consistency test because of overestimation. In the Kanto region, the pass ratios of consistency tests for each model exceeded 80%, which was associated with well-balanced forecasting of event
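
    A minimal sketch of the kind of likelihood scoring used in such testing, assuming a gridded forecast of expected event counts and the observed counts per space-magnitude bin; the official CSEP suite includes additional consistency tests (N-, L-, S- and M-tests) not reproduced here:

      import numpy as np
      from scipy.special import gammaln

      def poisson_joint_log_likelihood(forecast_rates, observed_counts):
          """Joint log-likelihood of observed bin counts under independent Poisson rates."""
          lam = np.asarray(forecast_rates, dtype=float)
          n = np.asarray(observed_counts, dtype=float)
          return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

      # Toy 3-month forecast over five space-magnitude bins.
      forecast = [0.2, 1.5, 0.7, 0.05, 3.0]    # expected numbers of events per bin
      observed = [0,   2,   1,   0,    2  ]    # events that actually occurred
      print(f"log-likelihood = {poisson_joint_log_likelihood(forecast, observed):.2f}")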

  17. Premonitory slip and tidal triggering of earthquakes

    USGS Publications Warehouse

    Lockner, D.A.; Beeler, N.M.

    1999-01-01

    We have conducted a series of laboratory simulations of earthquakes using granite cylinders containing precut bare fault surfaces at 50 MPa confining pressure. Axial shortening rates between 10^-4 and 10^-6 mm/s were imposed to simulate tectonic loading. Average loading rate was then modulated by the addition of a small-amplitude sine wave to simulate periodic loading due to Earth tides or other sources. The period of the modulating signal ranged from 10 to 10,000 s. For each combination of amplitude and period of the modulating signal, multiple stick-slip events were recorded to determine the degree of correlation between the timing of simulated earthquakes and the imposed periodic loading function. Over the range of parameters studied, the degree of correlation of earthquakes was most sensitive to the amplitude of the periodic loading, with weaker dependence on the period of oscillations and the average loading rate. Accelerating premonitory slip was observed in these experiments and is a controlling factor in determining the conditions under which correlated events occur. In fact, some form of delayed failure is necessary to produce the observed correlations between simulated earthquake timing and characteristics of the periodic loading function. The transition from strongly correlated to weakly correlated model earthquake populations occurred when the amplitude of the periodic loading was approximately 0.05 to 0.1 MPa shear stress (0.03 to 0.06 MPa Coulomb failure function). Lower-amplitude oscillations produced progressively lower correlation levels. Correlations between static stress increases and earthquake aftershocks are found to degrade at similar stress levels. Typical stress variations due to Earth tides are only 0.001 to 0.004 MPa, so that the lack of correlation between Earth tides and earthquakes is also consistent with our findings. A simple extrapolation of our results suggests that approximately 1% of midcrustal earthquakes should be correlated with

  18. The relationship between earthquake exposure and posttraumatic stress disorder in 2013 Lushan earthquake

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Lu, Yi

    2018-01-01

    The objective of this study is to explore the relationship between earthquake exposure and the incidence of PTSD. A stratified random sample survey was conducted to collect data along the Longmenshan thrust fault three years after the Lushan earthquake. We used the Children's Revised Impact of Event Scale (CRIES-13) and the Earthquake Experience Scale. Subjects in this study included 3944 school student survivors in eleven local schools. The prevalence of probable PTSD was relatively higher among those who were trapped in the earthquake, were injured in the earthquake, or had relatives who died in the earthquake. It is concluded that researchers need to pay more attention to children and adolescents. The government should pay more attention to these people and provide more economic support.

  19. Crowdsourced earthquake early warning.

    PubMed

    Minson, Sarah E; Brooks, Benjamin A; Glennie, Craig L; Murray, Jessica R; Langbein, John O; Owen, Susan E; Heaton, Thomas H; Iannucci, Robert A; Hauser, Darren L

    2015-04-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an M w (moment magnitude) 7 earthquake on California's Hayward fault, and real data from the M w 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  20. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  1. Earthquakes, November-December 1992

    USGS Publications Warehouse

    Person, W.J.

    1993-01-01

    There were two major earthquakes (7.0≤M<8.0) during the last two months of the year, a magnitude 7.5 earthquake on December 12 in the Flores region, Indonesia, and a magnitude 7.0 earthquake on December 20 in the Banda Sea. Earthquakes caused fatalities in China and Indonesia. The greatest number of deaths (2,500) for the year occurred in Indonesia. In Switzerland, six people were killed by an accidental explosion recorded by seismographs. In the United States, a magnitude 5.3 earthquake caused slight damage at Big Bear in southern California.

  2. Crowdsourced earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  3. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These
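
    A sketch of the kind of query such a web service enables, written against the generic FDSN event-service interface; the exact NCEDC endpoint URL, supported parameters, and output formats below are assumptions and should be checked against the NCEDC documentation:

      import requests

      # Hypothetical FDSN-style event query; endpoint URL and parameter support are assumptions.
      BASE = "https://service.ncedc.org/fdsnws/event/1/query"   # assumed endpoint

      params = {
          "starttime": "2012-01-01",
          "endtime": "2012-06-30",
          "minmagnitude": 3.5,
          "format": "text",          # pipe-delimited catalog rows instead of QuakeML
      }

      resp = requests.get(BASE, params=params, timeout=30)
      resp.raise_for_status()
      for line in resp.text.splitlines():
          if not line.startswith("#"):
              print(line)            # EventID|Time|Latitude|Longitude|Depth|...|Magnitude|...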

  4. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for investigating these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and the viscosity of the middle-lower crust and upper mantle on the model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
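
    For reference, the Coulomb stress change invoked above is commonly written as ΔCFS = Δτ + μ′Δσn (shear stress change resolved in the slip direction plus effective friction times the fault-normal stress change, tension positive); a one-function sketch with an assumed effective friction coefficient:

      def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
          """Coulomb failure stress change (MPa): d_tau is the shear stress change resolved
          in the slip direction, d_sigma_n the fault-normal stress change (tension positive),
          mu_eff the effective friction coefficient (illustrative value)."""
          return d_tau + mu_eff * d_sigma_n

      # Example: +0.05 MPa shear loading and -0.02 MPa clamping on a neighbouring fault.
      print(f"dCFS = {coulomb_stress_change(0.05, -0.02):.3f} MPa")   # positive -> closer to failure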

  5. A new multi-parametric climatological approach to the study of the earthquake preparatory phase: the 2016 Amatrice-Norcia (Central Italy) seismic sequence

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; De Santis, Angelo; Marchetti, Dedalo; Cianchini, Gianfranco

    2017-04-01

    Based on observations prior to earthquakes, recent theoretical considerations suggest that some geophysical quantities reveal abnormal changes that anticipate moderate and strong earthquakes within a defined spatial area (the so-called Dobrovolsky area), according to a Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. One possible pre-earthquake effect is the appearance of climatological anomalies in the epicentral region, weeks to months before major earthquakes. An ESA-funded project, SAFE (Swarm for Earthquake study), was dedicated to investigating LAIC effects from the ground up to satellite altitude. In this work, the period of two months preceding the Amatrice-Norcia (Central Italy) earthquake sequence, which started on 24 August 2016 with an M6 earthquake and some months later produced two other major shocks, an M5.9 on 26 October and an M6.5 on 30 October, was analyzed in terms of some climatological parameters. In particular, starting about two months before the first major shock, we applied a new approach based on comparing the thirty-seven-year time series, at the same time of the season, of three land/atmospheric parameters, i.e. skin temperature (skt), total column water vapour (tcwv) and total column ozone (tco3), collected from the European Centre for Medium-Range Weather Forecasts (ECMWF), with the year in which the earthquake sequence occurred. The originality of the method lies in the way the complete time series is reduced, whereby the possible effect of global warming is also properly removed. A confutation/confirmation analysis was undertaken in which these parameters were successfully analyzed in the same months but for two seismically "calm" years, when significant seismicity was not present, in order to validate the technique. We also extended the analysis to all available years to construct a confusion matrix comparing the climatological anomalies with the real seismicity. This latter analysis has confirmed the

  6. Broadband Analysis of the Energetics of Earthquakes and Tsunamis in the Sunda Forearc from 1987-2012

    NASA Astrophysics Data System (ADS)

    Choy, G. L.; Kirby, S. H.; Hayes, G. P.

    2013-12-01

    In the eighteen years before the 2004 Sumatra Mw 9.1 earthquake, the forearc off Sumatra experienced only one large (Mw > 7.0) thrust event and no earthquakes that generated measurable tsunami wave heights. In the subsequent eight years, twelve large thrust earthquakes occurred, of which half generated measurable tsunamis. The number of broadband earthquakes (those events with Mw > 5.5 for which broadband teleseismic waveforms have sufficient signal to compute depths, focal mechanisms, moments and radiated energies) jumped sixfold after 2004. The progression of tsunami earthquakes, as well as the profuse increase in broadband activity, strongly suggests regional stress adjustments following the Sumatra 2004 megathrust earthquake. Broadband source parameters, published routinely in the Source Parameters (SOPAR) database of the USGS's NEIC (National Earthquake Information Center), have provided the most accurate depths and locations of big earthquakes since the implementation of modern digital seismographic networks. Moreover, radiated energy and seismic moment (also found in SOPAR) are related to apparent stress, which is a measure of fault maturity. In mapping apparent stress as a function of depth and focal mechanism, we find that about 12% of broadband thrust earthquakes in the subduction zone are unequivocally above or below the slab interface. Apparent stresses of upper-plate events are associated with failure on mature splay faults, some of which generated measurable tsunamis. One unconventional source for local wave heights was a large intraslab earthquake. High-energy upper-plate events, which are dominant in the Aceh Basin, are associated with immature faults, which may explain why the region was bypassed by significant rupture during the 2004 Sumatra earthquake. The majority of broadband earthquakes are non-randomly concentrated under the outer-arc high. They appear to delineate the periphery of the contiguous rupture zones of large earthquakes
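
    Apparent stress, the quantity mapped in this study, is defined as τa = μ ER / M0 (rigidity times the ratio of radiated energy to seismic moment); a minimal sketch with an assumed rigidity value:

      def apparent_stress(radiated_energy_j, seismic_moment_nm, rigidity_pa=3.0e10):
          """Apparent stress in MPa: tau_a = mu * E_R / M0 (the rigidity value is an assumption)."""
          return rigidity_pa * radiated_energy_j / seismic_moment_nm / 1.0e6

      # Example: E_R = 5e13 J and M0 = 1e19 N*m (roughly an Mw 6.6 event).
      print(f"apparent stress = {apparent_stress(5e13, 1e19):.2f} MPa")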

  7. Prototype operational earthquake prediction system

    USGS Publications Warehouse

    Spall, Henry

    1986-01-01

    An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

  8. Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience.

    PubMed

    Kung, Yi-Wen; Chen, Sue-Huei

    2012-09-01

    This study explored how individuals in Taiwan perceive the risk of earthquake and the relationship of past earthquake experience and gender to risk perception. Participants (n= 1,405), including earthquake survivors and those in the general population without prior direct earthquake exposure, were selected and interviewed through a computer-assisted telephone interviewing procedure using a random sampling and stratification method covering all 24 regions of Taiwan. A factor analysis of the interview data yielded a two-factor structure of risk perception in regard to earthquake. The first factor, "personal impact," encompassed perception of threat and fear related to earthquakes. The second factor, "controllability," encompassed a sense of efficacy of self-protection in regard to earthquakes. The findings indicated prior earthquake survivors and females reported higher scores on the personal impact factor than males and those with no prior direct earthquake experience, although there were no group differences on the controllability factor. The findings support that risk perception has multiple components, and suggest that past experience (survivor status) and gender (female) affect the perception of risk. Exploration of potential contributions of other demographic factors such as age, education, and marital status to personal impact, especially for females and survivors, is discussed. Future research on and intervention program with regard to risk perception are suggested accordingly. © 2012 Society for Risk Analysis.

  9. Recognition of strong earthquake-prone areas with a single learning class

    NASA Astrophysics Data System (ADS)

    Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.

    2017-05-01

    This article presents a new Barrier recognition algorithm with learning, designed for recognition of earthquake-prone areas. In comparison to the Crust (Kora) algorithm, used by the classical EPA approach, the Barrier algorithm proceeds with learning just on one "pure" high-seismic class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is used for recognition of earthquake-prone areas with M ≥ 6.0 in the Caucasus region. Comparative analysis of the Crust and Barrier algorithms justifies their productive coherence.

  10. Earthquakes in Alaska

    USGS Publications Warehouse

    Haeussler, Peter J.; Plafker, George

    1995-01-01

    Earthquake risk is high in much of the southern half of Alaska, but it is not the same everywhere. This map shows the overall geologic setting in Alaska that produces earthquakes. The Pacific plate (darker blue) is sliding northwestward past southeastern Alaska and then dives beneath the North American plate (light blue, green, and brown) in southern Alaska, the Alaska Peninsula, and the Aleutian Islands. Most earthquakes are produced where these two plates come into contact and slide past each other. Major earthquakes also occur throughout much of interior Alaska as a result of collision of a piece of crust with the southern margin.

  11. Finite Moment Tensors of Southern California Earthquakes

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Chen, P.; Zhao, L.

    2003-12-01

    We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest-order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in its exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. Errors in the anelastic structure introduced perturbations larger than the signal level caused by the finite-source effect. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored. The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables

  12. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the same level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study therefore aims to explore a possible modification of the classical G-R distribution by including the dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, when integrating over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and how it differs from the classical G-R law.
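
    For comparison, a minimal sketch of the classical G-R law against which the energy-dependent modification is proposed, using the Aki maximum-likelihood b-value estimator on a toy catalog (the magnitude-binning correction is omitted for brevity):

      import numpy as np

      # Classical Gutenberg-Richter law: log10 N(>=M) = a - b*M.
      rng = np.random.default_rng(4)
      Mc = 2.0                                         # completeness magnitude
      mags = Mc + rng.exponential(scale=1.0 / np.log(10), size=5000)  # b = 1 toy catalog

      # Aki (1965) maximum-likelihood b-value (binning correction omitted).
      b = np.log10(np.e) / (mags.mean() - Mc)
      print(f"estimated b-value: {b:.2f}")

      # Cumulative frequency-magnitude counts vs the fitted exponential.
      for m in (2.0, 3.0, 4.0):
          n_obs = (mags >= m).sum()
          n_fit = len(mags) * 10.0 ** (-b * (m - Mc))
          print(f"M>={m}: observed {n_obs}, G-R fit {n_fit:.0f}")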

  13. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology and a practical example of the application of an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten run times for full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  14. The effect of segmented fault zones on earthquake rupture propagation and termination

    NASA Astrophysics Data System (ADS)

    Huang, Y.

    2017-12-01

    A fundamental question in earthquake source physics is what can control the nucleation and termination of an earthquake rupture. Besides stress heterogeneities and variations in frictional properties, damaged fault zones (DFZs) that surround major strike-slip faults can contribute significantly to earthquake rupture propagation. Previous earthquake rupture simulations usually characterize DFZs as several-hundred-meter-wide layers with lower seismic velocities than the host rocks, and find that earthquake ruptures in DFZs can exhibit slip pulses and oscillating rupture speeds that ultimately enhance high-frequency ground motions. However, real DFZs are more complex than uniform low-velocity structures and show along-strike variations of damage that may be correlated with historical earthquake ruptures. These segmented structures can either prohibit or assist rupture propagation and significantly affect the final sizes of earthquakes. For example, recent dense array data recorded at the San Jacinto fault zone suggest the existence of three prominent DFZs across the Anza seismic gap and the south section of the Clark branch, while no prominent DFZs were identified near the ends of the Anza seismic gap. To better understand earthquake rupture in segmented fault zones, we will present dynamic rupture simulations that calculate the time-varying rupture process physically by considering the interactions between fault stresses, fault frictional properties, and material heterogeneities. We will show that whether an earthquake rupture can break through the intact rock outside the DFZ depends on the nucleation size of the earthquake and the rupture propagation distance in the DFZ. Moreover, material properties of the DFZ, stress conditions along the fault, and friction properties of the fault also have a critical impact on rupture propagation and termination. We will also present scenarios of San Jacinto earthquake ruptures and show the parameter space that is favorable for

  15. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps make it possible to account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from earthquake catalogues, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can

  16. Sensitivity analysis of the FEMA HAZUS-MH MR4 Earthquake Model using seismic events affecting King County Washington

    NASA Astrophysics Data System (ADS)

    Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.

    2010-12-01

    HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal fault intraslab events and scenario large-magnitude Seattle reverse fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships developed from source parameters of both regional and global historical earthquakes to estimate strong ground motion. Ground motion and resulting ground failure due to earthquakes are then used to calculate direct physical damage for general building stock, essential facilities, and lifelines, including transportation systems and utility systems. Earthquake losses are expressed in structural, economic and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region

  17. Source processes of strong earthquakes in the North Tien-Shan region

    NASA Astrophysics Data System (ADS)

    Kulikova, G.; Krueger, F.

    2013-12-01

    compared the amplitude ratios (between P, PP, S and SS) of the real data and the simulated seismograms. At first, the depth and the focal mechanism of the earthquakes were determined based on the amplitude ratios for a point source. Further, on the basis of the ISOLA software, we developed an application which calculates kinematic source parameters for historical earthquakes without restitution. Based on a sub-event approach, kinematic source parameters could be determined for a subset of the events. We present the results for five major instrumentally recorded earthquakes in the North Tien-Shan. The strongest one was the Chon-Kemin earthquake of 3 January 1911. Its relocated epicenter is 42.98°N and 77.33°E, 80 kilometers south of the catalog location. The depth is determined to be 28 km. The obtained focal mechanism shows strike, dip, and slip angles of 44°, 82°, and 56°, respectively. The moment magnitude is calculated to be Mw 8.1. The source time duration is 45 s, which gives a rupture length of about 120 km.

  18. Risk Factors of Acute Pancreatitis in Oral Double Balloon Enteroscopy.

    PubMed

    Kopáčová, Marcela; Bureš, Jan; Rejchrt, Stanislav; Vávrová, Jaroslava; Bártová, Jolana; Soukup, Tomáš; Tomš, Jan; Tachecí, Ilja

    Double balloon enteroscopy (DBE) was introduced 15 years ago. Complications of diagnostic DBE are rare, acute pancreatitis being the most redoubtable one (incidence about 0.3%). Hyperamylasemia after DBE, by contrast, seems to be a rather common condition. The most probable cause seems to be mechanical straining of the pancreas. We tried to identify patients at a higher risk of acute pancreatitis after DBE. We investigated several laboratory markers before and after DBE (serum cathepsin B, lactoferrin, E-selectin, SPINK 1, procalcitonin, S100 proteins, alfa-1-antitrypsin, hs-CRP, malondialdehyde, serum and urine amylase and serum lipase). Serum amylase and lipase rose significantly, with a maximum 4 hours after DBE. Serum cathepsin and procalcitonin decreased significantly 4 hours after DBE compared with healthy controls and with patients' values before DBE. Neither serum amylase nor lipase 4 hours after DBE correlated with any marker before DBE. There was a trend for an association between the number of push-and-pull cycles and procalcitonin and urine amylase 4 hours after DBE; between procalcitonin and alfa-1-antitrypsin, cathepsin and hs-CRP; and between E-selectin and malondialdehyde 4 hours after DBE. We found no laboratory marker able to identify in advance those patients at a higher risk of acute pancreatitis after DBE.

  19. Earthquakes, November-December 1973

    USGS Publications Warehouse

    Person, W.J.

    1974-01-01

    Other parts of the world suffered fatalities and significant damage from earthquakes. In Iran, an earthquake killed one person, injured many, and destroyed a number of homes. Earthquake fatalities also occurred in the Azores and in Algeria. 

  20. Atmosphere-Ionosphere Response to the M9 Tohoku Earthquake Revealed by Multi-Instrument Space-Borne and Ground Observations. Preliminary Results

    NASA Technical Reports Server (NTRS)

    Ouzounov, Dimitar; Pulinets, Sergey; Romanov, Alexey; Romanov, Alexander; Tsbulya, Konstantin; Davidenko, Dmitri; Kafatos, Menas; Taylor, Patrick

    2011-01-01

    We retrospectively analyzed the temporal and spatial variations of four different physical parameters characterizing the state of the atmosphere and ionosphere several days before the M9 Tohoku, Japan earthquake of March 11, 2011. Data include outgoing longwave radiation (OLR), GPS/TEC, low-Earth-orbit ionospheric tomography and the critical frequency foF2. Our first results show that on March 8th a rapid increase of emitted infrared radiation was observed in the satellite data and an anomaly developed near the epicenter. The GPS/TEC data indicate an increase and variation in electron density reaching a maximum value on March 8. Starting on this day, an abnormal TEC variation over the epicenter was also confirmed in the lower ionosphere. From March 3-11 a large increase in electron concentration was recorded at all four Japanese ground-based ionosondes, which returned to normal after the main earthquake. The joint preliminary analysis of atmospheric and ionospheric parameters during the M9 Tohoku, Japan earthquake has revealed the presence of related variations of these parameters, implying their connection with the earthquake process. This study may lead to a better understanding of the response of the atmosphere/ionosphere to the Great Tohoku earthquake.

  1. Comparative study of earthquake-related and non-earthquake-related head traumas using multidetector computed tomography

    PubMed Central

    Chu, Zhi-gang; Yang, Zhi-gang; Dong, Zhi-hui; Chen, Tian-wu; Zhu, Zhi-yu; Shao, Heng

    2011-01-01

    OBJECTIVE: The features of earthquake-related head injuries may differ from those of injuries sustained in daily life because of differences in circumstances. We aim to compare the features of head traumas caused by the Sichuan earthquake with those of other common head traumas using multidetector computed tomography. METHODS: In total, 221 patients with earthquake-related head traumas (the earthquake group) and 221 patients with other common head traumas (the non-earthquake group) were enrolled in our study, and their computed tomographic findings were compared. We focused on the differences between fractures and intracranial injuries and the relationships between extracranial and intracranial injuries. RESULTS: More earthquake-related cases had only extracranial soft tissue injuries (50.7% vs. 26.2%, RR = 1.9), and fewer cases had intracranial injuries (17.2% vs. 50.7%, RR = 0.3) compared with the non-earthquake group. For patients with fractures and intracranial injuries, there were fewer cases with craniocerebral injuries in the earthquake group (60.6% vs. 77.9%, RR = 0.8), and the earthquake-injured patients had fewer fractures and intracranial injuries overall (1.5±0.9 vs. 2.5±1.8; 1.3±0.5 vs. 2.1±1.1). Compared with the non-earthquake group, the incidences of soft tissue injuries and cranial fractures combined with intracranial injuries in the earthquake group were significantly lower (9.8% vs. 43.7%, RR = 0.2; 35.1% vs. 82.2%, RR = 0.4). CONCLUSION: As depicted with computed tomography, the severity of earthquake-related head traumas in survivors was milder, and isolated extracranial injuries were more common in earthquake-related head traumas than in non-earthquake-related injuries, which may be the result of different injury causes, mechanisms and settings. PMID:22012045

  2. Investigation on the Possible Relationship between Magnetic Pulsations and Earthquakes

    NASA Astrophysics Data System (ADS)

    Jusoh, M.; Liu, H.; Yumoto, K.; Uozumi, T.; Takla, E. M.; Yousif Suliman, M. E.; Kawano, H.; Yoshikawa, A.; Asillam, M.; Hashim, M.

    2012-12-01

    The Sun is the main source of energy for the solar system, and it plays a major role in affecting the ionosphere, the atmosphere and the Earth's surface. The connection between the solar wind and ground magnetic pulsations has been demonstrated empirically by several researchers (H. J. Singer et al., 1977; E. W. Greenstadt, 1979; I. A. Ansari, 2006, to name a few). In our preliminary statistical analysis of the relationship between solar and seismic activities (Jusoh and Yumoto, 2011; Jusoh et al., 2012), we observed a high likelihood of solar-terrestrial coupling. We observed a high tendency for earthquakes to occur during the lower phase of solar cycles, significantly related to solar wind parameters (i.e. solar wind dynamic pressure, speed and input energy). However, a clear coupling mechanism has not yet been established. To connect the solar impact to seismicity, we investigate ground magnetic pulsations as a possible connecting agent. In our analysis, the recorded ground magnetic pulsations are analyzed in different ranges of ultra-low frequency, Pc3 (22-100 mHz), Pc4 (6.7-22 mHz) and Pc5 (1.7-6.7 mHz), together with the occurrence of local earthquake events in certain time periods. The analysis focuses on two major seismic regions: northern Japan (mid latitude) and northern Sumatra, Indonesia (low latitude). Solar wind parameters were obtained from the Goddard Space Flight Center, NASA, via the OMNIWeb Data Explorer and the Space Physics Data Facility. Earthquake events were extracted from the Advanced National Seismic System (ANSS) database. The localized Pc3-Pc5 magnetic pulsation data were extracted from Magnetic Data Acquisition System (MAGDAS)/Circum-pan Pacific Magnetometer Network (CPMN) stations at Ashibetsu (Japan), for earthquakes monitored in northern Japan, and Langkawi (Malaysia), for earthquakes observed in northern Sumatra. These magnetometer arrays were established by the International Center for Space Weather Science and Education, Kyushu University, Japan. From the

  3. Modeling of a historical earthquake in Erzincan, Turkey (Ms 7.8, in 1939) using regional seismological information obtained from a recent event

    NASA Astrophysics Data System (ADS)

    Karimzadeh, Shaghayegh; Askan, Aysegul

    2018-04-01

    Located within a basin structure at the junction of the Northeast Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic events, including the 1939 (Ms 7.8) and the 1992 (Mw 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available; thus, only a limited number of studies exist on that earthquake. The 1992 event, however, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite the several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, the spatial distribution of selected ground motion parameters and felt intensity maps of the region. Simulated motions are first compared against empirical ground motion prediction equations derived with both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate a reasonable modeling of the 1939 earthquake.

  4. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
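
    The following is a minimal numerical sketch of the comparison described above, assuming a lognormal recurrence distribution and assuming that the unknown elapsed time since the last event follows the stationary (equilibrium) distribution of the renewal process; the recurrence parameters are illustrative and the formulation is not necessarily identical to the one used in the paper.

    ```python
    # Sketch: renewal-model probability of an event in a forecast window when the
    # date of the last event is unknown, versus the Poisson approximation.
    import numpy as np
    from scipy import stats, integrate

    mean_recurrence = 200.0   # years (illustrative)
    aperiodicity = 0.5        # coefficient of variation (illustrative)
    forecast_years = 50.0     # ~25% of the mean recurrence interval

    # Lognormal parameterised to the chosen mean and coefficient of variation
    sigma = np.sqrt(np.log(1.0 + aperiodicity**2))
    mu = np.log(mean_recurrence) - 0.5 * sigma**2
    recur = stats.lognorm(s=sigma, scale=np.exp(mu))

    def prob_unknown_last_event(dt):
        """P(event within dt), marginalised over the unknown elapsed time t,
        with t drawn from the stationary density (1 - F(t)) / mean_recurrence."""
        integrand = lambda t: (recur.cdf(t + dt) - recur.cdf(t)) / mean_recurrence
        p, _ = integrate.quad(integrand, 0.0, 20.0 * mean_recurrence, limit=200)
        return p

    p_renewal = prob_unknown_last_event(forecast_years)
    p_poisson = 1.0 - np.exp(-forecast_years / mean_recurrence)
    print(f"renewal (unknown last event): {p_renewal:.3f}")
    print(f"Poisson approximation:        {p_poisson:.3f}")
    ```

    With these illustrative values the renewal probability comes out roughly 10-15% higher than the Poisson value, which is consistent with the behaviour the abstract reports for forecast durations of this order.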

  5. Earthquakes, May-June 1991

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    In the United States, a magnitude 5.8 earthquake in southern California on June 28 killed two people and caused considerable damage. Strong earthquakes hit Alaska on May 1 and May 30; the May 1 earthquake caused some minor damage. 

  6. Space-borne Observations of Atmospheric Pre-Earthquake Signals in Seismically Active Areas: Case Study for Greece 2008-2009

    NASA Technical Reports Server (NTRS)

    Ouzounov, D. P.; Pulinets, S. A.; Davidenko, D. A.; Kafatos, M.; Taylor, P. T.

    2013-01-01

    We are conducting theoretical studies and practical validation of atmosphere/ionosphere phenomena preceding major earthquakes. Our approach is based on monitoring two physical parameters from space: outgoing long-wavelength radiation (OLR) at the top of the atmosphere, and electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). We retrospectively analyzed the temporal and spatial variations of the OLR and GPS/TEC parameters characterizing the state of the atmosphere and ionosphere several days before four major earthquakes (M>6) in Greece in 2008-2009: M6.9 of 02.12.08, M6.2 of 02.20.08, M6.4 of 06.08.08 and M6.4 of 07.01.09. We found anomalous behavior before all of these events (over land and sea) over the regions of maximum stress. We expect that our analysis will reveal the underlying physics of pre-earthquake signals associated with some of the largest earthquakes in Greece.

  7. Nowcasting Earthquakes: A Comparison of Induced Earthquakes in Oklahoma and at the Geysers, California

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.

    2018-01-01

    Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say Mλ ≥ 4, and one small, say Mσ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes, and the cumulative probability distribution of these interevent counts is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake: the point where this count falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of Mσ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of Mλ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injection. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
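
    A minimal sketch of the earthquake potential score described above is given below. The magnitude thresholds follow the abstract, but the synthetic Gutenberg-Richter catalog is purely illustrative and is not the Oklahoma or Geysers catalog.

    ```python
    # Sketch of the nowcasting EPS: count small events between successive large
    # events, build the empirical cumulative distribution of those interevent
    # counts, and locate the current count on that distribution.
    import numpy as np

    def earthquake_potential_score(magnitudes, m_small=2.75, m_large=4.0):
        """magnitudes: catalog magnitudes in chronological order."""
        counts, n_small = [], 0
        for m in magnitudes:
            if m >= m_large:
                counts.append(n_small)   # interevent count between large events
                n_small = 0
            elif m >= m_small:
                n_small += 1
        counts = np.array(counts)
        current = n_small                # small events since the last large event
        eps = np.mean(counts <= current) if counts.size else np.nan
        return eps, current, counts

    rng = np.random.default_rng(0)
    # Synthetic Gutenberg-Richter catalog (b = 1), for demonstration only
    mags = 2.0 + rng.exponential(scale=1.0 / np.log(10), size=20000)
    eps, current, counts = earthquake_potential_score(mags)
    print(f"{counts.size} interevent intervals, current count = {current}, EPS = {eps:.2f}")
    ```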

  8. International Collaboration for Strengthening Capacity to Assess Earthquake Hazard in Indonesia

    NASA Astrophysics Data System (ADS)

    Cummins, P. R.; Hidayati, S.; Suhardjono, S.; Meilano, I.; Natawidjaja, D.

    2012-12-01

    Indonesia has experienced a dramatic increase in earthquake risk due to rapid population growth in the 20th century, much of it occurring in areas near the subduction zone plate boundaries that are prone to earthquake occurrence. While recent seismic hazard assessments have resulted in better building codes that can inform safer building practices, many of the fundamental parameters controlling earthquake occurrence and ground shaking - e.g., fault slip rates, earthquake scaling relations, ground motion prediction equations, and site response - could still be better constrained. In recognition of the need to improve the level of information on which seismic hazard assessments are based, the Australian Agency for International Development (AusAID) and Indonesia's National Agency for Disaster Management (BNPB), through the Australia-Indonesia Facility for Disaster Reduction, have initiated a 4-year project designed to strengthen the Government of Indonesia's capacity to reliably assess earthquake hazard. This project is a collaboration of Australian institutions, including Geoscience Australia and the Australian National University, with Indonesian government agencies and universities, including the Agency for Meteorology, Climatology and Geophysics, the Geological Agency, the Indonesian Institute of Sciences, and the Bandung Institute of Technology. Effective earthquake hazard assessment requires input from many different types of research, ranging from geological studies of active faults and seismological studies of crustal structure, earthquake sources and ground motion, to PSHA methodology and geodetic studies of crustal strain rates. The project is a large and diverse one that spans all of these components, and these will be briefly reviewed in this presentation.

  9. The 1906 earthquake and a century of progress in understanding earthquakes and their hazards

    USGS Publications Warehouse

    Zoback, M.L.

    2006-01-01

    The 18 April 1906 San Francisco earthquake killed nearly 3000 people and left 225,000 residents homeless. Three days after the earthquake, an eight-person Earthquake Investigation Commission, supported by some 25 geologists, seismologists, geodesists, biologists and engineers, as well as some 300 others, started work under the supervision of Andrew Lawson to collect and document physical phenomena related to the quake. On 31 May 1906, the commission published a preliminary 17-page report titled "The Report of the State Earthquake Investigation Commission". The report included the bulk of the geological and morphological descriptions of the faulting, detailed reports on shaking intensity, as well as an impressive atlas of 40 oversized maps and folios. Nearly 100 years after its publication, the Commission Report remains a model for post-earthquake investigations. Because the diverse data sets were so complete and carefully documented, researchers continue to apply modern analysis techniques to learn from the 1906 earthquake. While the earthquake marked a seminal event in the history of California, it also served as the impetus for the birth of modern earthquake science in the United States.

  10. Earthquakes; January-February, 1979

    USGS Publications Warehouse

    Person, W.J.

    1979-01-01

    The first major earthquake (magnitude 7.0 to 7.9) of the year struck in southeastern Alaska in a sparsely populated area on February 28. On January 16, Iran experienced the first destructive earthquake of the year, causing a number of casualties and considerable damage. Peru was hit by a destructive earthquake on February 16 that left casualties and damage. A number of earthquakes were experienced in parts of the United States, but only minor damage was reported.

  11. Preliminary results on earthquake triggered landslides for the Haiti earthquake (January 2010)

    NASA Astrophysics Data System (ADS)

    van Westen, Cees; Gorum, Tolga

    2010-05-01

    This study presents the first results of an analysis of the landslides triggered by the Ms 7.0 Haiti earthquake that occurred on January 12, 2010 in the boundary region of the Caribbean and North American plates. The fault is a left-lateral strike-slip fault with a clear surface expression. According to the USGS earthquake information, the Enriquillo-Plantain Garden fault system has not produced any major earthquake in the last 100 years; historical earthquakes are known from 1860, 1770, 1761, 1751, 1684, 1673, and 1618, though none of these has been confirmed in the field as associated with this fault. We used high-resolution satellite imagery available for the pre- and post-earthquake situations, which was made freely available for the response and rescue operations. We made an interpretation of all co-seismic landslides in the epicentral area. We conclude that the earthquake mainly triggered landslides on the northern slope of the fault-related valley and in a number of isolated areas. The earthquake apparently did not trigger many visible landslides within the slum areas on the slopes in the southern part of Port-au-Prince and Carrefour. We also used ASTER DEM information to relate the landslide occurrences to DEM derivatives.

  12. Relationship Between Earthquake b-Values and Crustal Stresses in a Young Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Sean Kuanhsiang; Huang, Ting-Chung; Huang, Hsin-Hua; Chao, Wei-An; Koulakov, Ivan

    2018-02-01

    It has been reported that earthquake b-values decrease linearly with the differential stresses in the continental crust and subduction zones. Here we report a regression-derived relation between earthquake b-values and crustal stresses using the Anderson fault parameter (Aϕ) in a young orogenic belt of Taiwan. This regression relation is well established by using a large and complete earthquake catalog for Taiwan. The data set consists of b-values and Aϕ values derived from relocated earthquakes and focal mechanisms, respectively. Our results show that b-values decrease linearly with the Aϕ values at crustal depths with a high correlation coefficient of -0.9. Thus, b-values could be used as stress indicators for orogenic belts. However, the state of stress is relatively well correlated with the surface geological setting with respect to earthquake b-values in Taiwan. Temporal variations in the b-value could constitute one of the main reasons for the spatial heterogeneity of b-values. We therefore suggest that b-values could be highly sensitive to temporal stress variations.
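
    A minimal sketch of the two building blocks of such an analysis is given below: a maximum-likelihood b-value estimate per spatial bin and a linear regression of b against the Anderson fault parameter Aϕ. The Aki estimator is a standard choice but is an assumption here, and the per-bin synthetic catalogs, completeness magnitude and Aϕ values are placeholders rather than the Taiwan dataset.

    ```python
    # Sketch: Aki maximum-likelihood b-values per bin, regressed against A_phi.
    import numpy as np

    def b_value_aki(mags, m_c, dm=0.1):
        """Aki-Utsu maximum-likelihood b-value for events with M >= m_c,
        with dm the magnitude binning width."""
        m = np.asarray(mags)
        m = m[m >= m_c]
        return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

    rng = np.random.default_rng(0)
    a_phi = np.linspace(0.3, 2.7, 10)            # Anderson fault parameter per bin
    true_b = 1.2 - 0.15 * a_phi                  # synthetic decreasing trend
    bins_mags = [np.round(2.0 + rng.exponential(1.0 / (b * np.log(10)), 2000), 1)
                 for b in true_b]                # synthetic GR catalogs, binned to 0.1

    b_vals = np.array([b_value_aki(m, m_c=2.0) for m in bins_mags])
    slope, intercept = np.polyfit(a_phi, b_vals, 1)
    r = np.corrcoef(a_phi, b_vals)[0, 1]
    print(f"b = {intercept:.2f} + {slope:.2f} * A_phi  (correlation r = {r:.2f})")
    ```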

  13. Turkish Compulsory Earthquake Insurance (TCIP)

    NASA Astrophysics Data System (ADS)

    Erdik, M.; Durukal, E.; Sesetyan, K.

    2009-04-01

    Through a World Bank project, the government-sponsored Turkish Catastrophic Insurance Pool (TCIP) was created in 2000 with the essential aim of transferring the government's financial burden of replacing earthquake-damaged housing to international reinsurance and capital markets. Providing coverage to about 2.9 million homeowners, TCIP is the largest insurance program in the country, with about 0.5 billion USD in its own reserves and about 2.3 billion USD in total claims-paying capacity. The total payment for earthquake damage since 2000 (226 mostly small earthquakes) amounts to about 13 million USD. The country-wide penetration rate is about 22%, highest in the Marmara region (30%) and lowest in southeastern Turkey (9%). TCIP is the sole-source provider of earthquake loss coverage up to 90,000 USD per house. The annual premium, categorized on the basis of earthquake zone and type of structure, is about US$90 for a 100 square meter reinforced concrete building in the most hazardous zone, with a 2% deductible. The earthquake engineering related shortcomings of the TCIP are exemplified by the fact that the average rate of 0.13% (for reinforced concrete buildings) with only a 2% deductible is rather low compared to countries with similar earthquake exposure. From an earthquake engineering point of view, the risk underwriting of the TCIP (typification of the housing units to be insured, earthquake intensity zonation and the sum insured) needs to be overhauled. Especially for large cities, models can be developed in which the expected earthquake performance of a housing unit (and consequently its insurance premium) can be assessed on the basis of its location (microzoned earthquake hazard) and basic structural attributes (earthquake vulnerability relationships). With such an approach, the TCIP can in the future contribute to the control of construction through differentiation of premia on the basis of earthquake vulnerability.

  14. The ``exceptional'' earthquake of 3 January 1117 in the Verona area (northern Italy): A critical time review and detection of two lost earthquakes (lower Germany and Tuscany)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Comastri, Alberto; Boschi, Enzo

    2005-12-01

    In the seismological literature the 3 January 1117 earthquake represents an interesting case study, both for the sheer size of the area in which that event is recorded by the monastic sources of the 12th century, and for the amount of damage mentioned. The 1117 event has been added to the earthquake catalogues of up to five European countries (Italy, France, Belgium, Switzerland, the Iberian peninsula), and it is the largest historical earthquake for northern Italy. We have analyzed the monastic time system in the 12th century and, by means of a comparative analysis of the sources, have correlated the two shocks mentioned (in the night and in the afternoon of 3 January) to territorial effects, seeking to make the overall picture reported for Europe more consistent. The connection between the linguistic indications and the localization of the effects has allowed us to shed light, with a reasonable degree of approximation, upon two previously little known earthquakes, probably generated by a sequence of events. A first earthquake in lower Germany (I0 (epicentral intensity) VII-VIII MCS (Mercalli, Cancani, Sieberg), M 6.4) preceded the far more violent one in northern Italy (Verona area) by about 12-13 hours. The second event is the one reported in the literature. We have put forward new parameters for this Veronese earthquake (I0 IX MCS, M 7.0). A third earthquake is independently recorded in the northwestern area of Tuscany (Imax VII-VIII MCS), but for the latter event the epicenter and magnitude cannot be evaluated.

  15. Sun, Moon and Earthquakes

    NASA Astrophysics Data System (ADS)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes for small areas [typically 1000 km x 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: The left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.980, Mb 9.0, EQ count 376). The right-hand figure provides an earthquake plot of (EMD+SEM) vs GMT timings for the same data. All 376 events, including the main event, faithfully follow the straight-line curve.

  16. Are Earthquakes Predictable? A Study on Magnitude Correlations in Earthquake Catalog and Experimental Data

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Ross, G.; Sammonds, P. R.

    2015-12-01

    The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic-Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results on earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, as fracturing generates both AE at the laboratory scale and earthquakes at the crustal scale. Constant-stress and constant-strain-rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to the experimental data allowed us to validate our results and provide, for the first time, a holistic view of the correlation of earthquake magnitudes. Additionally, we investigate the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any trends of dependency between the magnitudes of aftershock earthquakes and the earthquakes that trigger them.

  17. Putting down roots in earthquake country-Your handbook for earthquakes in the Central United States

    USGS Publications Warehouse

    Contributors: Dart, Richard; McCarthy, Jill; McCallister, Natasha; Williams, Robert A.

    2011-01-01

    This handbook provides information to residents of the Central United States about the threat of earthquakes in that area, particularly along the New Madrid seismic zone, and explains how to prepare for, survive, and recover from such events. It explains the need for concern about earthquakes for those residents and describes what one can expect during and after an earthquake. Much is known about the threat of earthquakes in the Central United States, including where they are likely to occur and what can be done to reduce losses from future earthquakes, but not enough has been done to prepare for future earthquakes. The handbook describes such preparations that can be taken by individual residents before an earthquake to be safe and protect property.

  18. Important Earthquake Engineering Resources

    Science.gov Websites

    Directory page of the Pacific Earthquake Engineering Research Center (PEER) listing earthquake engineering resources, including the American Concrete Institute, the Consortium of Organizations for Strong-Motion Observation Systems (COSMOS) and the Consortium of Universities for Research in Earthquake Engineering.

  19. Repeating Earthquakes Following an Mw 4.4 Earthquake Near Luther, Oklahoma

    NASA Astrophysics Data System (ADS)

    Clements, T.; Keranen, K. M.; Savage, H. M.

    2015-12-01

    An Mw 4.4 earthquake on April 16, 2013 near Luther, OK was one of the earliest M4+ earthquakes in central Oklahoma following the Prague sequence in 2011. A network of four local broadband seismometers, deployed within a day of the Mw 4.4 event, along with six Oklahoma NetQuakes stations, recorded more than 500 aftershocks in the two weeks following the Luther earthquake. Here we use HypoDD (Waldhauser & Ellsworth, 2000) and waveform cross-correlation to obtain precise aftershock locations. The location uncertainty, calculated using the SVD method in HypoDD, is ~15 m horizontally and ~35 m vertically. The earthquakes define a near-vertical, NE-SW-striking fault plane. Events occur at depths from 2 km to 3.5 km within the granitic basement, with a small fraction of events shallower, near the sediment-basement interface. Earthquakes occur within a zone of ~200 meters thickness on either side of the best-fitting fault surface. We use an equivalency-class algorithm to identify clusters of repeating events, defined as event pairs with a median three-component correlation > 0.97 across common stations (Aster & Scott, 1993). Repeating events occur as doublets of only two events in over 50% of cases; overall, 41% of the earthquakes recorded occur as repeating events. The recurrence intervals for the repeating events range from minutes to days, with common recurrence intervals of less than two minutes. While clusters occur within tight dimensions, commonly of 80 m x 200 m, aftershocks occur in three distinct ~2 km x 2 km patches along the fault. Our analysis suggests that, with rapidly deployed local arrays, the plethora of ~Mw 4 earthquakes occurring in Oklahoma and southern Kansas can be used to investigate the earthquake rupture process and the role of damage zones.
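
    A minimal sketch of grouping repeating events into "equivalency classes" is shown below: pairs whose median three-component correlation exceeds 0.97 are linked, and connected components of the resulting graph form the clusters. The correlation matrix here is a toy placeholder; in practice it would come from waveform cross-correlation across common stations.

    ```python
    # Sketch: connected-component (union-find) clustering of repeating events.
    import numpy as np

    def repeating_clusters(median_cc, threshold=0.97):
        """median_cc: symmetric (n_events, n_events) matrix of median
        three-component cross-correlation coefficients."""
        n = median_cc.shape[0]
        parent = list(range(n))

        def find(i):                      # union-find with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        for i in range(n):
            for j in range(i + 1, n):
                if median_cc[i, j] > threshold:
                    union(i, j)

        clusters = {}
        for i in range(n):
            clusters.setdefault(find(i), []).append(i)
        # keep only groups of at least two events (doublets or larger)
        return [c for c in clusters.values() if len(c) >= 2]

    # Tiny illustrative matrix: events 0-1 and 2-3-4 form repeating groups
    cc = np.eye(6)
    for i, j in [(0, 1), (2, 3), (3, 4)]:
        cc[i, j] = cc[j, i] = 0.98
    print(repeating_clusters(cc))   # -> [[0, 1], [2, 3, 4]]
    ```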

  20. Earthquake Hazards.

    ERIC Educational Resources Information Center

    Donovan, Neville

    1979-01-01

    Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes. (BT)

  1. Atmospheric Baseline Monitoring Data Losses Due to the Samoa Earthquake

    NASA Astrophysics Data System (ADS)

    Schnell, R. C.; Cunningham, M. C.; Vasel, B. A.; Butler, J. H.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA) operates an Atmospheric Baseline Observatory at Cape Matatula on the northeastern point of American Samoa, opened in 1973. The manned observatory conducts continuous measurements of a wide range of climate forcing and atmospheric composition data, including greenhouse gas concentrations, solar radiation, CFC and HFC concentrations, aerosols and ozone, as well as less frequent measurements of many other parameters. The onset of the September 29, 2009 earthquake is clearly visible in the continuous data streams in a variety of ways. The station's electrical generator came online when the Samoa power grid failed, so instruments were powered during and after the earthquake. Some instruments ceased operation in a spurt of spurious data followed by silence. Other instruments simply stopped sending data abruptly when the shaking from the earthquake broke a data or power link, or an integral part of the instrument was damaged. Others survived the shaking but were put out of calibration. Still others suffered damage after the earthquake as heaters ran uncontrolled, or as rotating shafts continued operating in a damaged environment, grinding away until they seized up or chewed out a new operating space. Some instruments operated as if there had been no earthquake; others were brought back online within a few days. Many of the more complex (and, in most cases, most expensive) instruments will be out of service, some for at least 6 months or more. This presentation will show these results and discuss the impact of the earthquake on long-term measurements of climate forcing agents and other critical climate measurements.

  2. Earthquakes, May-June 1981

    USGS Publications Warehouse

    Person, W.J.

    1981-01-01

    The months of May and June were somewhat quiet, seismically speaking. There was one major earthquake (7.0-7.9) off the west coast of South Island, New Zealand. The most destructive earthquake during this reporting period was in southern Iran on June 11 which caused fatalities and extensive damage. Peru also experienced a destructive earthquake on June 22 which caused fatalities and damage. In the United States, a number of earthquakes were experienced, but none caused significant damage. 

  3. ViscoSim Earthquake Simulator

    USGS Publications Warehouse

    Pollitz, Fred

    2012-01-01

    Synthetic seismicity simulations have been explored by the Southern California Earthquake Center (SCEC) Earthquake Simulators Group in order to guide long-term forecasting efforts related to the Uniform California Earthquake Rupture Forecast (Tullis et al., 2012a). In this study I describe the viscoelastic earthquake simulator (ViscoSim) of Pollitz (2009). Recapitulating to a large extent material previously presented by Pollitz (2009, 2011), I describe its implementation of synthetic ruptures and how it differs from other simulators being used by the group.

  4. A Bayesian Analysis of the Post-seismic Deformation of the Great 11 March 2011 Tohoku-Oki (Mw 9.0) Earthquake: Implications for Future Earthquake Occurrence

    NASA Astrophysics Data System (ADS)

    Ortega Culaciati, F. H.; Simons, M.; Minson, S. E.; Owen, S. E.; Moore, A. W.; Hetland, E. A.

    2011-12-01

    We aim to quantify the spatial distribution of after-slip following the Great 11 March 2011 Tohoku-Oki (Mw 9.0) earthquake and its implications for the occurrence of a future great earthquake, particularly in the Ibaraki region of Japan. We use a Bayesian approach (the CATMIP algorithm), constrained by on-land GEONET GPS time series, to infer models of after-slip to date on the Japan megathrust. Unlike traditional inverse methods, in which a single optimum model is found, the Bayesian approach allows a complete characterization of the model parameter space by sampling a posteriori estimates of the range of plausible models. We use the Kullback-Leibler information divergence as a metric of the information gain on each subsurface slip patch, to quantify the extent to which land-based geodetic observations can constrain the upper parts of the megathrust, where the Great Tohoku-Oki earthquake took place. We aim to understand the relationships between the spatial distributions of fault slip behavior in the different stages of the seismic cycle. We compare our post-seismic slip distributions to inter- and co-seismic slip distributions obtained through a Bayesian methodology as well as through traditional (optimization) inverse estimates in the published literature. We discuss the implications of these analyses for the occurrence of a large earthquake in the regions of the Japan megathrust adjacent to the Great Tohoku-Oki earthquake.
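
    The sketch below illustrates the Kullback-Leibler information-gain metric mentioned above for a single slip patch, under a strong simplification: one-dimensional Gaussian approximations of the prior and of the posterior samples. The patch labels, prior width and sample values are hypothetical, and the actual CATMIP workflow operates on full posterior sample sets rather than Gaussian fits.

    ```python
    # Sketch: KL(posterior || prior) as an information-gain metric per slip patch,
    # with both distributions approximated as 1-D Gaussians (nats).
    import numpy as np

    def kl_gain_gaussian(posterior_samples, prior_mean, prior_std):
        mu1, s1 = np.mean(posterior_samples), np.std(posterior_samples)
        mu0, s0 = prior_mean, prior_std
        return np.log(s0 / s1) + (s1**2 + (mu1 - mu0)**2) / (2.0 * s0**2) - 0.5

    rng = np.random.default_rng(1)
    # Hypothetical after-slip samples (m) for two patches
    shallow_patch = rng.normal(0.0, 2.9, 5000)   # barely constrained by land GPS
    deep_patch = rng.normal(1.2, 0.4, 5000)      # well constrained by land GPS
    for name, samples in [("shallow", shallow_patch), ("deep", deep_patch)]:
        gain = kl_gain_gaussian(samples, prior_mean=0.0, prior_std=3.0)
        print(f"{name} patch information gain: {gain:.2f} nats")
    ```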

  5. Earthquakes, March-April, 1993

    USGS Publications Warehouse

    Person, Waverly J.

    1993-01-01

    Worldwide, only one major earthquake (7.0-7.9) occurred during this reporting period. The earthquake, a magnitude 7.2 shock, struck the Santa Cruz Islands region in the South Pacific on March 6. Earthquake-related deaths occurred in the Fiji Islands, China, and Peru.

  6. 2016 National Earthquake Conference

    Science.gov Websites

    Web page for the 2016 National Earthquake Conference (NEC), presented under the theme "What's New? What's Next? What's Your Role in Building a National Strategy?" with the California Earthquake Authority as presenting sponsor; the conference brings together state government leaders, social science practitioners, and U.S. State and Territorial Earthquake Managers.

  7. 2016 one-year seismic hazard forecast for the Central and Eastern United States from induced and natural earthquakes

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2016-03-28

    The U.S. Geological Survey (USGS) has produced a 1-year seismic hazard forecast for 2016 for the Central and Eastern United States (CEUS) that includes contributions from both induced and natural earthquakes. The model assumes that earthquake rates calculated from several different time windows will remain relatively stationary and can be used to forecast earthquake hazard and damage intensity for the year 2016. This assessment is the first step in developing an operational earthquake forecast for the CEUS, and the analysis could be revised with updated seismicity and model parameters. Consensus input models consider alternative earthquake catalog durations, smoothing parameters, maximum magnitudes, and ground motion estimates, and represent uncertainties in earthquake occurrence and diversity of opinion in the science community. Ground shaking seismic hazard for 1-percent probability of exceedance in 1 year reaches 0.6 g (as a fraction of standard gravity [g]) in northern Oklahoma and southern Kansas, and about 0.2 g in the Raton Basin of Colorado and New Mexico, in central Arkansas, and in north-central Texas near Dallas. Near some areas of active induced earthquakes, hazard is higher than in the 2014 USGS National Seismic Hazard Model (NSHM) by more than a factor of 3; the 2014 NSHM did not consider induced earthquakes. In some areas, previously observed induced earthquakes have stopped, so the seismic hazard reverts back to the 2014 NSHM. Increased seismic activity, whether defined as induced or natural, produces high hazard. Conversion of ground shaking to seismic intensity indicates that some places in Oklahoma, Kansas, Colorado, New Mexico, Texas, and Arkansas may experience damage if the induced seismicity continues unabated. The chance of having Modified Mercalli Intensity (MMI) VI or greater (damaging earthquake shaking) is 5-12 percent per year in north-central Oklahoma and southern Kansas, similar to the chance of damage caused by natural earthquakes

  8. The music of earthquakes and Earthquake Quartet #1

    USGS Publications Warehouse

    Michael, Andrew J.

    2013-01-01

    Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.

  9. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw 8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 of the 53 analysed stations. Two of these stations are located on the northeast coast of Madagascar and the other five are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity at stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering; however, it also exists near the East African Rift System structures, where no triggering was observed. Our results suggest that factors other than solely tectonic regime and geothermalism are needed to explain the mechanisms that underlie earthquake triggering.
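
    The β-statistic mentioned above compares the number of events observed in a test window with the number expected from the background rate. A minimal sketch of one common formulation (event counts treated as binomial) is given below; the window lengths, counts and the |β| > 2 significance convention are illustrative assumptions, not necessarily the paper's exact implementation.

    ```python
    # Sketch: beta-statistic for a rate change during the surface-wave window.
    import numpy as np

    def beta_statistic(n_after, n_total, t_after, t_total):
        """n_after events in the test window of length t_after, out of n_total
        events over the whole observation period of length t_total."""
        p = t_after / t_total                    # expected fraction of events
        expected = n_total * p
        return (n_after - expected) / np.sqrt(n_total * p * (1.0 - p))

    # Example: 9 micro-earthquakes in the 2 hours after the surface-wave arrival,
    # against 40 events detected over a 30-day background window.
    beta = beta_statistic(9, 40, t_after=2.0, t_total=30 * 24.0)
    print(f"beta = {beta:.1f}  (values above ~2 are often taken as significant)")
    ```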

  10. Historical earthquake research in Austria

    NASA Astrophysics Data System (ADS)

    Hammerl, Christa

    2017-12-01

    Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I0 > 8° EMS) occurs significantly less frequently; the average period of recurrence is about 75 years. For this reason historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently carried out case study, one of the strongest past earthquakes in Austria, the earthquake of 17 July 1670, is presented. The research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  11. Earthquake source properties of a shallow induced seismic sequence in SE Brazil

    NASA Astrophysics Data System (ADS)

    Agurto-Detzel, Hans; Bianchi, Marcelo; Prieto, Germán. A.; Assumpção, Marcelo

    2017-04-01

    We study source parameters of a cluster of 21 very shallow (<1 km depth), small-magnitude (Mw < 2) earthquakes induced by gravity-driven percolation of water in SE Brazil. Using a multiple empirical Green's functions (meGf) approach, we estimate seismic moments, corner frequencies, and static stress drops of these events by inversion of their spectral ratios. For the studied magnitude range (-0.3 < Mw < 1.9), we find an increase of stress drop with seismic moment. We assess the associated uncertainties by considering different signal time windows and by performing a jackknife resampling of the spectral ratios. We also calculate seismic moments by full waveform inversion to independently validate the moments obtained from spectral analysis. We propose repeated rupture of a fault patch at shallow depth, following continuous inflow of water, as the cause of the observed low absolute stress drop values (<1 MPa) and of the earthquake size dependency. To our knowledge, no other study on earthquake source properties of shallow events induced by water injection with no added pressure is available in the literature. Our study suggests that source parameter characterization may provide additional information on induced seismicity caused by hydraulic stimulation.
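
    For readers unfamiliar with the quantities involved, the sketch below shows the standard circular-crack relations commonly used to turn a corner frequency into a source radius and static stress drop. The Brune constant k = 0.37, the shear-wave speed and the example magnitude and corner frequency are illustrative assumptions; the study itself derives these quantities from spectral ratios with multiple empirical Green's functions.

    ```python
    # Sketch: static stress drop from seismic moment and corner frequency.
    import numpy as np

    def static_stress_drop(m0, fc, beta=3000.0, k=0.37):
        """m0: seismic moment (N m); fc: corner frequency (Hz);
        beta: near-source shear-wave speed (m/s). Returns (radius m, drop Pa)."""
        radius = k * beta / fc                    # circular source radius
        stress_drop = 7.0 * m0 / (16.0 * radius**3)
        return radius, stress_drop

    def moment_from_mw(mw):
        return 10 ** (1.5 * mw + 9.1)             # N m (Hanks & Kanamori)

    m0 = moment_from_mw(1.5)                      # a small induced event
    r, dsigma = static_stress_drop(m0, fc=15.0)
    print(f"M0 = {m0:.2e} N m, radius = {r:.0f} m, stress drop = {dsigma/1e6:.3f} MPa")
    ```

    With these illustrative numbers the stress drop comes out well below 1 MPa, of the same order as the low values reported in the abstract.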

  12. Spatiotemporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach

    NASA Astrophysics Data System (ADS)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    2017-07-01

    Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. A detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is in stark contrast to California (also known for induced seismicity), where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity, but provide insights for future work.
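
    A minimal sketch of a space-time-magnitude nearest-neighbor distance of the kind commonly used for earthquake cluster identification (after Zaliapin and co-workers) is given below. The b-value, fractal dimension and the toy catalog are illustrative placeholders, and the study's exact parameterisation may differ.

    ```python
    # Sketch: rescaled nearest-neighbor distance eta = dt * dr^d_f * 10^(-b*m_parent).
    import numpy as np

    def nearest_neighbor_distances(times, x, y, mags, b=1.0, d_f=1.6):
        """times in years, x/y in km, magnitudes; returns, for each event, the
        rescaled distance eta to its nearest preceding 'parent' event."""
        n = len(times)
        eta = np.full(n, np.inf)
        for j in range(1, n):
            dt = times[j] - times[:j]                      # positive by ordering
            dr = np.hypot(x[j] - x[:j], y[j] - y[:j])
            dr = np.maximum(dr, 1e-3)                      # avoid zero distances
            candidates = dt * dr**d_f * 10 ** (-b * mags[:j])
            eta[j] = candidates.min()
        return eta

    rng = np.random.default_rng(2)
    n = 300
    t = np.sort(rng.uniform(0, 10, n))                      # years
    x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)   # km
    m = 2.0 + rng.exponential(1.0 / np.log(10), n)          # GR magnitudes, b = 1
    eta = nearest_neighbor_distances(t, x, y, m)
    # Small eta values flag clustered (potentially related) event pairs
    print(np.percentile(np.log10(eta[np.isfinite(eta)]), [10, 50, 90]))
    ```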

  13. Unraveling earthquake stresses: Insights from dynamically triggered and induced earthquakes

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Alfaro-Diaz, R. A.

    2017-12-01

    Induced seismicity, earthquakes caused by anthropogenic activity, has more than doubled in the last several years as a result of practices related to oil and gas production. Furthermore, large earthquakes have been shown to promote the triggering of other events within two fault lengths (static triggering), due to static stresses caused by physical movement along the fault, and also remotely through the passage of seismic waves (dynamic triggering). Thus, in order to understand the mechanisms of earthquake failure, we investigate regions where natural, induced, and dynamically triggered events occur, and specifically target Oklahoma. We first analyze data from EarthScope's USArray Transportable Array (TA) and local seismic networks, implementing an optimized short-term-average/long-term-average (STA/LTA) detector in order to develop local detection and earthquake catalogs. We then identify triggered events through statistical analysis and perform a stress analysis to gain insight into the stress states leading to triggered earthquake failure. We use our observations to determine the role of different transient stresses in contributing to natural and induced seismicity by comparing these stresses to the regional stress orientation. We also delineate critically stressed regions of triggered seismicity that may indicate areas susceptible to earthquake hazards associated with sustained fluid injection in provinces of induced seismicity. Anthropogenic injection and extraction activity can alter the stress state and fluid flow within production basins. By analyzing the stress release on these ancient faults caused by dynamic stresses, we may be able to determine whether fluids are solely responsible for increased seismic activity in induced regions.
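
    A minimal sketch of a classic STA/LTA detector of the kind used to build local earthquake catalogs from continuous waveforms is shown below. The window lengths, trigger threshold and synthetic trace are illustrative; the study's "optimized" detector and production systems add causal windows, detrigger thresholds and coincidence logic across stations.

    ```python
    # Sketch: short-term-average / long-term-average (STA/LTA) event detection.
    import numpy as np

    def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
        """Ratio of short-term to long-term average of the squared signal.
        Uses centered moving averages for brevity; operational detectors
        typically use causal (trailing) windows."""
        sq = trace.astype(float) ** 2
        n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
        sta = np.convolve(sq, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(sq, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)

    fs = 100.0
    t = np.arange(0, 120.0, 1.0 / fs)
    rng = np.random.default_rng(3)
    trace = rng.normal(0, 1, t.size)                          # background noise
    onset = int(60.0 * fs)
    trace[onset:onset + 500] += 8.0 * rng.normal(0, 1, 500)   # synthetic "event"

    ratio = sta_lta(trace, fs)
    triggers = np.where(ratio > 4.0)[0]
    print(f"first trigger at t = {t[triggers[0]]:.1f} s" if triggers.size else "no trigger")
    ```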

  14. The Application of Speaker Recognition Techniques in the Detection of Tsunamigenic Earthquakes

    NASA Astrophysics Data System (ADS)

    Gorbatov, A.; O'Connell, J.; Paliwal, K.

    2015-12-01

    Tsunami warning procedures adopted by national tsunami warning centres largely rely on the classical approach of earthquake location, magnitude determination, and the consequent modelling of tsunami waves. Although this approach is based on known physical theories of earthquake and tsunami generation processes, its main shortcoming is the need to satisfy minimum seismic data requirements to estimate those physical parameters. At least four seismic stations are necessary to locate the earthquake, and a minimum of approximately 10 minutes of seismic waveform observation is needed to reliably estimate the magnitude of a large earthquake similar to the 2004 Indian Ocean tsunami earthquake of M9.2. Consequently, the total time to a tsunami warning can be more than half an hour. In an attempt to reduce the time to tsunami alert, a new approach is proposed based on the classification of tsunamigenic and non-tsunamigenic earthquakes using speaker recognition techniques. A Tsunamigenic Dataset (TGDS) was compiled to promote the development of machine learning techniques for application to seismic trace analysis and, in particular, tsunamigenic event detection, and to compare them to existing seismological methods. The TGDS contains 227 offshore events (87 tsunamigenic and 140 non-tsunamigenic earthquakes with M≥6) from Jan 2000 to Dec 2011, inclusive. A support vector machine classifier using a radial-basis-function kernel was applied to spectral features derived from 400 s frames of three-component, 1-Hz broadband seismometer data. Ten-fold cross-validation was used during training to choose the classifier parameters. Voting was applied to the classifier predictions from each station to form an overall prediction for an event. The F1 score (the harmonic mean of precision and recall) was chosen to rate each classifier, as it provides a compromise between type-I and type-II errors and accounts for the imbalance between the representative number of events in the tsunamigenic and non
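
    A minimal sketch of the classification step described above is given below: an RBF-kernel support vector machine with 10-fold cross-validation scored by F1. The random feature matrix is purely a placeholder for the spectral features derived from 400 s frames of three-component records, and the parameter grid and per-event (rather than per-station, voted) setup are simplifying assumptions.

    ```python
    # Sketch: RBF-kernel SVM with 10-fold cross-validation scored by F1.
    import numpy as np
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    n_events, n_features = 227, 40
    X = rng.normal(size=(n_events, n_features))     # placeholder spectral features
    y = np.r_[np.ones(87, dtype=int), np.zeros(140, dtype=int)]  # tsunamigenic / not

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = GridSearchCV(
        model,
        param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
        scoring="f1",
        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    )
    grid.fit(X, y)
    print(f"best F1 = {grid.best_score_:.2f} with {grid.best_params_}")
    ```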

  15. Earthquakes Versus Surface Deformation: Qualitative and Quantitative Relationships From The Aegean

    NASA Astrophysics Data System (ADS)

    Pavlides, S.; Caputo, R.

    The historical seismicity of the Aegean Region has been revised in order to associate major earthquakes with specific seismogenic structures. Only earthquakes associated with normal faulting have been considered. All available historical and seismotectonic data relative to co-seismic surface faulting have been collected in order to evaluate the surface rupture length (SRL) and the maximum displacement (MD). In order to perform seismic hazard analyses, empirical relationships between these parameters and the magnitude have been inferred and the best-fitting regression functions have been calculated. Both co-seismic fault rupture lengths and maximum displacements show logarithmic relationships, but our data from the Aegean Region have systematically lower values than the same parameters worldwide, though they are similar to those of the Eastern Mediterranean-Middle East region. The upper envelopes of our diagrams (SRL vs Mw and MD vs Mw) have also been estimated and are discussed, because they give useful information on worst-case scenarios. Furthermore, geological and morphological criteria have been used to recognise the tectonic structures along which historical earthquakes occurred, in order to define the geological fault length (GFL). Accordingly, the SRL/GFL ratio seems to have a bimodal distribution, with a major peak at about 0.8-1.0, indicating that several earthquakes break through almost the entire geological fault length, and a second peak around 0.5, related to the possible segmentation of these major neotectonic faults. In contrast, no relationship can be discerned between the SRL/GFL ratio and the magnitude of the corresponding events.
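
    A minimal sketch of the regression step described above is given below: a log-linear fit of magnitude against surface rupture length, plus a simple upper envelope. The (SRL, Mw) pairs are invented placeholders, not the Aegean dataset, and shifting the line to the largest positive residual is just one straightforward way to bound the data from above.

    ```python
    # Sketch: log-linear regression of Mw on surface rupture length, with an
    # upper envelope obtained by shifting the fit up to the largest residual.
    import numpy as np

    # Hypothetical (SRL km, Mw) pairs for normal-faulting events
    srl = np.array([3.0, 8.0, 12.0, 20.0, 35.0, 60.0, 90.0])
    mw = np.array([5.6, 6.0, 6.2, 6.5, 6.8, 7.0, 7.2])

    log_srl = np.log10(srl)
    slope, intercept = np.polyfit(log_srl, mw, 1)     # Mw = a + b * log10(SRL)
    residuals = mw - (intercept + slope * log_srl)
    print(f"best fit:       Mw = {intercept:.2f} + {slope:.2f} * log10(SRL)")
    print(f"upper envelope: Mw = {intercept + residuals.max():.2f} + {slope:.2f} * log10(SRL)")
    ```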

  16. Multiscale heterogeneity of the 2011 Tohoku-oki earthquake by inversion

    NASA Astrophysics Data System (ADS)

    Aochi, H.; Ulrich, T.; Cornier, G.

    2012-12-01

    Earthquake fault heterogeneity is often studied on a set of subfaults in kinematic inversion, while it is sometimes described with spatially localized geometry. Aochi and Ide (EPS, 2011) and Ide and Aochi (submitted to Pageoph and AGU, 2012) apply a concept of multi-scale heterogeneity to simulate the dynamic rupture process of the 2011 Tohoku-oki earthquake, introducing circular patches of different dimensions in the distribution of fault fracture energy. Previously, the patches were given by the past moderate earthquakes in this region, and this seems to be consistent with the evolution process of this mega-earthquake, although a few patches, in particular the largest patch, had not been known previously. In this study, we try to identify the patches by inversion. As demonstrated for several earthquakes, including the 2010 Maule (M8.8) earthquake, it is possible to identify two elliptical asperities kinematically or dynamically (e.g. Ruiz and Madariaga, 2011). In those successful examples, the different asperities are clearly visible and separated in space. The Tohoku-oki earthquake, however, has a hierarchical structure of heterogeneity. We apply a genetic algorithm to invert the model parameters from the ground motions (K-NET and KiK-net from NIED) and high-sampling-rate GPS (GSI). Starting from low frequency ranges (>50 seconds), we obtain an ellipse corresponding to the M9 event located around the hypocenter, coherent with the previous result by Madariaga et al. (pers. comm.). However, it is difficult to identify the second, smaller patch with few constraints. This is mainly because the largest patch covers the entire rupture area and any smaller patch improves the fitting only for the closer stations. Again, this calls for introducing the multi-scale concept into the inversion procedure. Instead of finding the largest patch first, we have to start by extracting the smaller, moderate patches from the beginning of the record, following the rupture process.

  17. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≲ 3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (small interhypocentral distances) tend to have very similar focal mechanisms, implying that the crust can produce essentially identical earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, in order to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within its 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying that it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  18. Source Analysis of Bucaramanga Nest Intermediate-Depth Earthquakes

    NASA Astrophysics Data System (ADS)

    Prieto, G. A.; Pedraza, P.; Dionicio, V.; Levander, A.

    2016-12-01

    Intermediate-depth earthquakes are those that occur at depths of 50 to 300 km in subducting lithosphere and can occasionally be destructive. Despite their ubiquity in earthquake catalogs, their physical mechanism remains unclear because ambient temperatures and pressures at such depths are expected to lead to ductile flow, rather than brittle failure, as a response to stress. Intermediate-depth seismicity rates vary substantially worldwide, even within a single subduction zone, with highly clustered seismicity in some cases (Vrancea, Hindu Kush, etc.). One such place is known as the Bucaramanga Nest (BN), which hosts one of the highest concentrations of intermediate-depth earthquakes in the world. Previous work on these earthquakes has shown that 1) focal mechanisms vary substantially within a very small volume; 2) radiation efficiency is small for M<5 events; 3) repeating and reverse-polarity events are present; and 4) larger events show a complex behavior with two distinct rupture stages. Owing to ongoing efforts by the Colombian Geological Survey (SGC) to densify the national seismic network, it is now possible to better constrain the rupture behavior of these events. In our work we will present results from focal mechanisms based on waveform inversion as well as polarity and S/P amplitude ratios. These results will be contrasted with the detection and classification of repeating families. For the larger events we will determine source parameters and radiation efficiencies. Preliminary results show that reverse-polarity events are present and that two main focal mechanisms, with their corresponding reverse-polarity events, are dominant. Our results have significant implications for our understanding of intermediate-depth earthquakes and the stress conditions that are responsible for this unusual cluster of seismicity.

  19. Mechanical and Statistical Evidence of Human-Caused Earthquakes - A Global Data Analysis

    NASA Astrophysics Data System (ADS)

    Klose, C. D.

    2012-12-01

    The causality of large-scale geoengineering activities and the occurrence of earthquakes with magnitudes of up to M=8 is discussed, and mechanical and statistical evidence is provided. The earthquakes were caused by artificial water reservoir impoundments, underground and open-pit mining, coastal management, hydrocarbon production, and fluid injections/extractions. The presented global earthquake catalog has recently been published in the Journal of Seismology and is available to the public at www.cdklose.com. The data show evidence that statistically significant geomechanical relationships exist between a) the seismic moment magnitudes of observed earthquakes, b) anthropogenic mass shifts on the Earth's crust, and c) the lateral distances of the earthquake hypocenters from the locations of the mass shifts. The findings depend on uncertainties, in particular in the source-parameter estimates of seismic events that predate instrumental recording. First analyses, however, indicate that small- to medium-size earthquakes (M6) tend to be triggered. The rupture propagation of triggered events might be dominated by pre-existing tectonic stress conditions. Beyond event-specific evidence, large earthquakes such as China's 2008 M7.9 Wenchuan earthquake fall into a global pattern and cannot be considered outliers or simply seen as an act of God. Observations also indicate that every second seismic event tends to occur after a decade, while pore-pressure diffusion seems to play a role only when fluids are injected deep underground. The chance that an earthquake nucleates after two or 20 years near an area with a significant mass shift is 25% or 75%, respectively. Moreover, the causative effects of seismic activity depend strongly on the tectonic stress regime of the Earth's crust in which the geoengineering takes place.

  20. Study on safety level of RC beam bridges under earthquake

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia

    2017-08-01

    Based on reliability theory, this study considers uncertainties in material strengths and in the modeling, both of which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, the structural functions and the corresponding reliability were derived, and the safety level of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters under earthquake loading was then analyzed. Using the response surface method to calculate the failure probabilities of the bridge piers under a high-level earthquake, their seismic reliabilities for different damage states within the design reference period were calculated applying a two-stage design, which describes, to some extent, the seismic safety level of the built bridges.
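
    A minimal Monte Carlo sketch of the failure-probability idea (stochastic capacity versus stochastic seismic demand) is shown below; the distributions and the toy capacity model are hypothetical placeholders rather than the paper's response-surface model.

      # Monte Carlo sketch of a pier failure probability,
      # P_f = P(capacity - demand < 0), with stochastic material strengths.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 200_000

      concrete = rng.lognormal(np.log(30.0), 0.15, n)    # concrete strength, MPa
      steel    = rng.lognormal(np.log(400.0), 0.08, n)   # steel yield strength, MPa
      demand   = rng.lognormal(np.log(900.0), 0.35, n)   # seismic moment demand, kN*m

      # Toy capacity model: pier moment capacity grows with both strengths
      # (hypothetical coefficients; a real study would use a section analysis
      # or the fitted response surface).
      capacity = 20.0 * concrete + 2.0 * steel           # kN*m

      p_fail = np.mean(capacity < demand)
      print(f"estimated failure probability: {p_fail:.4f}")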

  1. Earthquake triggering by seismic waves following the landers and hector mine earthquakes

    USGS Publications Warehouse

    Gomberg, J.; Reasenberg, P.A.; Bodin, P.; Harris, R.A.

    2001-01-01

    The proximity and similarity of the 1992, magnitude 7.3 Landers and 1999, magnitude 7.1 Hector Mine earthquakes in California permit testing of earthquake triggering hypotheses not previously possible. The Hector Mine earthquake confirmed inferences that transient, oscillatory 'dynamic' deformations radiated as seismic waves can trigger seismicity rate increases, as proposed for the Landers earthquake [1-6]. Here we quantify the spatial and temporal patterns of the seismicity rate changes [7]. The seismicity rate increase was to the north for the Landers earthquake and primarily to the south for the Hector Mine earthquake. We suggest that rupture directivity results in elevated dynamic deformations north and south of the Landers and Hector Mine faults, respectively, as evident in the asymmetry of the recorded seismic velocity fields. Both dynamic and static stress changes seem important for triggering in the near field with dynamic stress changes dominating at greater distances. Peak seismic velocities recorded for each earthquake suggest the existence of, and place bounds on, dynamic triggering thresholds. These thresholds vary from a few tenths to a few MPa in most places, depend on local conditions, and exceed inferred static thresholds by more than an order of magnitude. At some sites, the onset of triggering was delayed until after the dynamic deformations subsided. Physical mechanisms consistent with all these observations may be similar to those that give rise to liquefaction or cyclic fatigue.

  2. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability
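
    The kind of likelihood scoring described above can be sketched with a per-bin Poisson log-likelihood, as below; the grid, forecast rates, and observed counts are synthetic.

      # Likelihood-ratio comparison of two gridded forecasts using a
      # per-bin Poisson log-likelihood (RELM-style scoring).
      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(3)
      n_bins = 500

      forecast_a = rng.gamma(0.5, 0.02, n_bins)                   # expected events per bin per year
      forecast_b = forecast_a * rng.lognormal(0.0, 0.5, n_bins)   # a perturbed competitor
      observed   = rng.poisson(forecast_a)                        # pretend nature follows forecast A

      def log_likelihood(rates, counts):
          """Joint Poisson log-likelihood of the observed counts given the forecast."""
          return poisson.logpmf(counts, np.maximum(rates, 1e-12)).sum()

      ll_a = log_likelihood(forecast_a, observed)
      ll_b = log_likelihood(forecast_b, observed)
      print(f"log-likelihood A = {ll_a:.1f}, B = {ll_b:.1f}, A - B = {ll_a - ll_b:.1f}")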

  3. Analysis of pre-earthquake ionospheric anomalies before the global M = 7.0+ earthquakes in 2010

    NASA Astrophysics Data System (ADS)

    Yao, Y. B.; Chen, P.; Zhang, S.; Chen, J. J.; Yan, F.; Peng, W. F.

    2012-03-01

    The pre-earthquake ionospheric anomalies that occurred before the global M = 7.0+ earthquakes in 2010 are investigated using the total electron content (TEC) from the global ionosphere map (GIM). We analyze the possible causes of the ionospheric anomalies based on the space environment and magnetic field status. Results show that some anomalies are related to the earthquakes. By analyzing the time of occurrence, duration, and spatial distribution of these ionospheric anomalies, a number of new conclusions are drawn, as follows: earthquake-related ionospheric anomalies are not bound to appear; both positive and negative anomalies are likely to occur; and the earthquake-related ionospheric anomalies discussed in the current study occurred 0-2 days before the associated earthquakes and in the afternoon to sunset (i.e. between 12:00 and 20:00 local time). Pre-earthquake ionospheric anomalies occur mainly in areas near the epicenter. However, the maximum affected area in the ionosphere does not coincide with the vertical projection of the epicenter of the subsequent earthquake, and the directions in which it deviates from the epicenter do not follow a fixed rule. The corresponding ionospheric effects can also be observed in the magnetically conjugate region; however, both the probability of appearance and the extent of the anomalies in the magnetically conjugate region are smaller than near the epicenter. Deep-focus earthquakes may also exhibit very significant pre-earthquake ionospheric anomalies.
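
    One common way to flag TEC anomalies of this kind is to compare each sample against quartile-based bounds built from the same local time over a sliding window of preceding days. The sketch below applies that generic technique to synthetic data; it is not necessarily the authors' exact procedure.

      # Flag TEC samples that fall outside quartile-based bounds computed
      # from the same local time over the preceding days (synthetic data).
      import numpy as np

      rng = np.random.default_rng(4)
      days, per_day = 30, 12                       # 2-hourly GIM TEC samples
      t = np.arange(days * per_day)
      tec = 20 + 8 * np.sin(2 * np.pi * t / per_day) + rng.normal(0, 1.5, t.size)
      tec[-20] += 12.0                             # plant a positive anomaly

      window = 15 * per_day                        # use the previous 15 days
      anomalies = []
      for i in range(window, t.size):
          past = tec[i - window:i:per_day]         # same local time, previous 15 days
          q1, q3 = np.percentile(past, [25, 75])
          lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
          if tec[i] < lo or tec[i] > hi:
              anomalies.append((int(i), round(float(tec[i]), 1)))

      print("flagged samples (index, TECU):", anomalies)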

  4. Modeling, Forecasting and Mitigating Extreme Earthquakes

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  5. Extending earthquakes' reach through cascading.

    PubMed

    Marsan, David; Lengliné, Olivier

    2008-02-22

    Earthquakes, whatever their size, can trigger other earthquakes. Mainshocks cause aftershocks to occur, which in turn activate their own local aftershock sequences, resulting in a cascade of triggering that extends the reach of the initial mainshock. A long-standing difficulty is to determine which earthquakes are connected, either directly or indirectly. Here we show that this causal structure can be found probabilistically, with no a priori model or parameterization. Large regional earthquakes are found to have a short direct influence in comparison to the overall aftershock sequence duration. Relative to these large mainshocks, small earthquakes collectively have a greater effect on triggering. Hence, cascade triggering is a key component in earthquake interactions.

  6. An application of synthetic seismicity in earthquake statistics - The Middle America Trench

    NASA Technical Reports Server (NTRS)

    Ward, Steven N.

    1992-01-01

    It is shown how seismicity calculations based on the concept of fault segmentation, which incorporate the physics of faulting through static dislocation theory, can improve earthquake recurrence statistics and hone the probabilities of hazard. For the Middle America Trench, the spread parameters of the best-fitting lognormal or Weibull distributions (about 0.75) are much larger than the 0.21 intrinsic spread proposed in the Nishenko and Buland (1987) hypothesis. Stress interaction between fault segments disrupts time or slip predictability and causes earthquake recurrence to be far more aperiodic than has been suggested.
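
    As a sketch of how such spread parameters are obtained, the snippet below fits lognormal and Weibull distributions to a set of recurrence intervals and reports the log-standard deviation and the coefficient of variation; the intervals are synthetic and the numbers are illustrative only.

      # Fit lognormal and Weibull distributions to recurrence intervals and
      # report their spread (synthetic intervals).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      intervals = rng.weibull(1.4, 40) * 75.0       # synthetic recurrence times, years

      sigma_ln = np.std(np.log(intervals), ddof=1)  # lognormal spread: std of ln(T)

      shape, _, scale = stats.weibull_min.fit(intervals, floc=0)
      mean, var = stats.weibull_min.stats(shape, scale=scale, moments="mv")
      cv = float(np.sqrt(var) / mean)               # Weibull coefficient of variation

      print(f"lognormal spread ~ {sigma_ln:.2f}, Weibull CV ~ {cv:.2f}")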

  7. Earthquake Catalogue of the Caucasus

    NASA Astrophysics Data System (ADS)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms~7.0, Io=9), the Lechkhumi-Svaneti earthquake of 1350 (Ms~7.0, Io=9), and the Alaverdi earthquake of 1742 (Ms~6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms~6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms~6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia ~25 stations, Azerbaijan ~35 stations, Armenia ~14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus." The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records, improved the locations of the events, and recalculated moment magnitudes in order to obtain unified magnitude

  8. Earthquakes, July-August 1992

    USGS Publications Warehouse

    Person, W.J.

    1992-01-01

    There were two major earthquakes (7.0≤M<8.0) during this reporting period. A magnitude 7.5 earthquake occurred in Kyrgyzstan on August 19, and a magnitude 7.0 quake struck the Ascension Island region on August 28. In southern California, aftershocks of the magnitude 7.6 earthquake of June 28, 1992, continued. One of these aftershocks caused damage and injuries, and at least one other aftershock caused additional damage. Earthquake-related fatalities were reported in Kyrgyzstan and Pakistan.

  9. Laboratory-based maximum slip rates in earthquake rupture zones and radiated energy

    USGS Publications Warehouse

    McGarr, A.; Fletcher, Joe B.; Boettcher, M.; Beeler, N.; Boatwright, J.

    2010-01-01

    Laboratory stick-slip friction experiments indicate that peak slip rates increase with the stresses loading the fault to cause rupture. If this applies also to earthquake fault zones, then the analysis of rupture processes is simplified inasmuch as the slip rates depend only on the local yield stress and are independent of factors specific to a particular event, including the distribution of slip in space and time. We test this hypothesis by first using it to develop an expression for radiated energy that depends primarily on the seismic moment and the maximum slip rate. From laboratory results, the maximum slip rate for any crustal earthquake, as well as various stress parameters including the yield stress, can be determined based on its seismic moment and the maximum slip within its rupture zone. After finding that our new equation for radiated energy works well for laboratory stick-slip friction experiments, we used it to estimate radiated energies for five earthquakes with magnitudes near 2 that were induced in a deep gold mine, an M 2.1 repeating earthquake near the San Andreas Fault Observatory at Depth (SAFOD) site and seven major earthquakes in California and found good agreement with energies estimated independently from spectra of local and regional ground-motion data. Estimates of yield stress for the earthquakes in our study range from 12 MPa to 122 MPa with a median of 64 MPa. The lowest value was estimated for the 2004 M 6 Parkfield, California, earthquake whereas the nearby M 2.1 repeating earthquake, as recorded in the SAFOD pilot hole, showed a more typical yield stress of 64 MPa.

  10. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    NASA Astrophysics Data System (ADS)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    of the rupture extent and dimensions, but not necessarily the strike. We found that, using standard earthquake catalogs such as the National Earthquake Information Center catalog, we can constrain the rupture extent, rupture direction, and in many cases the type of faulting of the mainshock from the aftershocks that occur within the first hour after the mainshock. However, these data may not currently be available in near real time. Since our results show that these early aftershock locations may be used to estimate first-order rupture parameters for large global earthquakes, the near-real-time availability of these data would be useful for fast earthquake damage assessment.
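
    A first-order estimate of rupture length and strike from early aftershocks can be sketched with a principal-component analysis of the epicenters, as below; the aftershock cloud is synthetic and already projected to kilometres (a real application would first project longitude/latitude and handle outliers).

      # Estimate first-order rupture length and strike from early aftershock
      # epicenters via principal component analysis (synthetic epicenters,
      # already projected to km).
      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic aftershocks along a ~100 km rupture striking N45E.
      along  = rng.uniform(-50, 50, 120)
      across = rng.normal(0, 5, 120)
      theta  = np.radians(45.0)
      east   = along * np.sin(theta) + across * np.cos(theta)
      north  = along * np.cos(theta) - across * np.sin(theta)
      xy = np.column_stack([east, north])

      xy -= xy.mean(axis=0)
      eigval, eigvec = np.linalg.eigh(np.cov(xy.T))  # ascending eigenvalues
      major = eigvec[:, -1]                          # direction of greatest scatter

      strike = np.degrees(np.arctan2(major[0], major[1])) % 180.0  # azimuth from north
      length = 4.0 * np.sqrt(eigval[-1])             # roughly +/- 2 sigma extent

      print(f"estimated strike ~ N{strike:.0f}E, rupture length ~ {length:.0f} km")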

  11. Appraising the Early-est earthquake monitoring system for tsunami alerting at the Italian Candidate Tsunami Service Provider

    NASA Astrophysics Data System (ADS)

    Bernardi, F.; Lomax, A.; Michelini, A.; Lauciani, V.; Piatanesi, A.; Lorito, S.

    2015-09-01

    In this paper we present and discuss the performance of the procedure for earthquake location and characterization implemented in the Italian Candidate Tsunami Service Provider at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Rome. Following the ICG/NEAMTWS guidelines, the first tsunami warning messages are based only on seismic information, i.e., epicenter location, hypocenter depth, and magnitude, which are automatically computed by the software Early-est. Early-est is a package for rapid location and seismic/tsunamigenic characterization of earthquakes. The Early-est software package operates using offline-event or continuous-real-time seismic waveform data to perform trace processing and picking, and, at a regular report interval, phase association, event detection, hypocenter location, and event characterization. Early-est also provides mb, Mwp, and Mwpd magnitude estimations. mb magnitudes are preferred for events with Mwp ≲ 5.8, while Mwpd estimations are valid for events with Mwp ≳ 7.2. In this paper we present the earthquake parameters computed by Early-est between the beginning of March 2012 and the end of December 2014 on a global scale for events with magnitude M ≥ 5.5, and we also present the detection timeline. We compare the earthquake parameters automatically computed by Early-est with the same parameters listed in reference catalogs. Such reference catalogs are manually revised/verified by scientists. The goal of this work is to test the accuracy and reliability of the fully automatic locations provided by Early-est. In our analysis, the epicenter location, hypocenter depth and magnitude parameters do not differ significantly from the values in the reference catalogs. Both mb and Mwp magnitudes show differences to the reference catalogs. We thus derived correction functions in order to minimize the differences and correct biases between our values and the ones from the reference catalogs. Correction of the Mwp
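
    A correction function of the kind mentioned above can be derived, in the simplest case, as a linear least-squares fit between the automatic and reference-catalog magnitudes; the magnitude pairs below are synthetic, and the linear form is an assumption of this sketch.

      # Derive a linear correction mapping the automatic magnitude onto the
      # reference-catalog magnitude (synthetic magnitude pairs).
      import numpy as np

      rng = np.random.default_rng(7)
      m_ref  = rng.uniform(5.5, 8.5, 300)                      # reference catalog Mw
      m_auto = 0.9 * m_ref + 0.55 + rng.normal(0, 0.15, 300)   # biased automatic Mwp

      a, b = np.polyfit(m_auto, m_ref, 1)                      # fit M_ref ~ a*M_auto + b
      m_corr = a * m_auto + b

      rms = lambda x: np.sqrt(np.mean(x ** 2))
      print(f"correction: M_ref ~ {a:.2f} * M_auto + {b:.2f}")
      print(f"rms residual before: {rms(m_auto - m_ref):.2f}, after: {rms(m_corr - m_ref):.2f}")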

  12. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
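
    The scale-invariance property referred to above can be made explicit with the standard Weibull definitions (shape \beta, scale \tau); these are textbook relations, not equations quoted from the paper:

      S(t) = \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad
      p(t) = \frac{\beta}{\tau}\left(\frac{t}{\tau}\right)^{\beta-1} e^{-(t/\tau)^{\beta}}, \qquad
      h(t) = \frac{p(t)}{S(t)} = \frac{\beta}{\tau}\left(\frac{t}{\tau}\right)^{\beta-1}.

    The hazard rate h(t) is a pure power law, so rescaling t only multiplies it by a constant factor; \beta = 1 gives a constant hazard (Poissonian recurrence), while \beta > 1 gives a hazard that grows with the time since the last event, as expected for quasi-periodic characteristic earthquakes.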

  13. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set for California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb failure function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
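
    A toy version of such a simulator, assuming constant loading, complete stress drop, and a fixed static stress-transfer matrix standing in for the Coulomb interaction, can be sketched as follows; all numbers are illustrative, not the paper's California fault model.

      # Toy multi-fault earthquake simulator: each fault is loaded at a
      # constant rate, fails when its stress reaches a strength threshold,
      # and each failure transfers a fraction of its stress drop to the others.
      import numpy as np

      rng = np.random.default_rng(10)
      n_faults, years = 5, 20_000
      load_rate = rng.uniform(0.8, 1.2, n_faults)        # stress units per year
      strength  = rng.uniform(80, 120, n_faults)         # failure threshold
      coupling  = 0.05 * rng.random((n_faults, n_faults))
      np.fill_diagonal(coupling, 0.0)                    # static stress-transfer matrix

      stress = rng.uniform(0, 1, n_faults) * strength
      catalog = []
      for year in range(years):
          stress += load_rate
          failed = stress >= strength
          while failed.any():                            # cascade of triggered failures
              for i in np.flatnonzero(failed):
                  catalog.append((year, int(i)))
                  drop = stress[i]
                  stress[i] = 0.0                        # complete stress drop
                  stress += coupling[:, i] * drop        # stress transfer to the others
              failed = stress >= strength

      print(f"{len(catalog)} events on {n_faults} faults over {years} yr")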

  14. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated for an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; considering such a variation makes a significant contribution to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; the
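
    The magnitude-frequency model referred to above is commonly written as the Gutenberg-Richter law multiplied by a cumulative-normal detection rate; the form below follows that convention, with the caveat that the exact parameterization and the completeness definition vary between studies:

      \lambda(M) = A\,10^{-bM}\, q(M \mid \mu, \sigma), \qquad
      q(M \mid \mu, \sigma) = \int_{-\infty}^{M} \frac{1}{\sqrt{2\pi}\,\sigma}
          \exp\!\left[-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right] \mathrm{d}x,

    where 10^{-bM} is the Gutenberg-Richter law, q is the probability that an event of magnitude M is detected, and \mu is the magnitude at which the detection rate equals 50% (the parameter whose daily and weekly variations are estimated above). A completeness magnitude can then be taken as a high quantile of the detection rate, e.g. Mc = \mu + k\sigma for some chosen k.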

  15. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_{λ } ≥ 7.0 and M_{λ } ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_{σ } ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_{λ } ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_{λ } ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_{λ } ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_{λ } ≥ 7.0 in each catalog and
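
    The natural-time analysis described above can be sketched as follows: count the small events between successive large ones and fit a Weibull distribution to those counts. The catalog below is a synthetic Gutenberg-Richter catalog with no clustering, so the fitted exponent should come out near 1.

      # Convert a catalog to natural time (small-event counts between large
      # events) and fit a Weibull distribution to the counts.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)

      # Synthetic magnitudes following Gutenberg-Richter with b = 1, Mmin = 5.1.
      n_events = 50_000
      mags = 5.1 + rng.exponential(1.0 / np.log(10), n_events)

      large = np.flatnonzero(mags >= 7.0)               # indices of "large" events
      counts = np.diff(large) - 1                       # small events between them
      counts = counts[counts > 0]

      beta, _, _ = stats.weibull_min.fit(counts, floc=0)
      print(f"{large.size} large events, Weibull exponent beta ~ {beta:.2f}")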

  16. Modeling of earthquake ground motion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. The accuracy of the interpolation
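
    The simulation idea in topic (i) can be sketched with the inverse FFT: choose a Fourier amplitude spectrum, draw phase differences whose mean controls where the energy arrives, accumulate them into phase angles, and invert. The amplitude spectrum and phase-difference distribution below are simple placeholders, not the calibrated models developed in the research.

      # Generate a synthetic acceleration time history from a target Fourier
      # amplitude spectrum and simulated phase differences via the inverse FFT.
      import numpy as np

      rng = np.random.default_rng(9)
      dt, n = 0.01, 2048                           # 100 Hz sampling, ~20 s record
      freq = np.fft.rfftfreq(n, dt)
      df = freq[1] - freq[0]

      # Placeholder amplitude spectrum: band-limited, peaked near 2 Hz.
      amp = freq * np.exp(-freq / 2.0)

      # Phase differences: their mean sets the arrival time of the energy,
      # their scatter controls how spread out the motion is.
      t_center = 8.0                               # seconds
      dphi = rng.normal(-2.0 * np.pi * t_center * df, 0.6, freq.size)
      phase = np.cumsum(dphi)

      acc = np.fft.irfft(amp * np.exp(1j * phase), n=n)
      acc /= np.max(np.abs(acc))                   # normalize to unit peak
      t = np.arange(n) * dt
      print(f"peak |a| at t = {t[np.argmax(np.abs(acc))]:.1f} s of {n * dt:.0f} s")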

  17. A catalog of coseismic uniform-slip models of geodetically unstudied earthquakes along the Sumatran plate boundary

    NASA Astrophysics Data System (ADS)

    Wong, N. Z.; Feng, L.; Hill, E.

    2017-12-01

    The Sumatran plate boundary has experienced five Mw > 8 great earthquakes, a handful of Mw 7-8 earthquakes, and numerous small to moderate events since the 2004 Mw 9.2 Sumatra-Andaman earthquake. Geodetic studies of these moderate earthquakes have mostly been passed over in favour of larger events. In this study we therefore present a catalog of coseismic uniform-slip models of one Mw 7.2 earthquake and 17 Mw 5.9-6.9 events that have mostly gone geodetically unstudied. These events occurred close to various continuous stations within the Sumatran GPS Array (SuGAr), allowing the network to record their surface deformation. However, due to their relatively small magnitudes, most of these moderate earthquakes were recorded by only 1-4 GPS stations. With the limited observations per event, we first constrain most of the model parameters (e.g. location, slip, patch size, strike, dip, rake) using various external sources (e.g., the ANSS catalog, gCMT, Slab1.0, and empirical relationships). We then use grid-search forward models to explore a range of some of these parameters (geographic position for all events and additionally depth for some events). Our results indicate that the gCMT centroid locations in the Sumatran subduction zone might be biased towards the west for smaller events, while ANSS epicentres might be biased towards the east. The more accurate locations of these events are potentially useful in understanding the nature of various structures along the megathrust, particularly the persistent rupture barriers.

  18. Quantified sensitivity of lakes to record historic earthquakes: Implications for paleoseismology

    NASA Astrophysics Data System (ADS)

    Wilhelm, Bruno; Nomade, Jerome; Crouzet, Christian; Litty, Camille; Belle, Simon; Rolland, Yann; Revel, Marie; Courboulex, Françoise; Arnaud, Fabien; Anselmetti, Flavio S.

    2015-04-01

    Seismic hazard assessment is a challenging issue for modern societies. A key parameter to be estimated is the recurrence interval of damaging earthquakes. In moderately active seismo-tectonic regions, this requires the establishment of earthquake records long enough to be relevant, i.e. far longer than historical observations. Here we investigate how lake sediments can be used for this purpose and quantify the conditions that enable earthquake recording. For this purpose, (i) we studied nine lake-sediment sequences to reconstruct mass-movement chronicles in different settings of the French Alpine range and (ii) we compared the chronicles to the well-documented earthquake history of the last five centuries. The studied lakes are all small alpine-type lakes lying directly on bedrock. All lake sequences have been studied following the same methodology: (i) a multi-core approach to understand the sedimentary processes within the lake basins, (ii) a high-resolution lithological and grain-size characterization, and (iii) dating based on short-lived radionuclide measurements, lead contamination and radiocarbon ages. We identified 40 deposits related to 26 mass-movement (MM) occurrences. 46% (12 of 26) of the MMs are synchronous in neighbouring lakes, strongly supporting an earthquake origin. In addition, the good agreement between MM ages and historical earthquake dates suggests an earthquake trigger for 88% (23 of 26) of them. The related epicenters are always located at distances of less than 100 km from the lakes, and their epicentral MSK intensities range between VII and IX. However, the number of earthquake-triggered MMs varies between lakes of the same region, suggesting a varying sensitivity of the lake sequences to earthquake shaking, i.e. distinct slope stabilities of the lake sediments. The quantification of this earthquake sensitivity and its comparison to the lake system and sediment characteristics suggest that the primary factor explaining this variability is

  19. Leveraging geodetic data to reduce losses from earthquakes

    USGS Publications Warehouse

    Murray, Jessica R.; Roeloffs, Evelyn A.; Brooks, Benjamin A.; Langbein, John O.; Leith, William S.; Minson, Sarah E.; Svarc, Jerry L.; Thatcher, Wayne R.

    2018-04-23

    event response products and by expanded use of geodetic imaging data to assess fault rupture and source parameters. Uncertainties in the NSHM, and in regional earthquake models, are reduced by fully incorporating geodetic data into earthquake probability calculations. Geodetic networks and data are integrated into the operations and earthquake information products of the Advanced National Seismic System (ANSS). Earthquake early warnings are improved by more rapidly assessing ground displacement and the dynamic faulting process for the largest earthquakes using real-time geodetic data. Methodology for probabilistic earthquake forecasting is refined by including geodetic data when calculating evolving moment release during aftershock sequences and by better understanding the implications of transient deformation for earthquake likelihood. A geodesy program that encompasses a balanced mix of activities to sustain mission-critical capabilities, grows new competencies through the continuum of fundamental to applied research, and ensures sufficient resources for these endeavors provides a foundation by which the EHP can be a leader in the application of geodesy to earthquake science. With this in mind the following objectives provide a framework to guide EHP efforts: Fully utilize geodetic information to improve key products, such as the NSHM and EEW, and to address new ventures like the USGS Subduction Zone Science Plan. Expand the variety, accuracy, and timeliness of post-earthquake information products, such as PAGER (Prompt Assessment of Global Earthquakes for Response), through incorporation of geodetic observations. Determine if geodetic measurements of transient deformation can significantly improve estimates of earthquake probability. Maintain an observational strategy aligned with the target outcomes of this document that includes continuous monitoring, recording of ephemeral observations, focused data collection for use in research, and application-driven data processing and

  20. Earthquake and Tsunami booklet based on two Indonesia earthquakes

    NASA Astrophysics Data System (ADS)

    Hayashi, Y.; Aci, M.

    2014-12-01

    Many destructive earthquakes occurred during the last decade in Indonesia. These experiences are very important lessons for people around the world who live in earthquake and tsunami countries. We are collecting the testimonies of tsunami survivors to clarify successful evacuation processes and to make clear the characteristic physical behavior of tsunamis near the coast. We studied 2 tsunami events: the 2004 Indian Ocean tsunami and the 2010 Mentawai slow-earthquake tsunami. Many videos and photographs were taken by people at some places during the 2004 Indian Ocean tsunami disaster; nevertheless, these covered only a few restricted points, and the tsunami behavior at other places remained unknown. In this study, we tried to collect extensive information about tsunami behavior, not only at many places but also over a wide time range after the strong shaking. In the Mentawai case, the earthquake occurred at night, so there are no impressive photos. To collect detailed information about the evacuation process from tsunamis, we devised an interview method that involves making pictures of the tsunami experience from the scenes of the victims' stories. In the 2004 Aceh case, none of the survivors knew about tsunami phenomena: because there had been no big earthquakes with tsunamis for one hundred years in the Sumatra region, the public had no knowledge about tsunamis. This situation was much improved in the 2010 Mentawai case; TV programs and NGO or governmental public education programs about tsunami evacuation are now widespread in Indonesia, and many people know the fundamentals of earthquake and tsunami disasters. We made a drill book based on the victims' stories and painted impressive scenes of the 2 events. We used the drill book in a disaster education event for a school committee in West Java. About 80% of the students and teachers evaluated the contents of the drill book as useful for correct understanding.