Science.gov

Sample records for large scale earthquakes

  1. Scaling differences between large interplate and intraplate earthquakes

    NASA Technical Reports Server (NTRS)

    Scholz, C. H.; Aviles, C. A.; Wesnousky, S. G.

    1985-01-01

    A study of large intraplate earthquakes with well-determined source parameters shows that these earthquakes obey a scaling law similar to large interplate earthquakes, in which M0 varies as L^2, or u = alpha*L, where L is rupture length and u is slip. In contrast to interplate earthquakes, for which alpha ~ 1 x 10^-5, the intraplate events have alpha ~ 6 x 10^-5, which implies that these earthquakes have stress drops about 6 times higher than interplate events. This result is independent of focal mechanism type. It implies that intraplate faults have a higher frictional strength than plate boundaries, and hence that faults are velocity or slip weakening in their behavior. This factor may be important in producing the concentrated deformation that creates and maintains plate boundaries.
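
    As a rough back-of-envelope illustration of this scaling law (a sketch only; the shear modulus and saturated rupture width below are assumed values, not taken from the paper):

```python
# Illustration of M0 ~ L^2 scaling when slip grows as u = alpha * L and the
# down-dip width W is saturated. MU and W are assumptions, not paper values.
MU = 3.0e10   # Pa, typical crustal shear modulus (assumption)
W = 15e3      # m, assumed saturated down-dip rupture width

def moment(L, alpha):
    u = alpha * L            # slip grows linearly with rupture length
    return MU * u * L * W    # M0 = mu * u * (L * W), proportional to L^2

for label, alpha in [("interplate", 1e-5), ("intraplate", 6e-5)]:
    L = 100e3  # 100 km rupture
    print(f"{label}: u = {alpha * L:.2f} m, M0 = {moment(L, alpha):.2e} N m")

# In this simple picture the stress drop scales with mu * u / L = mu * alpha,
# so the intraplate/interplate stress-drop ratio is just 6e-5 / 1e-5 = 6.
```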

  2. Large scale simulations of the great 1906 San Francisco earthquake

    NASA Astrophysics Data System (ADS)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large-scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by the USGS, together with the rupture model by Song et al., is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space, and the 3-D post-processing was done in parallel.

  3. A geometric frequency-magnitude scaling transition: Measuring b = 1.5 for large earthquakes

    NASA Astrophysics Data System (ADS)

    Yoder, Mark R.; Holliday, James R.; Turcotte, Donald L.; Rundle, John B.

    2012-04-01

    We identify two distinct scaling regimes in the frequency-magnitude distribution of global earthquakes. Specifically, we measure the scaling exponent b = 1.0 for "small" earthquakes with 5.5 < m < 7.6 and b = 1.5 for "large" earthquakes with 7.6 < m < 9.0. This transition, at m_t = 7.6, can be explained by geometric constraints on the rupture. In conjunction with supporting literature, this corroborates theories in favor of fully self-similar and magnitude-independent earthquake physics. We also show that the scaling behavior and abrupt transition between the scaling regimes imply that earthquake ruptures have compact shapes and smooth rupture fronts.
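
    A minimal sketch of such a piecewise Gutenberg-Richter distribution, with the two branches matched at the transition magnitude (the a-value is an arbitrary assumption, used only for illustration):

```python
# Piecewise Gutenberg-Richter law with the scaling break described above:
# b = 1.0 below mt = 7.6 and b = 1.5 above it, continuous at the transition.
B_SMALL, B_LARGE, MT = 1.0, 1.5, 7.6

def log10_cumulative_rate(m, a=8.0):
    """log10 N(>= m) for the two-branch law; a is an assumed productivity."""
    if m <= MT:
        return a - B_SMALL * m
    # match the branches at m = mt, then decay faster for large events
    return a - B_SMALL * MT - B_LARGE * (m - MT)

for m in (6.0, 7.0, 7.6, 8.5, 9.0):
    print(f"m >= {m}: log10 N = {log10_cumulative_rate(m):.2f}")
```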

  4. Reduce seismic design conservatism through large-scale earthquake experiments

    SciTech Connect

    Tang, H.T.; Stepp, J.C.

    1992-01-01

    For structures founded on soil deposits, the interaction between the soil and the structure caused by incident seismic waves modifies the foundation input motion and the dynamic characteristics of the soil-structure system. This paper reports that, as a result, soil-structure interaction (SSI) plays a critical role in the design of nuclear plant structures. Recognizing that experimental validation and quantification are required, two scaled cylindrical reinforced-concrete containment models (1/4-scale and 1/12-scale of typical full-scale reactor containments) were constructed in Lotung, an active seismic region in Taiwan. Forced vibration tests (FVT) were also conducted to characterize the dynamic behavior of the soil-structure system. Based on these data, a series of round-robin blind prediction and post-test correlation analyses using various currently available SSI methods were performed.

  5. Earthquake triggering and large-scale geologic storage of carbon dioxide

    PubMed Central

    Zoback, Mark D.; Gorelick, Steven M.

    2012-01-01

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185–5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  6. Earthquake triggering and large-scale geologic storage of carbon dioxide.

    PubMed

    Zoback, Mark D; Gorelick, Steven M

    2012-06-26

    Despite its enormous cost, large-scale carbon capture and storage (CCS) is considered a viable strategy for significantly reducing CO2 emissions associated with coal-based electrical power generation and other industrial sources of CO2 [Intergovernmental Panel on Climate Change (2005) IPCC Special Report on Carbon Dioxide Capture and Storage. Prepared by Working Group III of the Intergovernmental Panel on Climate Change, eds Metz B, et al. (Cambridge Univ Press, Cambridge, UK); Szulczewski ML, et al. (2012) Proc Natl Acad Sci USA 109:5185-5189]. We argue here that there is a high probability that earthquakes will be triggered by injection of large volumes of CO2 into the brittle rocks commonly found in continental interiors. Because even small- to moderate-sized earthquakes threaten the seal integrity of CO2 repositories, in this context, large-scale CCS is a risky, and likely unsuccessful, strategy for significantly reducing greenhouse gas emissions. PMID:22711814

  7. Large scale dynamic rupture scenario of the 2004 Sumatra-Andaman megathrust earthquake

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas; Madden, Elizabeth H.; Wollherr, Stephanie; Gabriel, Alice A.

    2016-04-01

    The Great Sumatra-Andaman earthquake of 26 December 2004 is one of the strongest and most devastating earthquakes in recent history. Most of the damage and the ~230,000 fatalities were caused by the tsunami generated by the Mw 9.1-9.3 event. Various finite-source models of the earthquake have been proposed, but poor near-field observational coverage has led to distinct differences in source characterization. Even the fault dip angle and depth extent are subject to debate. We present a physically realistic dynamic rupture scenario of the earthquake using state-of-the-art numerical methods and seismotectonic data. Due to the lack of near-field observations, our setup is constrained by the overall characteristics of the rupture, including the magnitude, propagation speed, and extent along strike. In addition, we incorporate the detailed geometry of the subducting fault using Slab1.0 to the south and aftershock locations to the north, combined with high-resolution topography and bathymetry data. The possibility of inhomogeneous background stress, resulting from the curved shape of the slab along strike and the large fault dimensions, is discussed. The possible activation of thrust faults splaying off the megathrust in the vicinity of the hypocenter is also investigated. Dynamic simulation of this 1300 to 1500 km rupture is a computational and geophysical challenge. In addition to capturing the large-scale rupture, the simulation must resolve the process zone at the rupture tip, whose characteristic length is comparable to that of smaller earthquakes and which shrinks with propagation distance. Thus, the fault must be finely discretised. Moreover, previously published inversions agree on a rupture duration of ~8 to 10 minutes, suggesting an overall slow rupture speed. Hence, both long temporal scales and large spatial dimensions must be captured. We use SeisSol, a software package based on an ADER-DG scheme, to solve the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time.
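
    The quoted length and duration imply the following average rupture speeds (simple arithmetic on the abstract's numbers, not a result from the study):

```python
# Average rupture speed implied by a 1300-1500 km rupture lasting 8-10 min.
for L_km in (1300, 1500):
    for T_min in (8, 10):
        print(f"L = {L_km} km, T = {T_min} min -> "
              f"v_avg ~ {L_km / (T_min * 60):.1f} km/s")
# The resulting ~2.2-3.1 km/s, at or below typical crustal rupture speeds,
# is consistent with the overall slow rupture speed noted above.
```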

  8. DYNAMIC BEHAVIOR OF CONCRETE GRAVITY DAM ON JOINTED ROCK FOUNDATION DURING LARGE-SCALE EARTHQUAKE

    NASA Astrophysics Data System (ADS)

    Kimata, Hiroyuki; Fujita, Yutaka; Horii, Hideyuki; Yazdani, Mahmoud

    Dynamic cracking analysis of a concrete gravity dam during a large-scale earthquake has been carried out, considering the progressive failure of the jointed rock foundation. First, to take the progressive failure of the rock foundation into account, a constitutive law for jointed rock is assumed and its validity is evaluated by simulation analysis based on a past experimental model. Then, dynamic cracking analysis of a 100-m-high dam model is performed, using the previously proposed approach with tangent stiffness-proportional damping to express crack propagation behavior, together with the constitutive law for jointed rock. The crack propagation behavior of the dam body and the progressive failure of the jointed rock foundation are investigated.

  9. Optimization and Scalability of a Large-scale Earthquake Simulation Application

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Olsen, K. B.; Hu, Y.; Day, S.; Dalguer, L. A.; Minster, B.; Moore, R.; Zhu, J.; Maechling, P.; Jordan, T.

    2006-12-01

    In 2004, the Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation, called TeraShake. TeraShake propagated seismic waves across a domain of 600 km by 300 km by 80 km at 200 meter resolution and 1.8 billion grid points, among the largest and most detailed earthquake simulations of the southern San Andreas fault. The TeraShake 1 code is based on a 4th order FD Anelastic Wave Propagation Model (AWM), developed by K. Olsen, using a kinematic source description. The enhanced TeraShake 2 then added a new physics-based dynamic component, extending the capability to very-large-scale earthquake simulations. A high 100 m resolution was used to generate a physically realistic earthquake source description for the San Andreas fault. The executions of very-large-scale TeraShake 2 simulations with the high-resolution dynamic source used up to 1024 processors on the TeraGrid, adding more than 60 TB of simulation output to the 168 TB SCEC digital library, managed by the SDSC Storage Resource Broker (SRB) at SDSC. The execution of these large simulations requires high levels of expertise and resource coordination. We examine the lessons learned in enabling the execution of the TeraShake application. In particular, we look at the challenges imposed by single-processor optimization of the application performance, optimization of the I/O handling, optimization of the run initialization, and the execution of the data-intensive simulations. The TeraShake code was optimized to improve scalability to 2048 processors, with a parallel efficiency of 84%. Our latest TeraShake simulation sustains 1 Teraflop/s performance, completing a simulation in less than 9 hours on the SDSC DataStar. This is more than 10 times faster than previous TeraShake simulations. Some of the TeraShake production simulations were carried out using grid computing resources, including execution on NCSA TeraGrid resources, and run-time archiving of outputs onto SDSC

  10. Using Speculative Execution to Reduce Communication in a Parallel Large Scale Earthquake Simulation

    NASA Astrophysics Data System (ADS)

    Heien, E. M.; Yikilmaz, M. B.; Sachs, M. K.; Rundle, J. B.; Turcotte, D. L.; Kellogg, L. H.

    2011-12-01

    Earthquake simulations on parallel systems can be communication intensive due to local events (rupture waves) that have global effects (stress transfer). These events require global communication to transmit the effects of increased stress to model elements on other computing nodes. We describe a method of using speculative execution in a large-scale parallel computation to decrease communication and improve simulation speed. This method exploits the tendency of earthquake ruptures to remain physically localized even though their effects on stress extend over long ranges. In this method we assume the stress transfer caused by a rupture remains localized and avoid global communication until the rupture has a high probability of passing to another node. We then calculate the stress state of the system to ensure that the rupture in fact remained localized, proceeding if the assumption was correct or rolling back the calculation otherwise. Using this method we are able to reduce communication frequency by 78%, in turn decreasing communication time by up to 66% and improving simulation speed by up to 45%.
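
    A runnable toy of the speculate-then-verify idea described here (all names, sizes, and thresholds are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Toy sketch: a 1-D stress array stands in for fault elements partitioned
# across compute nodes. A step is taken speculatively without communication,
# then rolled back if the rupture could have affected another node.
N, NODE_SIZE, TRANSFER_FRAC = 64, 16, 0.5
rng = np.random.default_rng(0)
stress = rng.uniform(0.8, 0.99, N) + 0.05  # loading pushes elements near failure

def rupture_once(s):
    """Fail the most-stressed element; transfer half its drop to each neighbor."""
    i = int(np.argmax(s))
    drop = s[i] * TRANSFER_FRAC
    s[i] -= drop
    for j in (i - 1, i + 1):
        if 0 <= j < len(s):
            s[j] += drop / 2
    return i

snapshot = stress.copy()                 # checkpoint before speculating
i = rupture_once(stress)                 # speculative, communication-free step
if i % NODE_SIZE in (0, NODE_SIZE - 1):  # rupture touches a node boundary:
    stress[:] = snapshot                 # speculation unsafe -> roll back and
    rupture_once(stress)                 # redo after a (here elided) global sync
    print(f"element {i}: rolled back, global stress transfer required")
else:
    print(f"element {i}: stress transfer stayed local to node {i // NODE_SIZE}")
```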

  11. Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.

    2003-12-01

    We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary, in the near-source region, and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved by setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes must utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high performance data management system is desirable. We show results from a large scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time step.

  12. Reconsidering earthquake scaling

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Wech, A.; Creager, K.; Obara, K.; Agnew, D.

    2016-06-01

    The relationship (scaling) between scalar moment, M0, and duration, T, potentially provides key constraints on the physics governing fault slip. The prevailing interpretation of M0-T observations proposes different scaling for fast (earthquakes) and slow (mostly aseismic) slip populations and thus fundamentally different driving mechanisms. We show that a single model of slip events within bounded slip zones may explain nearly all fast and slow slip M0-T observations, and both slip populations have a change in scaling, where the slip area growth changes from 2-D when too small to sense the boundaries to 1-D when large enough to be bounded. We present new fast and slow slip M0-T observations that sample the change in scaling in each population, which are consistent with our interpretation. We suggest that a continuous but bimodal distribution of slip modes exists and M0-T observations alone may not imply a fundamental difference between fast and slow slip.
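
    A minimal numerical sketch of this bounded-growth picture (the rupture speed, zone width, and prefactor below are assumptions chosen only for illustration):

```python
# Moment-duration scaling with a change in growth dimension: M0 ~ T^3 while
# the slip area expands in 2-D, then M0 ~ T once rupture spans the bounded
# dimension of the slip zone. Matched for continuity at the crossover.
V, W = 3.0e3, 30e3   # assumed rupture speed (m/s) and bounded zone width (m)
C2 = 1.0e13          # assumed prefactor for the 2-D branch (arbitrary units)
T_STAR = W / V       # duration at which growth switches from 2-D to 1-D

def moment(T):
    if T <= T_STAR:
        return C2 * T ** 3           # unbounded 2-D growth
    return C2 * T_STAR ** 2 * T      # bounded 1-D growth, continuous at T_STAR

for T in (1.0, 5.0, T_STAR, 100.0, 1000.0):
    print(f"T = {T:8.1f} s -> M0 ~ {moment(T):.2e} (arbitrary units)")
```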

  13. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.; Ruppert, S.

    2002-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g., Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g., Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by analyzing aftershock sequences in the Western U.S. and Turkey using two different techniques. First we examine the observed regional S-wave spectra by fitting with a parametric model (Walter and Taylor, 2002) with and without variable stress drop scaling. Because the aftershock sequences have common stations and paths, we can examine the S-wave spectra of events by size to determine what type of apparent stress scaling, if any, is most consistent with the data. Second, we use regional coda envelope techniques (e.g., Mayeda and Walter, 1996; Mayeda et al., 2002) on the same events to directly measure energy and moment. The coda technique corrects for path and site effects using an empirical Green function technique and independent calibration with surface wave derived moments. Our hope is that by carefully analyzing a very large number of events in a consistent manner using two different techniques we can start to resolve this apparent stress scaling issue. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

  14. Large-scale mapping of landslides in the epicentral area Loma Prieta earthquake of October 17, 1989, Santa Cruz County

    SciTech Connect

    Spittler, T.E.; Sydnor, R.H.; Manson, M.W.; Levine, P.; McKittrick, M.M.

    1990-01-01

    The Loma Prieta earthquake of October 17, 1989 triggered landslides throughout the Santa Cruz Mountains in central California. The California Department of Conservation, Division of Mines and Geology (DMG) responded to a request for assistance from the County of Santa Cruz, Office of Emergency Services to evaluate the geologic hazard from major reactivated large landslides. DMG prepared a set of geologic maps showing the landslide features that resulted from the October 17 earthquake. The principal purposes of large-scale mapping of these landslides are: (1) to provide county officials with regional landslide information that can be used for timely recovery of damaged areas; (2) to identify disturbed ground which is potentially vulnerable to landslide movement during winter rains; (3) to provide county planning officials with timely geologic information that will be used for effective land-use decisions; and (4) to document regional landslide features that may not otherwise be available for individual site reconstruction permits and for future development.

  15. Earthquake Apparent Stress Scaling

    NASA Astrophysics Data System (ADS)

    Mayeda, K.; Walter, W. R.

    2003-04-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of recent papers finds evidence that energy release per seismic moment (apparent stress) is constant (e.g., Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Another set of recent papers finds the apparent stress increases with magnitude (e.g., Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for and determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We have just started a project to reexamine this issue by applying the same methodology to a series of datasets that spans roughly 10 orders in seismic moment, M0. We will summarize recent results using a coda envelope methodology of Mayeda et al. (2003), which provides the most stable source spectral estimates to date. This methodology eliminates the complicating effects of lateral path heterogeneity, source radiation pattern, directivity, and site response (e.g., amplification, f-max and kappa). We find that in tectonically active continental crustal areas the total radiated energy scales as M0^1.25, whereas in regions of relatively younger oceanic crust the stress drop is generally lower and exhibits a 1-to-1 scaling with moment. In addition to answering a fundamental question in earthquake source dynamics, this study addresses how one would scale small earthquakes in a particular region up to a future, more damaging earthquake. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
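
    For reference, apparent stress is sigma_a = mu*Es/M0, and the two end-member scalings under debate can be contrasted numerically (the shear modulus and prefactor below are assumptions, not values from either paper):

```python
# Apparent stress for the two scalings debated above: Es ~ M0 gives constant
# sigma_a, while Es ~ M0^1.25 gives sigma_a growing as M0^0.25.
MU = 3.0e10  # Pa, assumed crustal shear modulus

def apparent_stress(Es, M0, mu=MU):
    """sigma_a = mu * Es / M0, in Pa."""
    return mu * Es / M0

for M0 in (1e15, 1e17, 1e19):                        # N m
    Es_const = 1.6e-5 * M0                           # 1:1 scaling (assumed prefactor)
    Es_grow = 1.6e-5 * 1e15 * (M0 / 1e15) ** 1.25    # Es ~ M0^1.25 branch
    print(f"M0 = {M0:.0e}: sigma_a = {apparent_stress(Es_const, M0):.1e} Pa (1:1), "
          f"{apparent_stress(Es_grow, M0):.1e} Pa (M0^1.25)")
```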

  16. Aftershocks of Chile's Earthquake for an Ongoing, Large-Scale Experimental Evaluation

    ERIC Educational Resources Information Center

    Moreno, Lorenzo; Trevino, Ernesto; Yoshikawa, Hirokazu; Mendive, Susana; Reyes, Joaquin; Godoy, Felipe; Del Rio, Francisca; Snow, Catherine; Leyva, Diana; Barata, Clara; Arbour, MaryCatherine; Rolla, Andrea

    2011-01-01

    Evaluation designs for social programs are developed assuming minimal or no disruption from external shocks, such as natural disasters. This is because extremely rare shocks may not make it worthwhile to account for them in the design. Among extreme shocks is the 2010 Chile earthquake. Un Buen Comienzo (UBC), an ongoing early childhood program in…

  17. Unified scaling law for earthquakes

    PubMed Central

    Christensen, Kim; Danon, Leon; Scanlon, Tim; Bak, Per

    2002-01-01

    We propose and verify a unified scaling law that provides a framework for viewing the probability of the occurrence of earthquakes in a given region and for a given cutoff magnitude. The law shows that earthquakes occur in hierarchical correlated clusters, which overlap with other spatially separated correlated clusters for large enough time periods and areas. For a small enough region and time scale, only a single correlated group can be sampled. The law links together the Gutenberg–Richter Law, the Omori Law of aftershocks, and the fractal dimensions of the faults. The Omori Law is shown to be the short-time limit of a general hierarchical phenomenon containing the statistics of both “main shocks” and “aftershocks,” indicating that they are created by the same mechanism. PMID:11875203
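
    A sketch of how the rescaling behind such a unified law is applied in practice (the exponents and waiting times below are synthetic placeholders, not the paper's values):

```python
import numpy as np

# Waiting times T between events above magnitude m, within cells of linear
# size L (degrees), are collapsed using a variable of the form
# x = T * 10**(-b*m) * L**df, combining the Gutenberg-Richter b-value with
# a fault fractal dimension df. All numbers here are illustrative.
b, df = 1.0, 1.2
rng = np.random.default_rng(1)

def rescale(waiting_times_s, m, L_deg):
    return np.asarray(waiting_times_s) * 10.0 ** (-b * m) * L_deg ** df

waits = rng.exponential(3600.0, size=5)   # toy waiting times, seconds
print(rescale(waits, m=4.0, L_deg=2.0))   # dimensionless collapse variable
```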

  18. Simulating Large-Scale Earthquake Dynamic Rupture Scenarios On Natural Fault Zones Using the ADER-DG Method

    NASA Astrophysics Data System (ADS)

    Gabriel, Alice; Pelties, Christian

    2014-05-01

    In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well documented and extensively studied event, we intend to understand the ground motion, including the relevant high frequency content, generated from complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.

  19. Identification of elastic basin properties by large-scale inverse earthquake wave propagation

    NASA Astrophysics Data System (ADS)

    Epanomeritakis, Ioannis K.

    The importance of the study of earthquake response, from a social and economic standpoint, is a major motivation for the current study. The severe uncertainties involved in the analysis of elastic wave propagation in the interior of the earth increase the difficulty in estimating earthquake impact in seismically active areas. The need for recovery of information about the geological and mechanical properties of underlying soils motivates the attempt to apply inverse analysis to earthquake wave propagation problems. Inversion for elastic properties of soils is formulated as a constrained optimization problem. A series of trial mechanical soil models is tested against a limited-size set of dynamic response measurements, given partial knowledge of the target model and complete information on source characteristics, both temporal and geometric. This inverse analysis gives rise to a powerful method for recovery of a material model that produces the given response. The goal of the current study is the development of a robust and efficient computational inversion methodology for material model identification. Solution methods for gradient-based local optimization combine with robustification and globalization techniques to build an effective inversion framework. A Newton-based approach deals with the complications of the highly nonlinear systems generated in the inversion solution process. Moreover, a key addition to the inversion methodology is the application of regularization techniques for obtaining admissible soil models. Most importantly, the development and use of a multiscale strategy offers globalizing and robustifying advantages to the inversion process. In this study, a collection of results of inversion for different three-dimensional Lamé moduli models is presented. The results demonstrate the effectiveness of the inversion methodology proposed and provide evidence for its capabilities. They also show the path for further study of elastic property
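
    A minimal linear toy in the same spirit as the regularized inversion described here (Tikhonov-regularized least squares; the thesis' actual problem is nonlinear and wave-equation-constrained, so this is only a sketch of the regularization idea):

```python
import numpy as np

# Recover a parameter vector from noisy indirect observations by solving
# min ||G m - d||^2 + lam * ||m||^2 via the normal equations.
rng = np.random.default_rng(0)
n_obs, n_par = 40, 20
G = rng.normal(size=(n_obs, n_par))              # toy forward operator
m_true = np.linspace(1.0, 2.0, n_par)            # "target" model
d = G @ m_true + 0.05 * rng.normal(size=n_obs)   # noisy observations

lam = 0.1  # regularization weight: trades data fit against model size
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T @ d)
print("max parameter error:", np.abs(m_est - m_true).max())
```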

  20. From M8 to CyberShake: Using Large-Scale Numerical Simulations to Forecast Earthquake Ground Motions (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Cui, Y.; Olsen, K. B.; Graves, R. W.; Maechling, P. J.; Day, S. M.; Callaghan, S.; Milner, K.; SCEC/CME Collaboration

    2010-12-01

    Large earthquakes cannot be reliably and skillfully predicted in terms of their location, time, and magnitude. However, numerical simulations of seismic radiation from complex fault ruptures and wave propagation through 3D crustal structures have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources. We describe a set of four computational pathways employed by the Southern California Earthquake Center (SCEC) to execute and validate these simulations. The methods are illustrated using the largest earthquakes anticipated on the southern San Andreas fault system. A dramatic example is the recent M8 dynamic-rupture simulation by Y. Cui, K. Olsen et al. (2010) of a magnitude-8 “wall-to-wall” earthquake on the southern San Andreas fault, calculated to seismic frequencies of 2 Hz on a computational grid of 436 billion elements. M8 is the most ambitious earthquake simulation completed to date; the run took 24 hours on 223K cores of the NCCS Jaguar supercomputer, sustaining 220 teraflops. High-performance simulation capabilities have been implemented by SCEC in the CyberShake hazard model for the Los Angeles region. CyberShake computes over 400,000 earthquake simulations, managed through a scientific workflow system, to represent the probabilistic seismic hazard at a particular site up to seismic frequencies of 0.3 Hz. CyberShake shows substantial differences with conventional probabilistic seismic hazard analysis based on empirical ground-motion prediction. At the probability levels appropriate for long-term forecasting, these differences are most significant (and worrisome) in sedimentary basins, where the population is densest and the regional seismic risk is concentrated. The higher basin amplification obtained by CyberShake is due to the strong coupling between rupture directivity and basin-mode excitation. The simulations show that this coupling is enhanced by the tectonic branching structure of the San
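
    Schematically, a CyberShake-style hazard curve combines many rupture scenarios into an annual exceedance probability; the sketch below uses invented scenario rates and ground motions, not CyberShake data:

```python
import numpy as np

# Hazard-curve combination: P(exceed x) = 1 - prod over scenarios i of
# (1 - p_i) for those scenarios whose simulated motion exceeds x.
rng = np.random.default_rng(2)
annual_prob = rng.uniform(1e-5, 1e-3, size=400)                 # per-scenario rates
ground_motion = rng.lognormal(mean=-1.5, sigma=0.8, size=400)   # peak amplitude (g)

def prob_exceed(x):
    exceed = ground_motion > x
    return 1.0 - np.prod(1.0 - annual_prob[exceed])

for x in (0.1, 0.2, 0.4):
    print(f"P(motion > {x} g) ~ {prob_exceed(x):.2e} per year")
```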

  1. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting fluids into the subsoil at pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay up to several years. PMID:25156190

  2. Anthropogenic Triggering of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor ``foreshocks'', since the induction may occur with a delay up to several years.

  3. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years. PMID:25156190

  4. The Magnitude and Energy of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2003-12-01

    Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), and Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, and stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between ME of events with about the same Mw. This confirms that no 1-to-1 correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities, which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of the strain energy Est and its components (fracture energy Eg, friction energy Ef, and seismic energy Es) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep, and continental. The (scaled) strain energy equation we derived is Est/M0 = (1 + e_gs)(Es/M0), where e_gs = Eg/Es, assuming complete stress drop, using the (static) stress drop variability, and that Est and Es are not in a 1-to-1 correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region. In view of these, for further understanding

  5. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, Nicholas J.; Page, Morgan T.; Weiser, Deborah A.; Goebel, Thomas H. W.; Hosseini, S. Mehran

    2016-06-01

    A major question for the hazard posed by injection-induced seismicity is how large induced earthquakes can be. Are their maximum magnitudes determined by injection parameters or by tectonics? Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound. The data pass three specific tests: (1) the largest observed earthquake at each site scales with the log of the total number of induced earthquakes, (2) the order of occurrence of the largest event is random within the induced sequence, and (3) the injected volume controls the total number of earthquakes rather than the total seismic moment. All three tests point to an injection control on earthquake nucleation but a tectonic control on earthquake magnitude. Given that the largest observed earthquakes are exactly as large as expected from the sampling statistics, we should not conclude that these are the largest earthquakes possible. Instead, the results imply that induced earthquake magnitudes should be treated with the same maximum magnitude bound that is currently used to treat seismic hazard from tectonic earthquakes.
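
    The log-of-N scaling in test (1) follows directly from Gutenberg-Richter sampling statistics; a minimal sketch (the completeness magnitude and b-value below are assumed, illustrative values):

```python
import math

# For an unbounded Gutenberg-Richter distribution with b-value b, the
# expected largest of N events above a completeness magnitude m_c grows
# with the logarithm of the sample size.
def expected_max_magnitude(N, m_c=1.0, b=1.0):
    return m_c + math.log10(N) / b

for N in (10, 100, 1000, 10000):
    print(f"N = {N:6d} induced events -> expected M_max ~ "
          f"{expected_max_magnitude(N):.1f}")
```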

  6. Large Rock Slope Failures Induced by Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Aydan, Ö.

    2016-06-01

    Recent earthquakes have caused many large-scale rock slope failures. The scale and impact of these failures are very large, and the form of failure differs depending upon the geological structure of the slope. First, the author briefly describes some model experiments to investigate the effects of shaking or faulting due to earthquakes on rock slopes. Then, fundamental characteristics of the rock slope failures induced by the earthquakes are described and evaluated according to some empirical and theoretical models. Furthermore, observations of slope failures in relation to earthquake magnitude and epicenter or hypocenter distance are compared with several empirical relations available in the literature. Some major rock slope failures induced by earthquakes are selected, and the post-failure motions are simulated and compared with observations. In addition, the effects of tsunamis on rock slopes, in view of observations from the reconnaissance of the recent mega-earthquakes, are explained and discussed.

  7. Investigation of Large Earthquakes as Critical Phase Transitions

    NASA Astrophysics Data System (ADS)

    Gonzalez-Huizar, H.; Mariani, M. C.; Serpa, L. F.; Beccar-Varela, M. P.; Tweneboah, O. K.

    2015-12-01

    In this work we present some of our results from investigating earthquake sequences, which include very large earthquakes, using different stochastic and deterministic critical phenomena models. With the objective of estimating the magnitude and origin time of large earthquakes based on the preceding seismicity, we investigate the use of several modeling techniques, including the Levy flight, scale-invariant functions, and the Ising model. We also developed a stochastic differential equation arising from the superposition of independent Ornstein-Uhlenbeck processes driven by a Gamma(a,b) process. Here we summarize some of the results of applying these techniques to modeling earthquake sequences in different tectonic regions.

  8. Patterns of seismic activity preceding large earthquakes

    NASA Technical Reports Server (NTRS)

    Shaw, Bruce E.; Carlson, J. M.; Langer, J. S.

    1992-01-01

    A mechanical model of seismic faults is employed to investigate the seismic activity that occurs prior to major events. The block-and-spring model dynamically generates a statistical distribution of smaller slipping events that precede large events, and the results satisfy the Gutenberg-Richter law. The scaling behavior during a loading cycle suggests small but systematic variations in space and time, with maximum activity acceleration near the future epicenter. Activity patterns inferred from data on seismicity in California demonstrate a regional aspect; increased activity in certain areas is found to precede major earthquake events. One example is given regarding the Loma Prieta earthquake of 1989, which was located near a fault section associated with increased activity levels.

  9. Earthquakes in Action: Incorporating Multimedia, Internet Resources, Large-scale Seismic Data, and 3-D Visualizations into Innovative Activities and Research Projects for Today's High School Students

    NASA Astrophysics Data System (ADS)

    Smith-Konter, B.; Jacobs, A.; Lawrence, K.; Kilb, D.

    2006-12-01

    The most effective means of communicating science to today's "high-tech" students is through the use of visually attractive and animated lessons, hands-on activities, and interactive Internet-based exercises. To address these needs, we have developed Earthquakes in Action, a summer high school enrichment course offered through the California State Summer School for Mathematics and Science (COSMOS) Program at the University of California, San Diego. The summer course consists of classroom lectures, lab experiments, and a final research project designed to foster geophysical innovations, technological inquiries, and effective scientific communication (http://topex.ucsd.edu/cosmos/earthquakes). Course content includes lessons on plate tectonics, seismic wave behavior, seismometer construction, fault characteristics, California seismicity, global seismic hazards, earthquake stress triggering, tsunami generation, and geodetic measurements of the Earth's crust. Students are introduced to these topics through lectures-made-fun using a range of multimedia, including computer animations, videos, and interactive 3-D visualizations. These lessons are further enforced through both hands-on lab experiments and computer-based exercises. Lab experiments included building hand-held seismometers, simulating the frictional behavior of faults using bricks and sandpaper, simulating tsunami generation in a mini-wave pool, and using the Internet to collect global earthquake data on a daily basis and map earthquake locations using a large classroom map. Students also use Internet resources like Google Earth and UNAVCO/EarthScope's Jules Verne Voyager Jr. interactive mapping tool to study Earth Science on a global scale. All computer-based exercises and experiments developed for Earthquakes in Action have been distributed to teachers participating in the 2006 Earthquake Education Workshop, hosted by the Visualization Center at Scripps Institution of Oceanography (http

  10. Induced earthquake magnitudes are as large as (statistically) expected

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.; Weiser, D. A.; Goebel, T.; Hosseini, S. M.

    2015-12-01

    Key questions with implications for seismic hazard and industry practice are how large injection-induced earthquakes can be, and whether their maximum size is smaller than for similarly located tectonic earthquakes. Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. McGarr (JGR 2014) showed that for earthquakes confined to the reservoir and triggered by pore-pressure increase, the maximum moment should be limited to the product of the shear modulus G and total injected volume ΔV. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network, with an absolute maximum magnitude that is notoriously difficult to constrain. A common approach for tectonic earthquakes is to use the magnitude-frequency distribution of smaller earthquakes to forecast the largest earthquake expected in some time period. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter (GR) distribution for tectonic earthquakes, with no assumption of an intrinsic upper bound. The GR law implies that the largest observed earthquake in a sample should scale with the log of the total number induced. We find that the maximum magnitudes at most sites are consistent with this scaling, and that maximum magnitude increases with log ΔV. We find little in the size distribution to distinguish induced from tectonic earthquakes. That being said, the probabilistic estimate exceeds the deterministic GΔV cap only for expected magnitudes larger than ~M6, making a definitive test of the models unlikely in the near future. In the meantime, however, it may be prudent to treat the hazard from induced earthquakes with the same probabilistic machinery used for tectonic earthquakes.
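
    The deterministic cap mentioned here can be evaluated directly; a sketch assuming a shear modulus of 3e10 Pa and the standard moment-magnitude conversion:

```python
import math

# McGarr-style cap: maximum seismic moment M0 <= G * dV for injected volume
# dV, converted to moment magnitude with Mw = (log10 M0 - 9.1) / 1.5.
G = 3.0e10  # Pa, assumed shear modulus

def mw_cap(dV_m3):
    M0 = G * dV_m3  # N m
    return (math.log10(M0) - 9.1) / 1.5

for dV in (1e4, 1e5, 1e6, 1e7):  # cubic meters injected
    print(f"dV = {dV:.0e} m^3 -> Mw cap ~ {mw_cap(dV):.1f}")
```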

  11. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. Images Fig. 3 Fig. 7 Fig. 9 PMID:11607662

  12. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10^18 Pa s.
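
    A quick consistency check on the time scale implied by the best-fit viscosity (the upper-mantle shear modulus below is an assumed value, not from the paper):

```python
# Maxwell relaxation time tau = eta / mu: with the quoted steady-state
# viscosity and a typical upper-mantle shear modulus, tau comes out at a
# few years, consistent with decade-scale postseismic transients.
eta = 4.75e18  # Pa s, best-fit steady-state viscosity from the abstract
mu = 7.0e10    # Pa, assumed upper-mantle shear modulus
tau_s = eta / mu
print(f"tau = {tau_s:.2e} s ~ {tau_s / 3.15e7:.1f} yr")
```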

  13. Afterslip and viscoelastic relaxation model inferred from the large scale postseismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-03-01

    Megathrust earthquakes of magnitude close to 9 are followed by large scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5-year time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (February 27, 2010) over the whole South American continent. With the first two years of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a Low Viscosity Channel along the deepest part of the plate interface and no additional Low Viscosity Wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a Low Viscosity Channel along the plate interface extending from depths of 55 to 135 km with viscosities below 10^18 Pa s.

  14. Multidimensional scaling visualization of earthquake phenomena

    NASA Astrophysics Data System (ADS)

    Lopes, António M.; Machado, J. A. Tenreiro; Pinto, C. M. A.; Galhano, A. M. S. F.

    2014-01-01

    Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructure, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived as similar in some sense are placed nearby on the MDS maps, while distinct objects are placed far apart. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn-Engdahl seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data, resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps are proven an intuitive and useful visual representation of the complex relationships that are present among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
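
    A minimal sketch of this MDS workflow using scikit-learn (the features and distance below are toy stand-ins for the paper's space-time and space-frequency correlation indices):

```python
import numpy as np
from sklearn.manifold import MDS

# Build a symmetric dissimilarity matrix between groups of events, then
# embed it in 2-D so that similar groups plot near each other.
rng = np.random.default_rng(3)
groups = rng.normal(size=(10, 5))       # 10 regions, 5 toy features each
diff = groups[:, None, :] - groups[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean dissimilarities

embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding.shape)  # (10, 2): one map point per region
```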

  15. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually associated with clustered events, such as events with foreshocks or sequences that occur within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  16. Triggering of volcanic activity by large earthquakes

    NASA Astrophysics Data System (ADS)

    Avouris, D.; Carn, S. A.; Waite, G. P.

    2011-12-01

    Statistical analysis of temporal relationships between large earthquakes and volcanic eruptions suggests seismic waves may trigger eruptions even over great distances, although the causative mechanism is not well constrained. In this study the relationship between large earthquakes and subtle changes in volcanic activity was investigated in order to gain greater insight into the relationship between dynamic stress and volcanic response. Daily measurements from the Ozone Monitoring Instrument (OMI), onboard the Aura satellite, provide constraints on volcanic sulfur dioxide (SO2) emission rates as a measure of subtle changes in activity. An SO2 time series was produced from OMI data for thirteen persistently active volcanoes. Seismic surface-wave amplitudes were modeled from the source mechanisms of moment magnitude (Mw) ≥7 earthquakes, and peak dynamic stress (PDS) was calculated. The SO2 time series for each volcano was used to calculate a baseline threshold for comparison with post-earthquake emission. Delay times for an SO2 response following each earthquake at each volcano were analyzed and compared to a random catalog. The delay time analysis was inconclusive. However, an analysis based on the occurrence of large earthquakes showed a response at most volcanoes. Using the PDS calculations as a filtering criterion for the earthquake catalog, the SO2 mass for each volcano was analyzed in 28-day windows centered on the earthquake origin time. If the average SO2 mass after the earthquake was greater than an arbitrary percentage of the pre-earthquake mass, we identified the volcano as having a response to the event. This window analysis provided insight into what type of volcanic activity is more susceptible to triggering by dynamic stress. The volcanoes with lava lakes included in this study, Ambrym, Gaua, Villarrica, and Erta Ale, showed a clear response to dynamic stress while the volcanoes with lava domes, Merapi, Semeru, and Bagana showed no response at all. Perhaps

  17. Afterslip and Viscoelastic Relaxation Model Inferred from the Large Scale Postseismic Deformation Following the 2010 Mw 8.8 Maule Earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Vigny, C.; Klein, E.; Fleitout, L.; Garaud, J. D.

    2015-12-01

    Postseismic deformation following the large subduction earthquake of Maule (Chile, Mw 8.8, February 27th 2010) has been closely monitored with GPS from 70 km up to 2000 km away from the trench. It exhibits behavior generally similar to that already observed after the Aceh and Tohoku-Oki earthquakes. Vertical uplift is observed on the volcanic arc, and a moderate large-scale subsidence is associated with sizeable horizontal deformation in the far field (500-2000 km from the trench). In addition, near-field data (70-200 km from the trench) feature a rather complex deformation pattern. A 3D FE code (Zebulon Zset) is used to relate these deformations to slip on the plate interface and relaxation in the mantle. The mesh features a spherical shell-portion from the core-mantle boundary to the Earth's surface, extending over more than 60 degrees in latitude and longitude. The overriding and subducting plates are elastic, and the asthenosphere is viscoelastic. A viscoelastic Low Viscosity Channel (LVC) is also introduced along the plate interface. Both the asthenosphere and the channel feature Burgers rheologies, and we invert for their mechanical properties and geometrical characteristics simultaneously with the afterslip distribution. The horizontal deformation pattern requires relaxation both in i) the asthenosphere extending down to 270 km, with a 'long-term' viscosity of the order of 4.8×10^18 Pa·s, and ii) the channel, which has to extend from depths of 50 to 150 km with viscosities slightly below 10^18 Pa·s, to fit the vertical velocity pattern well (intense and quick uplift over the Cordillera). Aseismic slip on the plate interface, at shallow depth, is necessary to explain all the characteristics of the near-field displacements. We then detect two main patches of high slip, one updip of the coseismic slip distribution in the northernmost part of the rupture zone, and the other one downdip, at the latitude of Constitucion (35°S). We finally study the temporal

  18. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868 but was not well measured. Witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations. It mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by an offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be ≥1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large-magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both). Thus despite surficial creep, the fault may be
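
    The slip-potential figures follow from simple slip-deficit arithmetic; a sketch under the abstract's stated rates (the representative creep value here is illustrative):

```python
# Slip deficit on the southern Hayward segment: deep slip accumulates at the
# long-term rate while surface creep relieves part of it.
deep_rate_mm_yr = 9.0           # inferred long-term (deep) slip rate
creep_rate_mm_yr = 5.0          # representative of the 3.5-6.5 mm/yr range
years_since_1868 = 1992 - 1868  # elapsed time at the date of the study

deep_slip_m = deep_rate_mm_yr * years_since_1868 / 1000.0
surface_deficit_m = (deep_rate_mm_yr - creep_rate_mm_yr) * years_since_1868 / 1000.0
print(f"deep slip potential:    {deep_slip_m:.1f} m")        # ~1.1 m
print(f"surface slip potential: {surface_deficit_m:.1f} m")  # ~0.5 m here;
# ~0.7-0.8 m where creep is slowest (3.5 mm/yr), matching the quoted bound
```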

  19. Increased correlation range of seismicity before large events manifested by earthquake chains

    NASA Astrophysics Data System (ADS)

    Shebalin, P.

    2006-10-01

    "Earthquake chains" are clusters of moderate-size earthquakes which extend over large distances and are formed by statistically rare pairs of events that are close in space and time ("neighbors"). Earthquake chains are supposed to be precursors of large earthquakes with lead times of a few months. Here we substantiate this hypothesis by mass testing it using a random earthquake catalog. Also, we study stability under variation of parameters and some properties of the chains. We found two invariant parameters: they characterize the spatial and energy scales of earthquake correlation. Both parameters of the chains show good correlation with the magnitudes of the earthquakes they precede. Earthquake chains are known as the first stage of the earthquake prediction algorithm reverse tracing of precursors (RTP) now tested in forward prediction. A discussion of the complete RTP algorithm is outside the scope of this paper, but the results presented here are important to substantiate the RTP approach.

  20. Scaling in geology: landforms and earthquakes.

    PubMed Central

    Turcotte, D L

    1995-01-01

    Landforms and earthquakes appear to be extremely complex; yet, there is order in the complexity. Both satisfy fractal statistics in a variety of ways. A basic question is whether the fractal behavior is due to scale invariance or is the signature of a broadly applicable class of physical processes. Both landscape evolution and regional seismicity appear to be examples of self-organized critical phenomena. A variety of statistical models have been proposed to model landforms, including diffusion-limited aggregation, self-avoiding percolation, and cellular automata. Many authors have studied the behavior of multiple slider-block models, both in terms of the rupture of a fault to generate an earthquake and in terms of the interactions between faults associated with regional seismicity. The slider-block models exhibit a remarkably rich spectrum of behavior; two slider blocks can exhibit low-order chaotic behavior. Large numbers of slider blocks clearly exhibit self-organized critical behavior. PMID:11607562

  1. Recurrent slow slip event reveals the interaction with seismic slow earthquakes and disruption from large earthquake

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Moore, Angelyn W.; Owen, Susan

    2015-09-01

    It remains enigmatic how slow slip events (SSEs) interact with other slow seismic events and large distant earthquakes at many subduction zones. Here we model the spatiotemporal slip evolution of the most recent long-term SSE in 2009-2011 in the Bungo Channel region, southwest Japan, using GEONET GPS position time series and a Kalman filter-based, time-dependent slip inversion method. We examine the space-time relationship between the geodetically determined slow slip transient and seismically observed low-frequency earthquakes (LFEs) and very-low-frequency earthquakes (V-LFEs) near the Nankai trough. We find strong but distinct temporal correlations between transient slip and the LFEs and V-LFEs, suggesting different relationships to the SSE. We also find that the great Tohoku-Oki earthquake appears to have disrupted the normal source process of the SSE, probably reflecting large-scale stress redistribution caused by that earthquake. Comparison of the 2009-2011 SSE with others in the same region shows much similarity in slip and moment release, confirming its recurrent nature. Comparison of transient slip with plate coupling shows that slip transients mainly concentrate on the transition zone from the strongly coupled region to the downdip LFEs, with transient slip relieving elastic strain accumulation at transitional depths. The less consistent spatial correlation between the long-term SSE and seismic slow earthquakes, and the susceptibility of these slow earthquakes to various triggering sources including long-term slow slip, suggest caution in using seismically determined slow earthquakes as a proxy for slow slip.

  2. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near-mean-field systems having long-range interactions, an example of which is earthquakes with elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
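
    A minimal sketch of the count-to-probability conversion described above; the Weibull scale and shape values are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

def weibull_probability(n_small, tau=1000.0, beta=1.4):
    """Convert the count of small events since the last large event into a
    conditional probability, P = 1 - exp(-(n/tau)**beta).

    tau, beta : illustrative scale/shape values, not the paper's fits.
    With a GR b-value of 1, ~1000 small events per large one makes
    tau = 1000 a natural choice of scale."""
    n = np.asarray(n_small, dtype=float)
    return 1.0 - np.exp(-(n / tau) ** beta)

for n in (100, 500, 1000, 2000):
    print(n, round(float(weibull_probability(n)), 2))
```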

  3. Earthquake Hazard and the Environmental Seismic Intensity (ESI) Scale

    NASA Astrophysics Data System (ADS)

    Serva, Leonello; Vittori, Eutizio; Comerci, Valerio; Esposito, Eliana; Guerrieri, Luca; Michetti, Alessandro Maria; Mohammadioun, Bagher; Mohammadioun, Georgianna C.; Porfido, Sabina; Tatevossian, Ruben E.

    2015-10-01

    The main objective of this paper was to introduce the Environmental Seismic Intensity scale (ESI), a new scale developed and tested by an interdisciplinary group of scientists (geologists, geophysicists and seismologists) in the frame of the International Union for Quaternary Research (INQUA) activities, to the widest community of earth scientists and engineers dealing with seismic hazard assessment. This scale defines earthquake intensity by taking into consideration the occurrence, size and areal distribution of earthquake environmental effects (EEE), including surface faulting, tectonic uplift and subsidence, landslides, rock falls, liquefaction, ground collapse and tsunami waves. Indeed, EEEs can significantly improve the evaluation of seismic intensity, which still remains a critical parameter for a realistic seismic hazard assessment, allowing comparison of historical and modern earthquakes. Moreover, as shown by recent moderate to large earthquakes, geological effects often cause severe damage; therefore, their consideration in the earthquake risk scenario is crucial for all stakeholders, especially urban planners, geotechnical and structural engineers, hazard analysts, civil protection agencies and insurance companies. The paper describes the background and construction principles of the scale and presents some case studies in different continents and tectonic settings to illustrate its relevant benefits. ESI is normally used together with traditional intensity scales, which, unfortunately, tend to saturate in the highest degrees. In such cases and in unpopulated areas, ESI offers a unique way of assessing a reliable earthquake intensity. Finally, yet importantly, the ESI scale also provides a very convenient guideline for the survey of EEEs in earthquake-stricken areas, ensuring they are catalogued in a complete and homogeneous manner.

  5. An Energy Rate Magnitude for Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Newman, A. V.; Convers, J. A.

    2008-12-01

    The ability to rapidly assess the approximate size of very large and destructive earthquakes is important for early hazard mitigation from both strong shaking and potential tsunami generation. Using a methodology to rapidly determine earthquake energy and duration from teleseismic high-frequency energy, we develop an adaptation to approximate the magnitude of a very large earthquake before the full duration of rupture can be measured at available teleseismic stations. We utilize available vertical-component data to analyze the high-frequency energy growth between 0.5 and 2 Hz, minimizing the effect of later arrivals that are mostly attenuated in this range. Because events smaller than M~6.5 rupture quickly, this method is best suited to larger events, whose rupture duration exceeds ~20 seconds. Using a catalog of about 200 large and great earthquakes, we compare the high-frequency energy rate (Ė_hf) to the total broad-band energy (E_bb) to find the relationship log(Ė_hf)/log(E_bb) ≈ 0.85. Hence, combining this relation with the broad-band energy magnitude (Me) [Choy and Boatwright, 1995] yields a new high-frequency energy rate magnitude: M_Ė = (2/3)·log10(Ė_hf)/0.85 − 2.9. Such an empirical approach can thus be used to obtain a reasonable assessment of an event's magnitude from the initial estimate of energy growth, even before the arrival of the full direct-P rupture signal. For the large shallow events examined thus far, M_Ė predicts the ultimate Me to within ±0.2 magnitude units. For fast-rupturing deep earthquakes M_Ė overpredicts, while for slow-rupturing tsunami earthquakes M_Ė underpredicts Me, likely due to material strength changes at the source rupture. We will report on the utility of this method in both research mode and in real-time scenarios when data availability is limited. Because the high-frequency energy is clearly discernable in real time, this result suggests that the growth of energy can be used as a good initial indicator of the
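
    The empirical relation reduces to a one-line conversion from the measured high-frequency energy rate to magnitude; a sketch, assuming Ė_hf is supplied in J/s (the abstract does not restate the units):

```python
import math

def energy_rate_magnitude(ehf_rate):
    """High-frequency energy rate magnitude from the abstract's relation
    M_Edot = (2/3) * log10(Edot_hf) / 0.85 - 2.9.

    ehf_rate : high-frequency (0.5-2 Hz) energy rate, assumed here in J/s."""
    return (2.0 / 3.0) * math.log10(ehf_rate) / 0.85 - 2.9
```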

  6. Earthquakes

    ERIC Educational Resources Information Center

    Roper, Paul J.; Roper, Jere Gerard

    1974-01-01

    Describes the causes and effects of earthquakes, defines the meaning of magnitude (measured on the Richter Magnitude Scale) and intensity (measured on a modified Mercalli Intensity Scale) and discusses earthquake prediction and control. (JR)

  7. Mw Dependence of Ionospheric Electron Enhancement Immediately Before Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Heki, K.; He, L.

    2015-12-01

    Ionospheric electrons were reported to have increased ~40 minutes before the 2011 Tohoku-oki (Mw 9.0) earthquake, Japan, by observing total electron content (TEC) with GNSS receivers [e.g. Heki and Enomoto, 2013]. These authors further demonstrated that similar TEC enhancements preceded all the recent earthquakes with Mw of 8.5 or more. Their reality has been repeatedly questioned, due mainly to the ambiguity in the derivation of the reference TEC curves from which anomalies are defined [e.g. Masci et al., 2015]. Here we propose a numerical approach, based on Akaike's Information Criterion, to detect positive breaks (sudden increases of TEC rate) in the vertical TEC time series without using reference curves. We demonstrate that such breaks are detected 20-80 minutes before the ten recent large earthquakes with Mw 7.8-9.2. The amounts of the breaks were found to depend on the background absolute VTEC and Mw, i.e. Break (TECU/h) = 4.74Mw + 0.13VTEC − 39.86, with a standard deviation of ~1.2 TECU/h. We can convert this equation to Mw = (Break − 0.13VTEC + 39.86)/4.74, which can tell us the Mw of impending earthquakes with an uncertainty of ~0.25. The precursor times were longer for larger earthquakes, ranging from ~80 minutes for the largest (2004 Sumatra-Andaman) to ~21 minutes for the smallest (2015 Nepal). The precursors of intraplate earthquakes (e.g. 2012 Indian Ocean) started significantly earlier than interplate ones. We performed the same analyses during periods without earthquakes and found that positive breaks comparable to that before the 2011 Tohoku-oki earthquake occur once in 20 hours. They originate from small-amplitude Large-Scale Travelling Ionospheric Disturbances (LSTIDs), which are excited in the auroral oval and move southward with the velocity of internal gravity waves. This probability is small enough to rule out the fortuity of these breaks, but large enough to make it a challenge to apply preseismic TEC enhancements to short-term earthquake prediction.
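
    Inverting the regression for Mw is straightforward; a sketch using the coefficients quoted above (the example numbers are hypothetical):

```python
def mw_from_tec_break(break_rate, vtec):
    """Estimate Mw from a preseismic TEC-rate break by inverting the
    abstract's regression: Mw = (Break - 0.13*VTEC + 39.86) / 4.74.

    break_rate in TECU/h, vtec in TECU; quoted uncertainty is ~0.25 in Mw."""
    return (break_rate - 0.13 * vtec + 39.86) / 4.74

# Hypothetical example: a 5 TECU/h break on a 50 TECU background gives Mw ~8.1
print(round(mw_from_tec_break(5.0, 50.0), 2))
```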

  8. Quantitative Earthquake Prediction on Global and Regional Scales

    SciTech Connect

    Kossobokov, Vladimir G.

    2006-03-23

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolation of a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims. It suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  9. Rapid Characterization of Large Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Barrientos, S. E.; Team, C.

    2015-12-01

    Chile, along 3000 km of its 4200-km-long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km, with fault displacements of several tens of meters. Minimum-delay characterization of these giant events, to establish their rupture extent and slip distribution, is of the utmost importance for rapid estimation of the shaking area and the corresponding tsunamigenic potential, particularly when there are only a few minutes to warn the coastal population for immediate action. The task of rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is composed of approximately one hundred broad-band and strong-motion instruments and 130 GNSS devices, all to be connected in real time. Forty units include an optional RTX capability, in which satellite orbits and clock corrections are sent to the field device, producing a 1-Hz stream at the 4-cm level. Tests are being conducted to stream the real-time raw data for later processing at the central facility. Hypocentral locations and magnitudes are estimated within a few minutes by automatic processing software based on wave arrivals; for magnitudes less than 7.0 the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement from the real-time GNSS streams. This software has been tested on several cases, showing that, for plate-interface events, the minimum magnitude detection threshold lies between 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for early earthquake characterization from a tsunamigenic perspective.

  10. Foreshock occurrence rates before large earthquakes worldwide

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Global rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured, using earthquakes listed in the Harvard CMT catalog for the period 1978-1996. These rates are similar to those measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering, which is based on patterns of small and moderate aftershocks in California, and were found to exceed the California model by a factor of approximately 2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, registered a high foreshock rate, while a minority, located in continental thrust belts, measured a low rate. These differences may explain why previous surveys have revealed low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich.

  11. Earthquakes.

    ERIC Educational Resources Information Center

    Walter, Edward J.

    1977-01-01

    Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

  12. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Razafindrakoto, Hoby N. T.; Mai, P. Martin; Genton, Marc G.; Zhang, Ling; Thingbaijam, Kiran K. S.

    2015-07-01

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
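
    A minimal sketch of the MDS comparison, assuming gridded slip models of equal size and reading the normalized squared metric as a distance between norm-scaled slip grids (one plausible interpretation; scikit-learn's MDS stands in for the paper's implementation):

```python
import numpy as np
from sklearn.manifold import MDS

def normalized_squared_distance(slip_a, slip_b):
    """One reading of the normalized squared metric: the squared difference
    between slip grids after scaling each to unit norm."""
    a = slip_a / np.linalg.norm(slip_a)
    b = slip_b / np.linalg.norm(slip_b)
    return float(np.sum((a - b) ** 2))

def mds_configuration(models, n_components=2, seed=0):
    """Embed rupture models in a low-dimensional space from their pairwise
    distances; the spread of points reflects intra-event model variability."""
    n = len(models)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = normalized_squared_distance(models[i], models[j])
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=seed)
    return mds.fit_transform(d)
```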

  13. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2016-06-01

    Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales (whole country, East Turkey, and West Turkey) are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale saturates beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.

  14. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity and to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault length does not saturate with earthquake magnitude, while fault width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, restricted growth of the down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture areas compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
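
    A truncated exponential slip distribution can be sampled by inverse-CDF; a sketch, with the scale and truncation values illustrative rather than the paper's fits:

```python
import numpy as np

def sample_truncated_exponential(scale, max_slip, n, seed=0):
    """Inverse-CDF sampling of p(s) ~ exp(-s/scale) truncated to [0, max_slip].
    The scale and truncation point relate to (but do not equal) the average
    and maximum slip quoted as the observable parameters."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    c = 1.0 - np.exp(-max_slip / scale)       # CDF mass inside [0, max_slip]
    return -scale * np.log(1.0 - u * c)

slips = sample_truncated_exponential(scale=2.0, max_slip=10.0, n=100_000)
print(slips.mean(), slips.max())              # mean < scale due to truncation
```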

  15. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
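
    A minimal sketch of COPD stacking, substituting Gaussian PDFs for the study's uniquely shaped cross-correlation PDFs; the offsets and widths below are illustrative:

```python
import numpy as np

def copd(offsets, sigmas, grid):
    """Stack one PDF per offset measurement into a cumulative offset
    probability distribution; peaks mark single- and multi-event offsets."""
    stack = np.zeros_like(grid)
    for mu, sig in zip(offsets, sigmas):
        stack += np.exp(-0.5 * ((grid - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return stack / len(offsets)

grid = np.linspace(0.0, 20.0, 2001)
density = copd([3.3, 3.1, 7.0, 7.3, 12.8], [0.5, 0.6, 1.0, 0.9, 0.8], grid)
print(grid[np.argmax(density)])   # dominant peak near the ~3.3 m 1872 average
```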

  16. Large Earthquake Potential in the Southeast Caribbean

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Mora-Paez, H.; Bilham, R. G.; Lafemina, P.; Mattioli, G. S.; Molnar, P. H.; Audemard, F. A.; Perez, O. J.

    2015-12-01

    The axis of rotation describing relative motion of the Caribbean plate with respect to South America lies in Canada near Hudson's Bay, such that the Caribbean plate moves nearly due east relative to South America [DeMets et al. 2010]. The plate motion is absorbed largely by pure strike-slip motion along the El Pilar Fault in northeastern Venezuela, but in northwestern Venezuela and northeastern Colombia, the relative motion is distributed over a wide zone that extends from offshore to the northeasterly trending Mérida Andes, with the resolved component of convergence between the Caribbean and South American plates estimated at ~10 mm/yr. Recent densification of GPS networks through COLOVEN and COCONet, including access to private GPS data maintained by Colombia and Venezuela, allowed the development of a new GPS velocity field. The velocity field, processed with JPL's GOA 6.2, JPL non-fiducial final orbit and clock products and VMF tropospheric products, includes over 120 continuous and campaign stations. This new velocity field, along with enhanced seismic reflection profiles and earthquake location analysis, strongly suggests the existence of an active oblique subduction zone. We have also been able to use broadband data from Venezuela to search for slow-slip events as an indicator of an active subduction zone. There are caveats to this hypothesis, however, including the absence of volcanism that typically accompanies active subduction zones and a weak historical record of great earthquakes. A single tsunami deposit dated at 1500 years before present has been identified on the southeast Yucatan peninsula. Our simulations indicate its probable origin is within our study area. We present a new GPS-derived velocity field, which has been used to improve a regional block model [based on Mora and LaFemina, 2009-2012], and discuss the earthquake and tsunami hazards implied by this model. Based on the new geodetic constraints and our updated block model, if part of the

  17. Regional Triggering of Volcanic Activity Following Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Hill-Butler, Charley; Blackett, Matthew; Wright, Robert

    2015-04-01

    There are numerous reports of a spatial and temporal link between volcanic activity and high-magnitude seismic events. In fact, since 1950, all large-magnitude earthquakes have been followed by volcanic eruptions within the following year: 1952 Kamchatka M9.2, 1960 Chile M9.5, 1964 Alaska M9.2, 2004 & 2005 Sumatra-Andaman M9.3 & M8.7, and 2011 Japan M9.0. At the global scale, 56% of all large earthquakes (M≥8.0) in the 21st century were followed by increases in thermal activity. The most significant change in volcanic activity occurred between December 2004 and April 2005 following the M9.1 December 2004 earthquake, after which new eruptions were detected at 10 volcanoes and global volcanic flux doubled over 52 days (Hill-Butler et al. 2014). The ability to determine a volcano's activity or 'response', however, has been limited by a number of disparities, with <50% of all volcanoes monitored by ground-based instruments. The advent of satellite remote sensing for volcanology has therefore provided researchers with an opportunity to quantify the timing, magnitude and character of volcanic events. Using data acquired from the MODVOLC algorithm, this research examines a globally comparable database of satellite-derived radiant flux alongside USGS NEIC data to identify changes in volcanic activity following an earthquake, February 2000 - December 2012. Using an estimate of background temperature obtained from the MODIS Land Surface Temperature (LST) product (Wright et al. 2014), thermal radiance was converted to radiant flux following the method of Kaufman et al. (1998). The resulting heat flux inventory was then compared to all seismic events (M≥6.0) within 1000 km of each volcano to evaluate whether changes in volcanic heat flux correlate with regional earthquakes. This presentation will first identify relationships at the temporal and spatial scale; more complex relationships obtained by machine learning algorithms will then be examined to establish favourable

  18. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  19. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many sciences and also for some social activities. The present paper discusses the characteristics of computing when it becomes "Large Scale" and the current state of the art for particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) experiments are discussed in this respect; in particular, the Large Hadron Collider (LHC) experiments are analyzed. The Computing Models of the LHC experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC experiments' testing are discussed.

  20. Time-Dependent Earthquake Forecasts on a Global Scale

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.; Graves, W. R.

    2014-12-01

    We develop and implement a new type of global earthquake forecast. Our forecast is a perturbation on a smoothed-seismicity (Relative Intensity) spatial forecast combined with a temporal time-averaged ("Poisson") forecast. A variety of statistical and fault-system models have been discussed for use in computing forecast probabilities. An example is the Working Group on California Earthquake Probabilities, which has been using fault-based models to compute conditional probabilities in California since 1988. Another example is the Epidemic-Type Aftershock Sequence (ETAS) forecast, which is based on the Gutenberg-Richter (GR) magnitude-frequency law, the Omori aftershock law, and Poisson statistics. The method discussed in this talk is based on the observation that GR statistics characterize seismicity for all space and time. Small-magnitude event counts (quake counts) are used as "markers" for the approach of large events. More specifically, if the GR b-value = 1, then for every 1000 M>3 earthquakes, one expects 1 M>6 earthquake. So if ~1000 M>3 events have occurred in a spatial region since the last M>6 earthquake, another M>6 earthquake should be expected soon. In physics, event-count models have been called natural time models, since counts of small events represent a physical or natural time scale characterizing the system dynamics. In previous research, we used conditional Weibull statistics to convert event counts into a temporal probability for a given fixed region. In the present paper, we move beyond a fixed region and develop a method to compute these Natural Time Weibull (NTW) forecasts on a global scale, using an internally consistent method, in regions of arbitrary shape and size. We develop and implement these methods on a modern web-service computing platform, which can be found at www.openhazards.com and www.quakesim.org. We also discuss constraints on the User Interface (UI) that follow from practical considerations of site usability.

  1. Absence of remotely triggered large earthquakes beyond the mainshock region

    USGS Publications Warehouse

    Parsons, T.; Velasco, A.A.

    2011-01-01

    Large earthquakes are known to trigger earthquakes elsewhere. Damaging large aftershocks occur close to the mainshock and microearthquakes are triggered by passing seismic waves at significant distances from the mainshock. It is unclear, however, whether bigger, more damaging earthquakes are routinely triggered at distances far from the mainshock, heightening the global seismic hazard after every large earthquake. Here we assemble a catalogue of all possible earthquakes greater than M 5 that might have been triggered by every M 7 or larger mainshock during the past 30 years. We compare the timing of earthquakes greater than M 5 with the temporal and spatial passage of surface waves generated by large earthquakes using a complete worldwide catalogue. Whereas small earthquakes are triggered immediately during the passage of surface waves at all spatial ranges, we find no significant temporal association between surface-wave arrivals and larger earthquakes. We observe a significant increase in the rate of seismic activity at distances confined to within two to three rupture lengths of the mainshock. Thus, we conclude that the regional hazard of larger earthquakes is increased after a mainshock, but the global hazard is not.

  2. Detection of hydrothermal precursors to large Northern California earthquakes.

    PubMed

    Silver, P G; Valette-Silver, N J

    1992-09-01

    During the period 1973 to 1991 the interval between eruptions from a periodic geyser in Northern California exhibited precursory variations 1 to 3 days before the three largest earthquakes within a 250-kilometer radius of the geyser. These include the magnitude 7.1 Loma Prieta earthquake of 18 October 1989 for which a similar preseismic signal was recorded by a strainmeter located halfway between the geyser and the earthquake. These data show that at least some earthquakes possess observable precursors, one of the prerequisites for successful earthquake prediction. All three earthquakes were further than 130 kilometers from the geyser, suggesting that precursors might be more easily found around rather than within the ultimate rupture zone of large California earthquakes. PMID:17738277

  3. Evidence for a difference in rupture initiation between small and large earthquakes.

    PubMed

    Colombelli, S; Zollo, A; Festa, G; Picozzi, M

    2014-01-01

    The process of earthquake rupture nucleation and propagation has been investigated through laboratory experiments and theoretical modelling, but a limited number of observations exist at the scale of earthquake fault zones. Distinct models have been proposed, and whether the magnitude can be predicted while the rupture is ongoing represents an unsolved question. Here we show that the evolution of P-wave peak displacement with time is informative regarding the early stage of the rupture process and can be used as a proxy for the final size of the rupture. For the analysed earthquake set, we found a rapid initial increase of the peak displacement for small events and a slower growth for large earthquakes. Our results indicate that earthquakes occurring in a region with a large critical slip distance have a greater likelihood of growing into a large rupture than those originating in a region with a smaller slip-weakening distance. PMID:24887597

  4. Earthquake Scaling and Development of Ground Motion Prediction for Earthquake Hazard Mitigation in Taiwan

    NASA Astrophysics Data System (ADS)

    Ma, K.; Yen, Y.

    2011-12-01

    For earthquake hazard mitigation and risk management, an integrated study from source-model development to ground-motion prediction is crucial. Simulation of the high-frequency component (> 1 Hz) of near-field strong ground motion has not been well resolved owing to insufficient resolution of the velocity structure. Using small events as Green's functions (the empirical Green's function, EGF, method) avoids the need for a precise velocity structure by replacing the path-effect evaluation. If an EGF is not available, a stochastic Green's function (SGF) method can be employed. By characterizing the slip models derived from waveform inversion, we directly extract the parameters needed for ground-motion prediction with the EGF or SGF method. The slip models have been investigated from Taiwan's dense strong-motion network and global teleseismic data. In addition, the low-frequency component (< 1 Hz) can be obtained numerically by the frequency-wavenumber (FK) method. Thus, broadband strong ground motion can be calculated by a hybrid method combining a deterministic FK method for the low-frequency simulation with the EGF or SGF method for the high-frequency simulation. Definitive source parameters characterized from the empirical scaling study feed directly into the ground-motion simulation. To provide ground-motion predictions for a scenario earthquake, we compiled the earthquake scaling relationship from the inverted finite-fault models of moderate to large earthquakes in Taiwan. The studies show the significant involvement of the seismogenic depth in the development of rupture width. In addition, several earthquakes on blind faults show distinctly large stress drops, which yield high regional PGA. Based on the developing scaling relationship and the possibly high stress drops for earthquakes on blind faults, we further deploy the hybrid method mentioned above to simulate the strong motion in
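
    The hybrid combination amounts to summing a low-passed deterministic synthetic and a high-passed EGF/SGF synthetic at the ~1 Hz crossover; a sketch of that matched-filter idea, assuming both records share a common sampling rate and time window:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def hybrid_broadband(low_freq_syn, high_freq_syn, fs, fc=1.0, order=4):
    """Sum a low-passed deterministic synthetic (e.g. from an FK code) and a
    high-passed EGF/SGF synthetic at the crossover frequency fc (Hz).
    Both records must share sampling rate fs (Hz) and time window."""
    sos_lp = butter(order, fc, btype="lowpass", fs=fs, output="sos")
    sos_hp = butter(order, fc, btype="highpass", fs=fs, output="sos")
    return (sosfilt(sos_lp, np.asarray(low_freq_syn)) +
            sosfilt(sos_hp, np.asarray(high_freq_syn)))
```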

  5. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as by incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present-day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground-shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In this way, the standard NDSHA maps account for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of the seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion can

  6. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  7. The Effect of Damage on Earthquake Scaling and Forecasting

    NASA Astrophysics Data System (ADS)

    Klein, W.; Serino, C.; Tiampo, K. F.; Rundle, J. B.

    2010-12-01

    We modify simple models of earthquake faults that have Gutenberg-Richter scaling associated with a critical point to include damage. We find that increasing the amount of damage drives the system away from the critical point and decreases the region of scaling for an individual fault. However, the scaling of a collection of faults with a range of damage extends over a large moment range, with an exponent different from that of individual faults without damage. In addition, the data indicate that in fault models with large amounts of damage, accelerated moment release (AMR) is a reliable indicator of a catastrophic event. In models with little or no damage, however, using AMR as an indicator will result in a large number of false positives.
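
    AMR is usually quantified by fitting a time-to-failure power law to cumulative Benioff strain; a sketch of that standard fit (the functional form follows the common Bufe-Varnes parametrization, not necessarily the exact one used in this model study):

```python
import numpy as np
from scipy.optimize import curve_fit

def amr_model(t, A, B, tf, m):
    """Time-to-failure power law for cumulative Benioff strain:
    eps(t) = A - B * (tf - t)**m, with m < 1 implying acceleration."""
    return A - B * np.clip(tf - t, 1e-9, None) ** m

def fit_amr(t, benioff, tf_guess):
    """Least-squares fit of the AMR law; the fitted tf is the inferred
    failure (catastrophic event) time."""
    p0 = (benioff[-1], 1.0, tf_guess, 0.3)
    popt, _ = curve_fit(amr_model, t, benioff, p0=p0, maxfev=20000)
    return popt   # A, B, tf, m
```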

  8. Modeling fast and slow earthquakes at various scales

    PubMed Central

    IDE, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes. PMID:25311138

  9. Global coseismic deformations, GNSS time series analysis, and earthquake scaling laws

    NASA Astrophysics Data System (ADS)

    Métivier, Laurent; Collilieux, Xavier; Lercier, Daphné; Altamimi, Zuheir; Beauducel, François

    2014-12-01

    We investigate how two decades of coseismic deformation affect time series of GPS (Global Navigation Satellite System) station coordinates and what constraints geodetic observations place on earthquake scaling laws. We developed a simple but rapid model for coseismic deformation, assuming different earthquake scaling relations, that we systematically applied to earthquakes with magnitude larger than 4. We found that coseismic displacements accumulated during the last two decades can be larger than 10 m locally and that the cumulative displacement is due not only to large earthquakes but also to the accumulation of many small motions induced by smaller earthquakes. Then, investigating a global network of GPS stations, we demonstrate that systematic global modeling of coseismic deformation greatly helps to detect discontinuities in GPS coordinate time series, which remain one of the major sources of error in terrestrial reference frame construction (e.g., the International Terrestrial Reference Frame). We show that numerous discontinuities induced by earthquakes are too small to be detected visually because of seasonal variations and GPS noise that disturb their identification. However, not taking these discontinuities into account has a large impact on station velocity estimation, considering today's precision requirements. Finally, six groups of earthquake scaling laws were tested. Comparisons with our GPS time series analysis for dedicated earthquakes give insights into the consistency of these scaling laws with geodetic observations and the Okada coseismic approach.

  10. The foreshock sequence of large earthquakes: slow slip or cascade triggering?

    NASA Astrophysics Data System (ADS)

    Huang, H.; Meng, L.

    2014-12-01

    Large earthquakes such as the 2011 Mw 9.0 Tohoku-Oki earthquake and the 2014 Mw 8.1 Iquique earthquake are often preceded by foreshock sequences migrating toward the hypocenters of the mainshocks. Understanding the underlying physical processes is crucial for imminent seismic hazard assessment. Some of these foreshock sequences are accompanied by repeating earthquakes, which are thought to be a manifestation of a large-scale background slow slip transient. The alternative interpretation is that the migrating seismicity is simply produced by cascade triggering of mainshock-aftershock sequences following Omori's law. In this case the repeating earthquakes are driven by the afterslip of moderate to large foreshocks rather than by an independent slow slip event. As an initial effort to discriminate between these two hypotheses, we made a detailed analysis of the repeating earthquakes among the foreshock sequences of the 2014 Mw 8.1 Iquique earthquake. We observed that some significant foreshocks (M >= 5.5) are followed by the rapid occurrence of local repeaters, suggesting a contribution from afterslip. However, the repeaters are distributed over a wide area (~40*80 km), which is difficult to drive with only a few moderate to large foreshocks. Furthermore, the estimated repeater-inferred aseismic moment during the foreshock period is at least 3.041e19 Nm (5*5 km grid), which is of the same order as the total seismic moment of all foreshocks (2.251e19 Nm). This comparison again supports the slow-slip model, since the ratio of postseismic to coseismic moment is small in most earthquakes. To estimate the contributions of transient slow slip and cascade triggering in the initiation of large earthquakes, we propose to systematically search for and analyze repeating earthquakes in all foreshock sequences preceding large earthquakes. The next effort will be directed at the long precursory phase of large interplate earthquakes such as the 1999 Mw 7.6 Izmit earthquake and the

  11. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  12. Detection capability of global earthquakes influenced by large intermediate-depth and deep earthquakes

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2011-12-01

    This study examined the detection capability of the global CMT catalogue immediately after a large intermediate-depth (70 < depth ≤ 300 km) or deep (depth > 300 km) earthquake. Iwata [2008, GJI] revealed that the detection capability is remarkably lower than the ordinary level for several hours after the occurrence of a large shallow (depth ≤ 70 km) earthquake. Since the global CMT catalogue plays an important role in studies of global earthquake forecasting and seismicity patterns [e.g., Kagan and Jackson, 2010, Pageoph], the characteristics of the catalogue should be investigated carefully. We stacked global shallow earthquake sequences, taken from the global CMT catalogue from 1977 to 2010, after each large intermediate-depth or deep earthquake. Then, we utilized a statistical model representing the observed magnitude-frequency distribution of earthquakes [e.g., Ringdal, 1975, BSSA; Ogata and Katsura, 1993, GJI]. The applied model is the product of the Gutenberg-Richter law and a detection rate function q(M); following previous studies, the cumulative distribution function of the normal distribution was used as q(M). This model enables us to estimate μ, the magnitude at which the detection rate of earthquakes is 50 per cent. Finally, a Bayesian approach with a piecewise linear approximation [Iwata, 2008, GJI] was applied to the stacked data to estimate the temporal change of μ. Consequently, we found a significantly lowered detection capability after an intermediate-depth or deep earthquake of magnitude 6.5 or larger. The lowered detection capability lasts for several hours to half a day. During this period, a few per cent of M ≥ 6.0 earthquakes and a few tens of per cent of M ≥ 5.5 earthquakes are undetected in the global CMT catalogue, while the magnitude completeness threshold of the catalogue has been estimated to be around 5.5 [e.g., Kagan, 2003, PEPI].
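
    The detection model described above is straightforward to write down. The following is a minimal sketch under stated assumptions (b = 1.0, μ = 5.5, σ = 0.25 are invented values, not the paper's estimates); it is not the paper's Bayesian estimation code.

```python
# Minimal sketch of the detected-magnitude model (invented parameter values;
# not the paper's Bayesian estimation code).
import numpy as np
from scipy.stats import norm

def detected_density(m, beta, mu, sigma):
    """Unnormalized density of detected magnitudes: GR law times q(M)."""
    gr = beta * np.exp(-beta * m)         # Gutenberg-Richter density (beta = b*ln10)
    q = norm.cdf(m, loc=mu, scale=sigma)  # detection rate; q(mu) = 0.5
    return gr * q

m = np.linspace(4.0, 8.0, 401)
pdf = detected_density(m, beta=np.log(10.0), mu=5.5, sigma=0.25)  # b = 1.0
pdf /= np.trapz(pdf, m)                   # normalize numerically
print(f"detected-magnitude mode near M {m[np.argmax(pdf)]:.2f}")
```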

  13. The velocity effects of large historical earthquakes in Chinese mainland

    NASA Astrophysics Data System (ADS)

    Tan, Weijie; Dong, Danan; Wu, Bin

    2016-04-01

    Accompanying the collision between the Indian and Eurasian plates, China has experienced many large earthquakes over the past 100 years. These large earthquakes are mainly located along several seismic belts in the Tien Shan, the Tibetan Plateau, and Northern China. The postseismic deformation and stress accumulation induced by these historical earthquakes are important for assessing contemporary seismic hazards. The postseismic deformation induced by historical large earthquakes also influences the observed present-day velocity field. The relaxation of the viscoelastic asthenosphere is modeled on a layered, spherically symmetric earth with Maxwell rheology. The layer thicknesses, densities ρ, and P-wave velocities Vp are taken from PREM, and the shear moduli are derived from ρ and Vp. The viscosity adopted for the lower crust and upper mantle in this study is 1×10^19 Pa·s. Viscoelastic relaxation contributions due to 34 historical large earthquakes in China from 1900 to 2001 are calculated using the VISCO1D-v3 program developed by Pollitz (1997). We calculated the model-predicted velocity field in China in 2015 caused by these historical large earthquakes. The pattern of the predicted velocity field is consistent with the present movement of the crust, with peak velocities reaching 6 mm yr⁻¹. Southwestern China moves northeastward, and a significant rotation occurs at the edge of the Tibetan Plateau. The velocity field caused by historical large earthquakes provides a basis for isolating the velocity field caused by contemporary tectonic movement from the geodetic observations. It also provides critical information for investigating regional stress accumulation and assessing mid- to long-term earthquake risk.
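
    A back-of-envelope calculation shows why earthquakes from 1900-2001 still leave a signature in today's velocity field. Only the viscosity below comes from the abstract; the shear modulus is a typical crustal value I assume for illustration. Postseismic transients persist for several such relaxation times.

```python
# Back-of-envelope: Maxwell relaxation time tau = eta / mu. The viscosity is
# the study's value; the shear modulus is a typical value assumed here.
eta = 1e19                   # Pa*s, viscosity adopted in the study
mu = 3e10                    # Pa, assumed crust/upper-mantle shear modulus

tau_years = eta / mu / (365.25 * 24 * 3600)
print(f"Maxwell relaxation time ~ {tau_years:.0f} years")  # ~11 years
```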

  14. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
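
    One simple version of such a test, sketched below for illustration (it is not the paper's exact suite of tests), checks inter-event times against the exponential distribution implied by a constant-rate Poisson process.

```python
# Illustration (not the paper's exact tests): a KS test of inter-event times
# against the exponential distribution implied by a constant-rate Poisson
# process. Strictly, estimating the rate from the same data calls for a
# Lilliefors-style correction; this sketch ignores that refinement.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(42)
times = np.sort(rng.uniform(0.0, 100.0, size=100))  # stand-in declustered catalog
dt = np.diff(times)

stat, p = kstest(dt, "expon", args=(0.0, dt.mean()))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # large p: Poisson not rejected
```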

  15. Deeper penetration of large earthquakes on seismically quiescent faults.

    PubMed

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard. PMID:27284188

  16. Deeper penetration of large earthquakes on seismically quiescent faults

    NASA Astrophysics Data System (ADS)

    Jiang, Junle; Lapusta, Nadia

    2016-06-01

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard.

  17. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will also significantly increase the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied toward detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  19. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    PubMed

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide. PMID:22410538

  20. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-07-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance, and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite-fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < Mw < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
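
    For context, many of the point-source stress-drop values compiled in studies like this derive from the standard circular-crack relation, sketched below with assumed example numbers; the authors' pseudo-dynamic approach is more involved.

```python
# Standard circular-crack estimate (Eshelby, 1957) behind many point-source
# stress-drop values; example numbers are assumed for illustration.
def stress_drop_pa(m0_nm, radius_m):
    """Static stress drop for a circular rupture: 7*M0 / (16*r^3)."""
    return 7.0 * m0_nm / (16.0 * radius_m**3)

# An M0 ~ 4e16 N*m event (roughly Mw 5) with a 1 km source radius:
print(f"{stress_drop_pa(4e16, 1000.0) / 1e6:.1f} MPa")  # ~17.5 MPa
```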

  1. 1/f and the Earthquake Problem: Scaling constraints that facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    yoder, M. R.; Rundle, J. B.; Turcotte, D. L.

    2012-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate locally elevated hazard.]
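
    As a hedged illustration of the rupture-length constraint invoked above, the snippet below inverts an empirical magnitude-rupture-length regression (coefficients approximately those of Wells and Coppersmith, 1994, for strike-slip surface ruptures; they are not the authors' calibration) to get a region's magnitude potential from its linear size.

```python
# Hedged sketch: magnitude potential from region size via an empirical
# rupture-length regression. Coefficients approximate Wells & Coppersmith
# (1994) for strike-slip surface ruptures and are illustrative only.
import math

def magnitude_potential(region_length_km):
    """Largest magnitude whose rupture length fits inside the region."""
    return 5.16 + 1.12 * math.log10(region_length_km)

for L in (10, 50, 200):
    print(f"L = {L:3d} km -> m ~ {magnitude_potential(L):.1f}")
# L = 10 km -> ~6.3; L = 50 km -> ~7.1; L = 200 km -> ~7.7
```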

  2. 1/f and the Earthquake Problem: Scaling constraints to facilitate operational earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Yoder, M. R.; Rundle, J. B.; Glasscoe, M. T.

    2013-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or '1/f', nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this '1/f problem,' it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture-length, and Omori scaling with various empirical measurements, in combination with a metric to quantify rate trends in local seismicity, to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitudes of earthquakes the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents.

  3. Large earthquake processes in the northern Vanuatu subduction zone

    NASA Astrophysics Data System (ADS)

    Cleveland, K. Michael; Ammon, Charles J.; Lay, Thorne

    2014-12-01

    The northern Vanuatu (formerly New Hebrides) subduction zone (11°S to 14°S) has experienced large shallow thrust earthquakes with Mw > 7 in 1966 (MS 7.9, 7.3), 1980 (Mw 7.5, 7.7), 1997 (Mw 7.7), 2009 (Mw 7.7, 7.8, 7.4), and 2013 (Mw 8.0). We analyze seismic data from the latter four earthquake sequences to quantify the rupture processes of these large earthquakes. The 7 October 2009 earthquakes occurred in close spatial proximity over about 1 h in the same region as the July 1980 doublet. Both sequences activated widespread seismicity along the northern Vanuatu subduction zone. The focal mechanisms indicate interplate thrusting, but there are differences in waveforms that establish that the events are not exact repeats. With an epicenter near the 1980 and 2009 events, the 1997 earthquake appears to have been a shallow intraslab rupture below the megathrust, with strong southward directivity favoring a steeply dipping plane. Some triggered interplate thrusting events occurred as part of this sequence. The 1966 doublet ruptured north of the 1980 and 2009 events and also produced widespread aftershock activity. The 2013 earthquake rupture propagated southward from the northern corner of the trench with shallow slip that generated a substantial tsunami. The repeated occurrence of large earthquake doublets along the northern Vanuatu subduction zone is remarkable considering the doublets likely involved overlapping, yet different combinations of asperities. The frequent occurrence of large doublet events and rapid aftershock expansion in this region indicate the presence of small, irregularly spaced asperities along the plate interface.

  4. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes - Semi-Periodicity Analysis of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-07-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by recognizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well-determined confidence bounds. This paper presents improvements to the sequence identification and forecasting method mentioned above: the influence of earthquake size on the spectral analysis, and its importance in the identification of semi-periodic events, are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of the non-randomness probability is used; a better estimation of the appropriate upper-limit uncertainties to use in forecasts is introduced; and Bayesian analysis is applied to evaluate the posterior forecast performance. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0 and high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
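
    A minimal way to score semi-periodicity in a set of occurrence times is a Schuster-style spectrum, sketched below with an invented four-event catalog. This is a standard tool for periodicity in point processes, not the authors' labeled-point-process method, which additionally weights events by size.

```python
# Schuster-style periodicity score for a point process (illustration only;
# the authors' method treats a labeled point process and differs in detail).
import numpy as np

def schuster_p(event_times, period):
    """Chance probability of the observed phase clustering at this period."""
    phases = 2.0 * np.pi * np.asarray(event_times) / period
    r2 = np.sum(np.cos(phases))**2 + np.sum(np.sin(phases))**2
    return np.exp(-r2 / len(event_times))

times = np.array([1900.2, 1936.1, 1969.8, 2004.9])  # invented, ~35 yr apart
for T in (20.0, 35.0, 50.0):
    print(f"T = {T:4.1f} yr -> Schuster p = {schuster_p(times, T):.3f}")
# The smallest p appears near the true ~35 yr recurrence.
```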

  5. An earthquake strength scale for the media and the public

    USGS Publications Warehouse

    Johnston, A.C.

    1990-01-01

    A local engineer, E. P. Hailey, pointed this problem out to me shortly after the Loma Prieta earthquake. He felt that three problems limited the usefulness of magnitude in describing an earthquake to the public: (1) most people don't understand that it is not a linear scale; (2) of those who do realize the scale is not linear, very few understand the difference of a factor of ten in ground motion and 32 in energy release between points on the scale; and (3) even those who understand the first two points have trouble putting a given magnitude value into terms they can relate to. In summary, Mr. Hailey wondered why seismologists can't come up with an earthquake scale that doesn't confuse everyone and that conveys a sense of true relative size. Here, then, is an attempt to construct such a scale.
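
    The two factors Hailey cites follow directly from the definitions: amplitude grows by a factor of 10 per magnitude unit, and radiated energy by 10^1.5 ≈ 32 (via log10 E = 11.8 + 1.5 M). A worked check:

```python
# Worked arithmetic for the two factors quoted above.
dm = 1.0
amplitude_ratio = 10.0**dm          # ground-motion amplitude: 10x per unit
energy_ratio = 10.0**(1.5 * dm)     # radiated energy: ~31.6x per unit
print(amplitude_ratio, round(energy_ratio, 1))

# So a magnitude 7 releases ~1000 times the energy of a magnitude 5:
print(round(10.0**(1.5 * 2.0)))     # 1000
```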

  6. Numerical simulations of large earthquakes: Dynamic rupture propagation on heterogeneous faults

    USGS Publications Warehouse

    Harris, R.A.

    2004-01-01

    Our current conceptions of earthquake rupture dynamics, especially for large earthquakes, require knowledge of the geometry of the faults involved in the rupture, the material properties of the rocks surrounding the faults, the initial state of stress on the faults, and a constitutive formulation that determines when the faults can slip. In numerical simulations each of these factors appears to play a significant role in rupture propagation at the kilometer length scale. Observational evidence of the earth indicates that at least the first three of these elements (geometry, material, and stress) can vary over many scale dimensions. Future research on earthquake rupture dynamics needs to consider at which length scales these features are significant in affecting rupture propagation. © Birkhäuser Verlag, Basel, 2004.

  7. Large intermediate-depth earthquakes and the subduction process

    NASA Astrophysics Data System (ADS)

    Astiz, Luciana; Lay, Thorne; Kanamori, Hiroo

    1988-12-01

    This study provides an overview of intermediate-depth earthquake phenomena, placing emphasis on the larger, tectonically significant events, and exploring the relation of intermediate-depth earthquakes to shallower seismicity. In particular, we examine whether intermediate-depth events reflect the state of interplate coupling at subduction zones, and whether this activity exhibits temporal changes associated with the occurrence of large underthrusting earthquakes. The historical record of large intraplate earthquakes (mB ≥ 7.0) in this century shows that the New Hebrides and Tonga subduction zones have the largest number of large intraplate events. Regions associated with bends in the subducted lithosphere also have many large events (e.g., the Altiplano and New Ireland). We compiled a catalog of focal mechanisms for events that occurred between 1960 and 1984 with M > 6 and depth between 40 and 200 km. The final catalog includes 335 events with 47 new focal mechanisms, and is probably complete for earthquakes with mB ≥ 6.5. For events with M ≥ 6.5, nearly 48% of the events had no aftershocks and only 15% had more than five aftershocks within one week of the mainshock. Events with more than ten aftershocks are located in regions associated with bends in the subducted slab. Focal mechanism solutions for intermediate-depth earthquakes with M > 6.8 can be grouped into four categories: (1) normal-fault events (44%) and (2) reverse-fault events (33%), both with a strike nearly parallel to the trench axis; (3) normal- or reverse-fault events with a strike significantly oblique to the trench axis (10%); and (4) tear-faulting events (13%). The focal mechanisms of type 1 events occur mainly along strongly or moderately coupled subduction zones, where down-dip extensional stress prevails in a gently dipping plate. In contrast, along decoupled subduction zones great normal-fault earthquakes occur at shallow depths (e.g., the 1977 Sumbawa earthquake in Indonesia). Type …

  8. Scaling and Nucleation in Models of Earthquake Faults

    SciTech Connect

    Klein, W.; Ferguson, C.; Rundle, J.

    1997-05-01

    We present an analysis of a slider-block model of an earthquake fault which indicates the presence of metastable states ending in spinodals. We identify four parameters whose values determine the size and statistical distribution of the "earthquake" events. For values of these parameters consistent with real faults we obtain scaling of events associated not with critical point fluctuations but with the presence of nucleation events. © 1997 The American Physical Society.
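
    For readers unfamiliar with this model class, the sketch below is a minimal Olami-Feder-Christensen-style cellular slider-block toy (my illustration, not the authors' model): uniform loading plus threshold failure with partial stress transfer to neighbors produces a broad, Gutenberg-Richter-like distribution of event sizes.

```python
# Minimal OFC-style cellular slider-block toy (illustration, not the authors'
# model). Load uniformly to the next failure; failing blocks drop to zero
# stress and pass a fraction alpha to each neighbor (open boundaries).
import numpy as np

rng = np.random.default_rng(0)
N, alpha, threshold = 32, 0.2, 1.0
stress = rng.uniform(0.0, threshold, (N, N))
sizes = []

for _ in range(2000):
    stress += threshold - stress.max()      # uniform loading to next failure
    size = 0
    while True:
        failing = np.argwhere(stress >= threshold)
        if len(failing) == 0:
            break
        size += len(failing)
        for i, j in failing:
            s, stress[i, j] = stress[i, j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < N and 0 <= j + dj < N:
                    stress[i + di, j + dj] += alpha * s
    sizes.append(size)

print("largest avalanche:", max(sizes), "of", N * N, "blocks")
```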

  9. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
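
    The external penalty idea the report builds on can be illustrated in a few lines (a generic textbook sketch, not the BIGDOT implementation): replace the constrained problem with a sequence of unconstrained minimizations whose penalty on constraint violations grows.

```python
# Generic external penalty method sketch (not the BIGDOT implementation).
import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective
    return (x[0] - 2.0)**2 + (x[1] + 1.0)**2

def g(x):                       # inequality constraint, g(x) <= 0
    return x[0] + x[1] - 0.5

x = np.zeros(2)
for r in (1.0, 10.0, 100.0, 1000.0):
    penalized = lambda x, r=r: f(x) + r * max(0.0, g(x))**2
    x = minimize(penalized, x, method="BFGS").x  # warm-start each stage

print("solution:", x)           # approaches the constrained optimum (1.75, -1.25)
```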

  10. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed

    Aki, K

    1996-04-30

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity in the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  11. Scale dependence in earthquake phenomena and its relevance to earthquake prediction.

    PubMed Central

    Aki, K

    1996-01-01

    The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, by observations on seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed on the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled fmax of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity in the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity. PMID:11607659

  12. Large historical earthquakes and tsunamis in a very active tectonic rift: the Gulf of Corinth, Greece

    NASA Astrophysics Data System (ADS)

    Triantafyllou, Ioanna; Papadopoulos, Gerassimos

    2014-05-01

    The Gulf of Corinth is an active tectonic rift controlled by E-W trending normal faults, with an uplifted footwall in the south and a subsiding hangingwall with antithetic faulting in the north. Regional geodetic extension rates up to about 1.5 cm/yr have been measured, among the highest for tectonic rifts anywhere on Earth, while seismic slip rates up to about 1 cm/yr have been estimated. Large earthquakes with magnitudes, M, up to about 7 have been historically documented and instrumentally recorded. In this paper we compile historical documentation of earthquake and tsunami events occurring in the Gulf of Corinth from antiquity to the present. The completeness of the reported events improves with time, particularly after the 15th century. The majority of tsunamis were caused by earthquake activity, although aseismic landsliding is a relatively frequent agent of tsunami generation in the Gulf of Corinth. We focus on better understanding the process of tsunami generation by earthquakes. To this aim we consider the elliptical rupture zones of all the strong (M ≥ 6.0) historical and instrumental earthquakes known in the Gulf of Corinth, taking into account rupture zones determined by previous authors. However, magnitudes, M, of historical earthquakes were recalculated from a set of empirical relationships between M and seismic intensity established for earthquakes occurring in Greece during the instrumental era of seismicity. For this application the macroseismic field of each of the earthquakes was identified and seismic intensities were assigned. Another set of empirical relationships, M/L and M/W, for instrumentally recorded earthquakes in the Mediterranean region was applied to calculate rupture zone dimensions, where L = rupture zone length and W = rupture zone width. The rupture zone positions were decided on the basis of the localities of the highest seismic intensities and co-seismic ground failures, if any, while the orientation of the maximum …

  13. Earthquake Hazard and Risk Assessment based on Unified Scaling Law for Earthquakes: State of Gujarat, India

    NASA Astrophysics Data System (ADS)

    Nekrasova, Anastasia; Kossobokov, Vladimir; Parvez, Imtiyaz

    2016-04-01

    The Gujarat state of India is one of the most seismically active intercontinental regions of the world. Historically, it has experienced many damaging earthquakes, including the devastating 1819 Rann of Kutch and 2001 Bhuj earthquakes. The effect of the latter is grossly underestimated by the Global Seismic Hazard Assessment Program (GSHAP). To assess a more adequate earthquake hazard for the state of Gujarat, we apply the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter recurrence relation by taking into account the naturally fractal distribution of earthquake loci. USLE has evident implications, since any estimate of seismic hazard depends on the size of the territory considered and, therefore, may differ dramatically from the actual one when scaled down to the proportion of the area of interest (e.g., of a city) from the enveloping area of investigation. We cross-compare the seismic hazard maps compiled for the same standard regular 0.2°×0.2° grid (i) in terms of design ground acceleration (DGA) based on the neo-deterministic approach, (ii) in terms of probabilistic exceedance of peak ground acceleration (PGA) by GSHAP, and (iii) the one resulting from the USLE application. Finally, we present maps of seismic risk for the state of Gujarat integrating the obtained seismic hazard, population density based on 2011 census data, and a few model assumptions of vulnerability.
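
    The USLE recurrence relation itself is compact: log10 N(M, L) = A + B(5 − M) + C log10 L, where N is the annual rate of magnitude-M events in an area of linear size L. The sketch below uses invented placeholder coefficients to show why downscaling the territory changes the hazard estimate; it is not the authors' Gujarat calibration.

```python
# USLE sketch with placeholder coefficients (A, B, C are invented; the paper
# estimates them from the catalog): the same relation gives very different
# rates when scaled from a broad region down to a city-sized cell.
import math

def usle_rate(m, L_km, A=-2.5, B=0.9, C=1.2):
    """Annual rate of magnitude-m events in an area of linear size L (km)."""
    return 10.0**(A + B * (5.0 - m) + C * math.log10(L_km))

print(f"M6 over 500 km: {usle_rate(6.0, 500.0):.3f} events/yr")
print(f"M6 over  50 km: {usle_rate(6.0, 50.0):.4f} events/yr")
```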

  14. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    USGS Publications Warehouse

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ≲ 3) earthquakes in southern California, the east San Francisco Bay, and the aftershock sequence of the 1989 Loma Prieta earthquake. I quantify the degree of mechanism variability on a range of length scales by comparing the hypocentral distance between every pair of events and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance less than ~2 km) have very similar focal mechanisms, implying that faults of similar orientation produce similar earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2-50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying that it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  15. Afterslip Distribution of Large Earthquakes Using Viscoelastic Media

    NASA Astrophysics Data System (ADS)

    Sato, T.; Higuchi, H.

    2009-12-01

    One of the important parameters in simulations of earthquake generation is the frictional properties of faults. To investigate these frictional properties, many authors have studied the coseismic slip and afterslip distributions of large plate-interface earthquakes using coseismic and postseismic surface deformation from GPS data. Most of these studies used elastic media to obtain the afterslip distribution. However, the effect of viscoelastic relaxation in the asthenosphere on postseismic surface deformation is important (Matsu'ura and Sato, GJI, 1989; Sato and Matsu'ura, GJI, 1992). Therefore, the studies using elastic media did not estimate the correct afterslip distribution, because they attributed all of the surface deformation to afterslip on the plate interface. We estimate the afterslip distribution of large interplate earthquakes using viscoelastic media, considering not only the viscoelastic response to coseismic slip but also the viscoelastic response to the afterslip itself. Because many studies have suggested that the magnitude of afterslip is comparable to that of coseismic slip, the viscoelastic response to afterslip cannot be neglected. Surface displacement data therefore include the viscoelastic response to coseismic slip, the viscoelastic response to afterslip occurring from just after the coseismic period up to the present, and the elastic response to the present afterslip. We estimate the afterslip distribution for the 2003 Tokachi-oki earthquake, Hokkaido, Japan, using GPS data from GSI, Japan. We use the CAMP model (Hashimoto et al., PAGEOPH, 2004) as the plate interface between the Pacific plate and the North American plate. The viscoelastic results show more clearly that afterslip is distributed in areas where coseismic slip did not occur. They also show that afterslip concentrates on deeper parts of the plate interface in the area adjoining the 2003 Tokachi-oki earthquake to the east.

  16. The Strain Energy, Seismic Moment and Magnitudes of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2004-12-01

    The strain energy Est, as potential energy, released by an earthquake and the seismic moment M0 are two fundamental physical earthquake parameters. The earthquake rupture process "represents" the release of the accumulated Est. The moment M0, first obtained in 1966 by Aki, revolutionized the quantification of earthquake size and led to the elimination of the limitations of the conventional magnitudes (originally ML, Richter, 1930) mb, Ms, m, MGR. Both M0 and Est, although not in a 1-to-1 correspondence, are uniform measures of size, although Est is presently less accurate than M0. Est is partitioned into seismic (Es), fracture (Eg), and frictional (Ef) energy, and Ef is lost as frictional heat; the available strain energy is Est = Es + Eg (see Aki and Richards, 1980, and Kostrov and Das, 1988, for fundamentals on M0 and Est). Related to M0, Est, and Es, several modern magnitudes were defined under various assumptions: the moment magnitude Mw (Kanamori, 1977), strain energy magnitude ME (Purcaru and Berckhemer, 1978), tsunami magnitude Mt (Abe, 1979), mantle magnitude Mm (Okal and Talandier, 1987), seismic energy magnitude Me (Choy and Boatwright, 1995; Yanovskaya et al., 1996), and body-wave magnitude Mpw (Tsuboi et al., 1998). The available strain energy is Est = (1/(2μ)) Δσ M0, where Δσ is the average stress drop; ME is given by ME = (2/3)(log M0 + log(Δσ/μ) − 12.1), with log Est = 11.8 + 1.5 ME. The estimation of Est was modified to include the M0, Δσ, and μ of predominant high-slip zones (asperities) to account for multiple events (Purcaru, 1997): Est = (1/2) Σi (1/μi) M0,i Δσi, with Σi M0,i = M0. We derived the energy balance of Est, Es, and Eg as Est/M0 = (1 + e(g,s)) Es/M0, where e(g,s) = Eg/Es. We analyzed a set of about 90 large earthquakes and found that, depending on the goal, these magnitudes quantify the rupture process differently, thus providing complementary means of earthquake characterization. Results for some …
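
    The relations above are self-consistent if CGS units are assumed (M0 in dyn·cm, Est in erg), which matches the classic log Est = 11.8 + 1.5 ME form. A numerical sanity check, with an assumed value of Δσ/μ:

```python
# Sanity check of the relations above in CGS units (M0 in dyn*cm, Est in erg),
# with an assumed stress-drop-to-rigidity ratio of 1e-4 (e.g., 3 MPa / 30 GPa).
import math

m0 = 1e27                 # dyn*cm, about Mw 7.3
ratio = 1e-4              # delta-sigma / mu (assumed)

est = 0.5 * ratio * m0                                         # Est = dsig*M0/(2mu)
me = (2.0 / 3.0) * (math.log10(m0) + math.log10(ratio) - 12.1)
print(f"Est = {est:.2e} erg, ME = {me:.2f}")
print(f"11.8 + 1.5*ME = {11.8 + 1.5 * me:.2f} vs log10(Est) = {math.log10(est):.2f}")
```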

  17. Fast rupture propagation for large strike-slip earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Mori, Jim; Koketsu, Kazuki

    2016-04-01

    Studying rupture speeds of shallow earthquakes is of broad interest because it has a large effect on the strong near-field shaking that causes damage during earthquakes, and it is an important parameter that reflects stress levels and energy on a slipping fault. However, resolving rupture speed is difficult in standard waveform inversion methods due to limited near-field observations and the tradeoff between rupture speed and fault size for teleseismic observations. Here we applied back-projection methods to estimate the rupture speeds of 15 Mw ≥ 7.8 dip-slip and 8 Mw ≥ 7.5 strike-slip earthquakes for which direct P waves are well recorded in Japan on Hi-net, or in North America on USArray. We found that all strike-slip events had very fast average rupture speeds of 3.0-5.0 km/s, which are near or greater than the local shear wave velocity (supershear). These values are faster than for thrust and normal faulting earthquakes that generally rupture with speeds of 1.0-3.0 km/s.

  18. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  19. Low-frequency source parameters of twelve large earthquakes

    NASA Astrophysics Data System (ADS)

    Harabaglia, Paolo

    1993-06-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time functions of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes; a single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  20. Analysis concepts for large telescope structures under earthquake load

    NASA Astrophysics Data System (ADS)

    Koch, Franz

    1997-03-01

    The Very Large Telescope (VLT) of ESO will be placed on Cerro Paranal in the Atacama desert in northern Chile. This site provides excellent conditions for astronomical observations; however, significant seismic activity is likely to occur. The telescope structure and its components have to resist the largest earthquakes expected during their lifetime. Therefore, design specifications and structural analyses have to take into account the loads caused by such earthquakes. The present contribution shows some concepts and techniques for the assessment of earthquake-resistant telescope design by the finite element method (FEM). After establishing the general design criteria and the geological and geotechnical characteristics of the site location, the seismic action can be defined. A description of various representations of the seismic action and the procedure to define the commonly used response spectrum are presented in more detail. A brief description of the response spectrum analysis method and of the result evaluation procedure follows. Additionally, some calculation concepts for parts of the entire telescope structure under seismic loads are provided. Finally, a response spectrum analysis of the entire VLT structure performed at ESO is presented to show a practical application of the analysis method and evaluation procedure mentioned above.
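
    The core of the response-spectrum method the abstract refers to can be sketched compactly: sweep single-degree-of-freedom oscillators over natural periods, integrate each against the ground-acceleration record, and keep the peak response. The version below is an illustration with a synthetic input record, not ESO's analysis; it uses the Newmark average-acceleration scheme.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of a ground-acceleration record (unit mass)."""
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        k, c = wn**2, 2.0 * zeta * wn
        khat = k + 2.0 * c / dt + 4.0 / dt**2      # Newmark beta=1/4, gamma=1/2
        u = v = 0.0
        a = -ag[0]                                 # initial equilibrium, u = v = 0
        umax = 0.0
        for i in range(len(ag) - 1):
            dp = -(ag[i + 1] - ag[i])
            dph = dp + (4.0 / dt + 2.0 * c) * v + 2.0 * a
            du = dph / khat
            dv = 2.0 * du / dt - 2.0 * v
            da = 4.0 * du / dt**2 - 4.0 * v / dt - 2.0 * a
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        Sa.append(wn**2 * umax)                    # pseudo-acceleration
    return np.array(Sa)

rng = np.random.default_rng(1)
ag = 0.5 * rng.standard_normal(1000)               # synthetic record, m/s^2, dt = 0.01 s
print(response_spectrum(ag, 0.01, [0.1, 0.2, 0.5, 1.0, 2.0]))
```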

  1. Premonitory patterns of seismicity months before a large earthquake: Five case histories in Southern California

    PubMed Central

    Keilis-Borok, V. I.; Shebalin, P. N.; Zaliapin, I. V.

    2002-01-01

    This article explores the problem of short-term earthquake prediction based on spatio-temporal variations of seismicity. Previous approaches to this problem have used precursory seismicity patterns that precede large earthquakes with “intermediate” lead times of years. Examples include increases of earthquake correlation range and increases of seismic activity. Here, we look for a renormalization of these patterns that would reduce the predictive lead time from years to months. We demonstrate a combination of renormalized patterns that preceded within 1–7 months five large (M ≥ 6.4) strike-slip earthquakes in southeastern California since 1960. An algorithm for short-term prediction is formulated. The algorithm is self-adapting to the level of seismicity: it can be transferred without readaptation from earthquake to earthquake and from area to area. Exhaustive retrospective tests show that the algorithm is stable to variations of its adjustable elements. This finding encourages further tests in other regions. The final test, as always, should be advance prediction. The suggested algorithm has a simple qualitative interpretation in terms of deformations around a soon-to-break fault: the blocks surrounding that fault began to move as a whole. A more general interpretation comes from the phenomenon of self-similarity since our premonitory patterns retain their predictive power after renormalization to smaller spatial and temporal scales. The suggested algorithm is designed to provide a short-term approximation to an intermediate-term prediction. It remains unclear whether it could be used independently. It seems worthwhile to explore similar renormalizations for other premonitory seismicity patterns. PMID:12482945

  2. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real space and redshift space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H0 = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  3. Slow earthquakes with duration of about 100 s suggested by the scaling law

    NASA Astrophysics Data System (ADS)

    Ide, S.; Imanishi, K.; Yoshida, Y.

    2007-12-01

    Slow earthquakes in western Japan are considered to be a group of interplate slip events that obey the scaling law proposed by Ide et al. (2007), in which the seismic moment is proportional to the event duration. However, the population of events in this group is not continuous. In the Nankai slow earthquake zone, we have found deep low-frequency earthquakes (LFEs) below 1 s, very low frequency earthquakes (VLFs; Ito et al., 2006) between 20 and 50 s, and slow slip events (SSEs) above a few days. Are there any slow events other than these? If a slow earthquake that satisfies the scaling relation with a duration of about 100 s occurs within the Nankai slow earthquake zone, it is observable at low-noise stations with a vertical broadband sensor, but only if they are located near the maximum direction of the near-field signal. One station that satisfies these conditions is F-net station KIS, with STS-1 seismometers, maintained by the National Research Institute for Earth Science and Disaster Prevention, Japan. This station has recorded tremor activity a few times per year since 1996, and during most of these episodes we can detect many large low-frequency signals. Longer events include VLFs, and we can show that some previously reported VLFs are actually part of a longer event. We installed a temporary observation station 15 km from KIS and recorded a sequence of low-frequency tremor during July 17-20, 2007. Although the low-frequency signals are visible at both stations, the amplitudes are quite different, which suggests that we can determine the location and orientation of the source using a small dense array of broadband seismometers. As expected, the moment magnitudes of the 100 s events are around 4, which satisfies the scaling relation for slow earthquakes. The existence of much larger and longer events is implied by the records at KIS, although large low-frequency noise below 3 mHz impedes reliable judgments. The existence of such events suggests that slow earthquakes of any size may occur.
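
    A worked check of the scaling quoted above, using the Ide et al. (2007) proportionality M0 ≈ c·T with c of order 10^12-10^13 N·m/s (the constant chosen here is an assumption within that range):

```python
# Worked check of the moment-duration scaling (assumed constant c within the
# published 1e12-1e13 N*m/s range).
import math

def slow_event_mw(duration_s, c=1e13):
    m0 = c * duration_s                       # M0 proportional to duration
    return (2.0 / 3.0) * (math.log10(m0) - 9.05)

print(f"T = 100 s  -> Mw ~ {slow_event_mw(100.0):.1f}")        # ~4, as observed
print(f"T = 3 days -> Mw ~ {slow_event_mw(3 * 86400):.1f}")    # SSE-sized event
```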

  4. Earthquake magnitude calculation without saturation from the scaling of peak ground displacement

    NASA Astrophysics Data System (ADS)

    Melgar, Diego; Crowell, Brendan W.; Geng, Jianghui; Allen, Richard M.; Bock, Yehuda; Riquelme, Sebastian; Hill, Emma M.; Protti, Marino; Ganas, Athanassios

    2015-07-01

    GPS instruments are noninertial and directly measure displacements with respect to a global reference frame, while inertial sensors are affected by systematic offsets—primarily tilting—that adversely impact integration to displacement. We study the magnitude scaling properties of peak ground displacement (PGD) from high-rate GPS networks at near-source to regional distances (~10-1000 km), from earthquakes between Mw6 and 9. We conclude that real-time GPS seismic waveforms can be used to rapidly determine magnitude, typically within the first minute of rupture initiation and in many cases before the rupture is complete. While slower than earthquake early warning methods that rely on the first few seconds of P wave arrival, our approach does not suffer from the saturation effects experienced with seismic sensors at large magnitudes. Rapid magnitude estimation is useful for generating rapid earthquake source models, tsunami prediction, and ground motion studies that require accurate information on long-period displacements.
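
    The scaling approach can be sketched as follows. The functional form log10(PGD) = A + B·Mw + C·Mw·log10(R) follows the cited line of work (e.g., Crowell et al., 2013), but the coefficients below are round illustrative numbers rather than the paper's regression, so treat the output as qualitative.

```python
# Sketch of PGD magnitude scaling; A, B, C are round illustrative values
# (PGD in cm, hypocentral distance R in km), not the paper's regression.
import math

A, B, C = -4.4, 1.0, -0.14

def pgd_cm(mw, r_km):
    return 10.0**(A + B * mw + C * mw * math.log10(r_km))

def mw_from_pgd(pgd, r_km):
    """Invert the scaling law: magnitude from observed PGD and distance."""
    return (math.log10(pgd) - A) / (B + C * math.log10(r_km))

print(f"Mw 9.0 at 200 km -> PGD ~ {pgd_cm(9.0, 200.0):.0f} cm")
print(f"PGD 100 cm at 200 km -> Mw ~ {mw_from_pgd(100.0, 200.0):.1f}")
```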

  5. Failure of self-similarity for large (Mw > 8¼) earthquakes.

    USGS Publications Warehouse

    Hartzell, S.H.; Heaton, T.H.

    1988-01-01

    Compares teleseismic P-wave records for earthquakes in the magnitude range 6.0-9.5 with synthetics for a self-similar, ω² source model and concludes that the energy radiated by very large earthquakes (Mw > 8¼) is not self-similar to that radiated from smaller earthquakes (Mw < 8¼). Furthermore, in the period band from 2 sec to several tens of seconds, it is concluded that large subduction earthquakes have an average spectral decay rate of ω^-1.5. This spectral decay rate is consistent with a previously noted tendency of the ω² model to overestimate Ms for large earthquakes. -Authors

  6. The Energetics of Large Shallow and Deep Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2002-05-01

    Large earthquakes occur mostly as complex processes with inhomogeneities of variable strength and size, and also in different tectonic regimes. As a result, the released strain energy (Est) can vary significantly between individual events with about the same seismic moment M0, i.e., there is no 1-to-1 correspondence between them. We quantify the energy balance of earthquakes in terms of Est and its components (fracture (Eg), friction (Ef), and seismic (Es) energy) for 75 large earthquakes (Mw >= 7) with more accurate source parameters. Based on an extended Hamilton's principle, which considers nonconservative forces and any forces not accounted for in the potential energy function, and assuming complete stress drop, we estimate Est using the approach of Purcaru (EOS, 1997, 78, 481). The events are from the thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep, and continental classes. The energy balance is determined from Est/M0 = (1 + e(g,s))(Es/M0), with e(g,s) = Eg/Es; Est and Es are not in a 1-to-1 correspondence. In the Est budget: (1) larger Es (i.e., more energetic rupture) is radiated by deep, in-slab, strike-slip, and some continental events; (2) the interplate thrust events in subduction zones show a relatively balanced partition of Es and Eg; and (3) small Es, and much larger Eg, is found for slow and tsunami events. A reference class for which Es and Eg are comparable is suggested. Our results are consistent with those of other authors (Kikuchi, 1992; Choy and Boatwright, 1995; Newman and Okal, 1998). The average stress drop shows significant variability, even for events within the thrust class. We found that the strong variation of stress drop in localized regions of the rupture area plays a major role in partitioning the released strain energy.

  7. Earthquake Apparent Stress Scaling for the 1999 Hector Mine Sequence

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Mayeda, K.

    2003-12-01

    There is currently a disagreement within the geophysical community on the way earthquake energy scales with magnitude. One set of studies finds evidence that energy release per unit seismic moment (apparent stress) is constant (e.g., Choy and Boatwright, 1995; McGarr, 1999; Ide and Beroza, 2001). Other studies find that the apparent stress increases with magnitude (e.g., Kanamori et al., 1993; Abercrombie, 1995; Mayeda and Walter, 1996; Izutani and Kanamori, 2001). The resolution of this issue is complicated by the difficulty of accurately accounting for attenuation, radiation inhomogeneities, and bandwidth, and of determining the seismic energy radiated by earthquakes over a wide range of event sizes in a consistent manner. We try to improve upon earlier results by using consistent techniques over common paths for a wide range of sizes and seismic phases. We have examined about 130 earthquakes from the Hector Mine earthquake sequence in Southern California, ranging in size from the October 16, 1999 Mw=7.1 mainshock down to ML=3.0 aftershocks into 2000. The mainshock has unclipped Pg and Lg phases at a number of high-quality regional stations (e.g., CMB, ELK, TUC) where we can use the common path to examine apparent stress scaling relations directly. We are careful to avoid any event selection bias that would be related to apparent stress values. We fix each station's path correction using the independent moment and energy estimates for the mainshock. We then use those corrections to determine the seismic energy for each event based on regional Lg spectra. We use a modeling technique (MDAC) based on a modified Brune (1970) spectral shape but without any assumption of corner-frequency scaling (Walter and Taylor, 2002). We perform similar analysis using the Pg spectra. We find the energy estimates for the same events are consistent between Lg estimates, Pg estimates, and estimates using the independent regional coda envelope technique (Mayeda and Walter, 1996; Mayeda et al. …
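
    For reference, the quantity under debate is defined as σa = μ·Es/M0. A worked example with assumed values (not data from the Hector Mine sequence):

```python
# Worked example with assumed values (not Hector Mine data): apparent stress.
mu = 3e10      # Pa, crustal shear modulus (assumed)
Es = 1e13      # J, radiated seismic energy (assumed)
M0 = 6e16      # N*m, seismic moment, roughly Mw 5.1 (assumed)

sigma_a = mu * Es / M0
print(f"apparent stress ~ {sigma_a / 1e6:.1f} MPa")  # 5.0 MPa
```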

  8. Climate Regime Controls Fluvial Evacuation of Sediment Mobilized by Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, J.; Jin, Z.; Hilton, R. G.; Zhang, F.; Densmore, A. L.; Li, G.; West, A. J.

    2014-12-01

    Large earthquakes in active mountain belts can trigger landslides that mobilize large volumes of clastic sediment. Delivery of this material to river channels may result in aggradation and flooding, while sediment residing on hillslopes may increase the likelihood of subsequent landslides and debris flows. Despite this recognition, the controls on the residence time of coseismic landslide sediment in river catchments remain poorly understood. Here we assess the residence time of fine-grained (<0.25 mm) landslide sediment mobilized by the 2008 Mw 7.9 Wenchuan earthquake, China, using suspended sediment fluxes measured in 16 river catchments from 2006 to 2012. Following the earthquake, suspended sediment flux was elevated by a factor of 3 to 7, consistent with observations of dilution of 10Be concentrations in detrital quartz (West et al., 2014). However, the total 2008-2012 export was much less than the input of fine-grained sediment by coseismic landslides, as determined by area-volume scaling and deposit grain-size distributions. Estimates of the residence time of fine-grained sediment in the affected river catchments range from <1 to >100 years at the present export rate. We show that the residence time is proportional to the extent of coseismic landsliding and inversely proportional to the frequency of intense runoff events. Together with previously reported observations from the 1999 Chi-Chi earthquake in Taiwan, our results demonstrate the importance of climate in setting the length of time over which river systems are impacted by large earthquakes. References: West et al., 2014, Earth Planet. Sci. Lett., 396, 143-153.
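    A back-of-the-envelope sketch of the residence-time estimate implied above (remaining sediment inventory divided by export rate); the tonnages are hypothetical, not Wenchuan measurements.

```python
def residence_time_years(inventory_Mt, annual_export_Mt):
    """Remaining fine-sediment inventory divided by annual fluvial export."""
    return inventory_Mt / annual_export_Mt

# Hypothetical catchment: 50 Mt of landslide-derived fine sediment,
# exported at 2 Mt/yr above the pre-earthquake background -> 25 yr.
print(residence_time_years(50.0, 2.0))
```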

  9. Recurrence time distributions of large earthquakes in conceptual model studies

    NASA Astrophysics Data System (ADS)

    Zoeller, G.; Hainzl, S.

    2007-12-01

    The recurrence time distribution of large earthquakes in seismically active regions is a crucial ingredient for seismic hazard assessment. However, due to sparse observational data and a lack of knowledge of the precise mechanisms controlling seismicity, this distribution is unknown. In many practical applications of seismic hazard assessment, the Brownian passage time (BPT) distribution (or a different distribution) is fitted to a small number of observed recurrence times. Here, we study various aspects of recurrence time distributions in conceptual models of individual faults and fault networks. First, the dependence of the recurrence time distribution on fault interaction is investigated by means of a network of Brownian relaxation oscillators. Second, the Brownian relaxation oscillator is modified into a model for large earthquakes that also accounts for the statistics of intermediate events in a more appropriate way. This model simulates seismicity in a fault zone consisting of a major fault and some surrounding smaller faults with Gutenberg-Richter type seismicity, and can be used for more realistic and robust estimation of the real recurrence time distribution in seismic hazard assessment.
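    A minimal sketch of the BPT density mentioned above, in its usual mean-recurrence/aperiodicity parameterization; the parameter values are illustrative, not fitted to any fault.

```python
import math

def bpt_pdf(t, T, alpha):
    """BPT density with mean recurrence T and aperiodicity alpha."""
    return math.sqrt(T / (2.0 * math.pi * alpha**2 * t**3)) * \
           math.exp(-((t - T) ** 2) / (2.0 * T * alpha**2 * t))

# Illustrative only: mean recurrence 150 yr, aperiodicity 0.5.
print(bpt_pdf(100.0, T=150.0, alpha=0.5))  # density at t = 100 yr
```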

  10. Estimating Casualties for Large Earthquakes Worldwide Using an Empirical Approach

    USGS Publications Warehouse

    Jaiswal, Kishor; Wald, David J.; Hearne, Mike

    2009-01-01

    We developed an empirical country- and region-specific earthquake vulnerability model to be used as a candidate for post-earthquake fatality estimation by the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) system. The earthquake fatality rate is based on past fatal earthquakes (earthquakes causing one or more deaths) in individual countries where at least four fatal earthquakes occurred during the catalog period (since 1973). Because only a few dozen countries have experienced four or more fatal earthquakes since 1973, we propose a new global regionalization scheme based on idealization of countries that are expected to have similar susceptibility to future earthquake losses given the existing building stock, its vulnerability, and other socioeconomic characteristics. The fatality estimates obtained using an empirical country- or region-specific model will be used along with other selected engineering risk-based loss models for generation of automated earthquake alerts. These alerts could potentially benefit the rapid-earthquake-response agencies and governments for better response to reduce earthquake fatalities. Fatality estimates are also useful to stimulate earthquake preparedness planning and disaster mitigation. The proposed model has several advantages as compared with other candidate methods, and the country- or region-specific fatality rates can be readily updated when new data become available.
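    A hedged sketch of the general shape of such an empirical model: a country- or region-specific fatality rate that grows with shaking intensity, summed over the exposed population. The lognormal form and the theta/beta values are assumptions for illustration, not published PAGER coefficients.

```python
import math

def fatality_rate(mmi, theta=12.0, beta=0.2):
    """Lognormal-CDF fatality rate vs. shaking intensity (placeholder params)."""
    return 0.5 * (1.0 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2.0))))

def expected_fatalities(exposure_by_mmi):
    """Sum rate(MMI) * exposed population over intensity bins."""
    return sum(pop * fatality_rate(mmi) for mmi, pop in exposure_by_mmi.items())

# Hypothetical exposure per intensity bin, e.g. from a shaking map.
print(round(expected_fatalities({6.0: 500000, 7.0: 200000, 8.0: 50000})))
```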

  11. Earthquake!

    ERIC Educational Resources Information Center

    Markle, Sandra

    1987-01-01

    A learning unit about earthquakes includes activities for primary grade students, including making inferences and defining operationally. Task cards are included for independent study on earthquake maps and earthquake measuring. (CB)

  12. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a ...

  13. Earthquakes

    MedlinePlus

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  14. Earthquake source parameters and scaling relationships from microseismicity at TauTona Gold Mine, South Africa

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.

    2012-12-01

    The issue of earthquake source scaling continues to draw considerable debate within the seismological community. Findings that both support and refute the claim that systematic differences exist between the source processes of small and large earthquakes motivate the study of how source parameters, such as seismic moment, corner frequency, radiated seismic energy, and apparent stress, scale over a wide range of magnitudes. To address this question, we are conducting a comprehensive examination of earthquake source parameters from microseismicity recorded at the TauTona gold mine in South Africa. At the TauTona gold mine, hundreds to thousands of earthquakes are recorded every day within a few meters to kilometers of seismometers installed at depth throughout the mine. This high rate of seismicity and close proximity to the recording instruments provide the ideal location and dataset to investigate source parameters and scaling relationships for earthquakes over a wide magnitude range of -4 < Mw < 4. We focus our investigation on earthquakes recorded during mining quiet hours, to minimize blasts and rockbursts in our catalog, and on earthquakes that occurred along the Pretorius Fault, the largest fault system running through the mine, to evaluate source parameters of fault zone earthquakes. The mine seismic network operated by the Institute of Mine Seismology (IMS), with a sample rate range of 3 - 2000 Hz, has been enhanced by a tight array of high-quality instruments deployed in the Pretorius Fault Zone at the deepest part of the mine (~3.6 km depth) as part of the Natural Laboratory in South African Mines (NELSAM). The NELSAM network includes 3 strong-motion accelerometers, 5 weak-motion accelerometers, and 3 geophones with a combined sample rate range of 6 - 12 kHz that allows us to reliably constrain corner frequencies of very small earthquakes. We use spectral analysis techniques and an omega-squared source model determined by an Empirical Green
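    A minimal sketch, under stated assumptions, of fitting the omega-squared model named above by grid search; the frequency band, parameter grids and synthetic spectrum are illustrative, not the authors' processing.

```python
import numpy as np

def omega_squared(f, omega0, fc):
    """Displacement spectrum Omega0 / (1 + (f/fc)^2)."""
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_corner_frequency(freqs, spec, fc_grid, omega0_grid):
    """Grid search minimizing log-spectral misfit; returns (omega0, fc)."""
    best, best_misfit = None, np.inf
    for fc in fc_grid:
        for om0 in omega0_grid:
            misfit = np.sum((np.log(spec) - np.log(omega_squared(freqs, om0, fc))) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (om0, fc), misfit
    return best

# Synthetic check: recover a 400 Hz corner from a noise-free model spectrum.
f = np.linspace(1.0, 3000.0, 500)
obs = omega_squared(f, 1e-6, 400.0)
print(fit_corner_frequency(f, obs, np.logspace(1.0, 3.5, 60), np.logspace(-7, -5, 20)))
```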

  15. Source Parameters of Large Magnitude Subduction Zone Earthquakes Along Oaxaca, Mexico

    NASA Astrophysics Data System (ADS)

    Fannon, M. L.; Bilek, S. L.

    2014-12-01

    Subduction zones host temporally and spatially varying seismogenic activity, including megathrust earthquakes, slow slip events (SSE), nonvolcanic tremor (NVT), and ultra-slow velocity layers (USL). We explore these variations by determining source parameters for large earthquakes (M > 5.5) along the Oaxaca segment of the Mexico subduction zone, an area that encompasses the wide range of activity noted above. We use waveform data for 36 earthquakes that occurred between January 1, 1990 and June 1, 2014, obtained from the IRIS DMC, generate synthetic Green's functions for the available stations, and deconvolve these from the observed records to determine a source time function for each event. From these source time functions, we measured rupture durations and scaled these by the cube root of the seismic moment to calculate the normalized duration for each event. Within our dataset, four events located updip from the SSE, USL, and NVT areas have longer rupture durations than the other events in this analysis. Two of these four events, along with one other event, are located within the SSE and NVT areas. The results of this study show that large earthquakes just updip from SSE and NVT areas have slower rupture characteristics than other events along the subduction zone not adjacent to SSE, USL, and NVT zones. Based on our results, we suggest a transitional zone for the seismic behavior rather than a distinct change at a particular depth. This study will aid in understanding the seismogenic behavior of subduction zones and the rupture characteristics of earthquakes near areas of slow slip processes.
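    A one-line sketch of the cube-root normalization described above; the reference moment is an arbitrary choice, and the numbers are illustrative.

```python
def normalized_duration(duration_s, M0, M0_ref=1.0e18):
    """Scale duration by (M0_ref / M0)^(1/3); M0_ref is arbitrary."""
    return duration_s * (M0_ref / M0) ** (1.0 / 3.0)

# A 20 s rupture at M0 = 8e18 N m maps to 10 s at the reference moment,
# so events of different size can be compared directly.
print(normalized_duration(20.0, 8.0e18))
```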

  16. Earthquake Source Scaling and Wave Propagation in Eastern North America: The Au Sable Forks, NY, Earthquake

    NASA Astrophysics Data System (ADS)

    Viegas, G.; Abercrombie, R.; Baise, L.; Kim, W.

    2005-12-01

    The 2002 M5 Au Sable Forks, NY, earthquake and its aftershocks make up the best recorded sequence in the northeastern USA. We use the local and regional recordings to investigate the characteristics of intraplate seismicity, focusing on source scaling relationships and regional wave propagation. A portable local network of 11 stations recorded 74 aftershocks of M < 3.2. We relocate the mainshock and early aftershocks using a master event technique, then relocate the aftershocks recorded by the local network with the double-difference method, using differential travel times measured from waveform cross-correlation. Both the master-event and double-difference location methods produce consistent results, suggesting complex conjugate faulting during the sequence. We identify a number of highly clustered groups of earthquakes suitable for EGF analysis. We use the EGF method to calculate the stress drop and radiated energy of the larger aftershocks to determine how they compare to moderate magnitude earthquakes, and also whether they differ significantly from interplate earthquakes. We consider the 9 largest aftershocks (M2 to M3.7), which were recorded on the regional network, as potential EGFs for the mainshock, but their focal mechanisms and locations are sufficiently different that we cannot resolve the mainshock source time function well. They do enable us to place constraints on the shape and duration of the source pulse to use in modeling the regional waveforms. We investigate the crustal structure in New York (Grenville) and New England (Appalachian) through forward modeling of the Au Sable Forks regional broadband records. We compute synthetic records of wave propagation in a layered medium, using published crustal models of the two regions as a starting point. We identify differences between the recorded data and synthetics for the Grenville and the Appalachian regions and improve the crustal models to better fit the recorded

  17. The Evolution of Regional Seismicity Between Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bowman, D.; King, G.

    We describe a simple model that links static stress (Coulomb) modeling to the regional seismicity around a major fault. Unlike conventional Coulomb stress techniques, which calculate stress changes, we model the evolution of the stress field relative to the failure stress. Background seismicity is attributed to inhomogeneities in the stress field, which are created by adding a random field that creates local regions above the failure stress. The inhomogeneous field is chosen such that when these patches fail, the resulting earthquake size distribution follows a Gutenberg-Richter law. Immediately following a large event, the model produces regions of increased seismicity where the overall stress field has been elevated (aftershocks) and regions of reduced seismicity where the stress field has been reduced (stress shadows). The high stress levels in the aftershock regions decrease due to loading following the main event. Combined with the stress shadow from the main event, this results in a broad seismically quiet region of lowered stress around the epicenter. Pre-event seismicity appears as the original stress shadows finally fill as a result of loading. The increase in seismicity initially occurs several fault lengths away from the main fault and moves inward as the event approaches. As a result of this effect, the seismic moment release in the region around the future epicenter increases as the event approaches. Synthetic catalogues generated by this model are virtually indistinguishable from real earthquake sequences in California and Washington.
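    A minimal sketch of drawing a Gutenberg-Richter size distribution like the one the model is tuned to reproduce; the b-value, minimum magnitude and seed are generic choices, not the paper's.

```python
import math
import random

def gr_magnitudes(n, b=1.0, m_min=2.0, seed=42):
    """Inverse-transform sampling of N(>=m) ~ 10^(-b(m - m_min))."""
    rng = random.Random(seed)
    return [m_min - math.log10(rng.random()) / b for _ in range(n)]

catalog = gr_magnitudes(10000)
# With b = 1, about 1% of events should exceed m_min + 2.
print(round(max(catalog), 2), sum(1 for m in catalog if m >= 4.0))
```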

  18. Earthquake Monitoring at Different Scales with Seiscomp3

    NASA Astrophysics Data System (ADS)

    Grunberg, M.; Engels, F.

    2013-12-01

    In the last few years, the French National Network of Seismic Survey (BCSF-RENASS) had to modernize its aging, in-house earthquake monitoring system. After intensive tests of several real-time frameworks, such as EarthWorm and Seiscomp3, we finally adopted Seiscomp3 in 2012. Our current system runs two pipelines in parallel: the first is tuned at a global scale to monitor world seismicity (event magnitudes > 5.5), and the second is tuned at a national scale for monitoring metropolitan France. The seismological stations used for the "world" pipeline come mainly from the Global Seismographic Network (GSN), whereas the stations for the "national" pipeline come from the RENASS short period network and from the RESIF broadband network. More recently we have started to tune Seiscomp3 at a smaller scale to monitor, in real time, a geothermal project (an R&D program in deep geothermal energy) in northeastern France. Besides the real-time monitoring capabilities of Seiscomp3, we have also used a very handy feature to play back a 4-month dataset at a local scale for the Rambervillers earthquake (22/02/2003, Ml=5.4), yielding roughly 2000 aftershock detections and locations.

  19. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  20. Stress drop and source scaling of the 2009 April L'Aquila earthquakes

    NASA Astrophysics Data System (ADS)

    Calderoni, Giovanna; Rovelli, Antonio; Singh, Shri Krishna

    2013-01-01

    The empirical Green's function (EGF) technique is applied in the frequency domain to 962 broad-band seismograms (3.3 ≤ MW ≤ 6.1) to determine the stress drop and source scaling of the 2009 April L'Aquila earthquakes. The station distances range from 100 to 250 km from the source. Ground motions of several L'Aquila earthquakes are characterized by large azimuthal variations due to source directivity, even at low magnitudes. Thus, individual-station stress-drop estimates are significantly biased when source directivity is not properly taken into account. To reduce the bias, we use single-station spectral ratios with pairs of earthquakes showing a similar degree of source directivity. The relative performance of constant versus varying stress-drop models is assessed through minimization of misfit in a least-mean-square sense. For this analysis, seismograms of 26 earthquakes occurring within 10 km of the hypocentres of the three strongest shocks are used. We find that a source model in which stress drop increases with earthquake size has the minimum misfit: compared to the best constant stress-drop model, the improvement in fit is of the order of 40 per cent. We also estimate the stress-drop scaling on a larger data set of 64 earthquakes, all of them having an independent estimate of seismic moment and a consistent focal mechanism. An earthquake that shows no directivity is chosen as the EGF event. This analysis confirms the former trend and yields individual-event stress drops very close to 10 MPa at magnitudes MW > 4.5 that decrease to 1 MPa, on average, at the smallest magnitudes. A varying stress-drop scaling of the L'Aquila earthquakes is consistent with results from other studies using EGF techniques but contrasts with the results of authors who used inversion techniques to separate source from site and propagation effects. We find that there is a systematic difference for small events between the results of the two methods, with lower and less scattered values
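    A short sketch of the single-pair spectral-ratio form that underlies the EGF technique, assuming standard omega-squared shapes; the moment ratio and corner frequencies below are illustrative, not L'Aquila values.

```python
import numpy as np

def egf_spectral_ratio(f, moment_ratio, fc_big, fc_small):
    """Ratio of two omega-squared spectra; path and site terms cancel."""
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_big) ** 2)

# Illustrative pair: moment ratio 1000, corners at 0.5 Hz and 5 Hz. The
# ratio flattens at the moment ratio below fc_big and at a lower plateau
# above fc_small, which is what constrains both corner frequencies.
f = np.logspace(-1.0, 1.5, 6)
print(egf_spectral_ratio(f, 1000.0, 0.5, 5.0))
```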

  1. Earthquakes.

    ERIC Educational Resources Information Center

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  2. Using DART-recorded Rayleigh waves for rapid CMT and finite fault analyses of large megathrust earthquakes.

    NASA Astrophysics Data System (ADS)

    Thio, H. K.; Polet, J.; Ryan, K. J.

    2015-12-01

    We study the use of long-period Rayleigh waves recorded by DART-type ocean bottom pressure sensors. The determination of an accurate moment and slip distribution after a megathrust subduction zone earthquake is essential for tsunami early warning. The two main reasons the DART data are of interest to this problem are: (1) contrary to the broadband data used in the early stages of earthquake analysis, the DART data do not saturate for large magnitude earthquakes, and (2) DART stations are located offshore and thus often fill gaps in the instrumental coverage at local and regional distances. Thus, by including DART-recorded Rayleigh waves in rapid response systems we may be able to gain valuable time in determining the accurate moment estimates and slip distributions needed for tsunami warning and other rapid response products. Large megathrust earthquakes are among the most destructive natural disasters in history but also pose a significant challenge for real-time analysis. The scales involved in such large earthquakes, with ruptures as long as a thousand kilometers and durations of several minutes, are formidable. There are still issues with rapid analysis at short timescales, i.e. minutes after the event, since many of the nearby seismic stations saturate due to the large ground motions. Also, on the seaward side of megathrust earthquakes, the nearest seismic stations are often thousands of kilometers away on oceanic islands. The deployment of DART buoys can fill this gap, since these instruments do not saturate and are located close in on the seaward side of the megathrusts. We are evaluating the use of DART-recorded Rayleigh waves by including them in the dataset used for Centroid Moment Tensor analyses, and by using the near-field DART stations to constrain source finiteness for megathrust earthquakes such as the recent Tohoku, Haida Gwaii and Chile earthquakes.

  3. Local near instantaneously dynamically triggered aftershocks of large earthquakes.

    PubMed

    Fan, Wenyuan; Shearer, Peter M

    2016-09-01

    Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by earthquakes with magnitudes ≥ 7 within a few fault lengths (approximately 300 kilometers), during times when high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks. PMID:27609887

  4. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models.

    PubMed

    Landes, François P; Lippiello, E

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics. PMID:27300821

  5. Scaling laws in earthquake occurrence: Disorder, viscosity, and finite size effects in Olami-Feder-Christensen models

    NASA Astrophysics Data System (ADS)

    Landes, François P.; Lippiello, E.

    2016-05-01

    The relation between seismic moment and fractured area is crucial to earthquake hazard analysis. Experimental catalogs show multiple scaling behaviors, with some controversy concerning the exponent value in the large earthquake regime. Here, we show that the original Olami, Feder, and Christensen model does not capture experimental findings. Taking into account heterogeneous friction, the viscoelastic nature of faults, together with finite size effects, we are able to reproduce the different scaling regimes of field observations. We provide an explanation for the origin of the two crossovers between scaling regimes, which are shown to be controlled both by the geometry and the bulk dynamics.
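    A minimal sketch of the original Olami-Feder-Christensen automaton that both of the entries above take as a starting point (uniform drive, nearest-neighbour redistribution, open boundaries); alpha, the lattice size and the seed are arbitrary, and the paper's heterogeneous-friction and viscoelastic ingredients are not included.

```python
import numpy as np

def ofc_avalanche(stress, alpha=0.2, threshold=1.0):
    """Drive the lattice to the next failure, relax it, return avalanche size."""
    stress += threshold - stress.max()      # uniform drive to the next failure
    size = 0
    while True:
        over = np.argwhere(stress >= threshold)
        if len(over) == 0:
            return size
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0              # failed site drops to zero
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s   # open boundaries lose stress

# alpha = 0.25 would be the conservative limit for 4 neighbours; alpha < 0.25
# makes the model dissipative, which keeps avalanches finite.
rng = np.random.default_rng(0)
grid = rng.uniform(0.0, 1.0, (32, 32))
sizes = [ofc_avalanche(grid) for _ in range(5000)]
print(max(sizes), sum(sizes) / len(sizes))
```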

  6. Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile

    NASA Astrophysics Data System (ADS)

    Lancieri, M.; Madariaga, R.; Bonilla, F.

    2012-04-01

    We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an inverse omega-square spectral decay; radiated energy is computed by integrating the squared velocity spectrum corrected for attenuation at high frequencies and for the finite bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of energy release rate, Gc, with slip. We estimate Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip, in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.
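    A sketch of the proportionality behind the radiated-energy step described above, Es ∝ ∫ |V(f)|² df; the spectrum and band are synthetic, and the attenuation and finite-bandwidth corrections the authors apply are omitted.

```python
import numpy as np

def relative_radiated_energy(freqs, vel_spec):
    """Trapezoidal integral of |V(f)|^2 df (relative units)."""
    y = np.asarray(vel_spec) ** 2
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs)))

# Synthetic omega-squared velocity spectrum with a 2 Hz corner frequency.
f = np.linspace(0.1, 50.0, 2000)
v = 2.0 * np.pi * f * 1e-6 / (1.0 + (f / 2.0) ** 2)
print(relative_radiated_energy(f, v))
```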

  7. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675

  8. Earthquake triggering by slow earthquake propagation: the case of the large 2014 slow slip event in Guerrero, Mexico.

    NASA Astrophysics Data System (ADS)

    Radiguet, M.; Perfettini, H.; Cotte, N.; Gualandi, A.; Kostoglodov, V.; Lhomme, T.; Walpersdorf, A.; Campillo, M.; Valette, B.

    2015-12-01

    Since their discovery nearly two decades ago, slow slip events (SSEs) have been shown to play an important role in strain accommodation in subduction zones. Nevertheless, the influence of slow aseismic slip on the nucleation of large earthquakes remains unclear. In this study, we focus on the Guerrero region of the Mexican subduction zone, where large SSEs have been observed since 1998, with a recurrence period of about 4 years, and produce aseismic slip in the Guerrero seismic gap. We investigate the large 2014 SSE (equivalent Mw=7.7), which initiated in early 2014 and lasted until the end of October 2014. During this time period, the 18 April Papanoa earthquake (Mw 7.2) occurred on the western limit of the Guerrero gap. We invert the continuous GPS time series using PCAIM (Principal Component Analysis Inversion Method) to assess the space and time evolution of slip on the subduction interface. To focus on the aseismic processes, we correct the cGPS time series for the co-seismic offsets. Our results show that the slow slip event initiated in the Guerrero gap region, as already observed during previous SSEs. The Mw 7.2 Papanoa earthquake occurred on the western limit of the region that was slipping aseismically before the earthquake. After the Papanoa earthquake, the aseismic slip rate increased. This geodetic signal consists of both the ongoing SSE and the postseismic (afterslip) response to the Papanoa earthquake. The majority of the post-earthquake aseismic slip is concentrated downdip from the main earthquake asperity, but significant slip is also observed in the Guerrero gap region. Compared to previous SSEs in that region, the 2014 SSE produced larger aseismic slip, and the maximum slip is located downdip from the main brittle asperity corresponding to the Papanoa earthquake, a region that was not identified as active during the previous SSEs. Since the Mw 7.2 Papanoa earthquake occurred about 2 months after the onset of the

  9. Source Scaling and Ground Motion of the 2008 Wells, Nevada, earthquake sequence

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Dreger, D. S.; Mayeda, K. M.; Walter, W. R.

    2011-12-01

    Dynamic source parameters, such as corner frequency, stress drop, and radiated energy, are among the most critical factors controlling ground motions at higher frequencies (generally greater than 1 Hz), which may cause damage to nearby surface structures. Hence, the scaling relations of these parameters can play an important role in assessing the seismic hazard for regions in which records of ground motions from potentially damaging earthquakes are not available. On February 21, 2008 at 14:16 (UTC), a magnitude 6 earthquake occurred near Wells, Nevada, an area characterized by a low rate of seismicity. For the aftershocks, a marked discrepancy between observed ground motions and those predicted by empirical ground motion prediction equations was reported (Petersen et al., 2011). To evaluate and understand these observed ground motions, we investigate the dynamic source parameters and their scaling relation for this earthquake sequence. We estimate the source parameters of the earthquakes using the coda spectral ratio method (Mayeda et al., 2007) and examine the estimates against the observed spectral accelerations at higher frequencies. From the derived source parameters and scaling relation, we compute synthetic ground motions of the earthquakes using a fractal composite source model (e.g., Zeng et al., 1994) and compare these with the observed ground motions and with synthetics obtained from a self-similar source scaling relation. In our preliminary results, we find the stress drops of the aftershocks are systematically 2-5 times lower than the stress drop of the mainshock. This agrees well with the systematic overestimation of the predicted ground motions for the aftershocks. The simulated ground motions from the coda-derived scaling relation explain both the observed weak and strong ground motions better than those from the size-independent stress-drop scaling relation. Assuming that the scale dependent stress drop is real, at least in some
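    A sketch of how a stress-drop difference maps into corner frequency under the Brune (1970) relation fc = 0.49 β (Δσ/M0)^(1/3) in SI units; the shear-wave speed and moment below are assumed values, not Wells-sequence estimates.

```python
def brune_corner_frequency(M0, stress_drop, beta=3500.0):
    """fc in Hz from M0 (N m), stress drop (Pa), shear speed beta (m/s)."""
    return 0.49 * beta * (stress_drop / M0) ** (1.0 / 3.0)

# Illustrative: at fixed moment, a 5x lower stress drop lowers fc by
# 5^(1/3) ~ 1.7, which reduces high-frequency ground motion.
M0 = 1.0e17  # N m, assumed aftershock-size moment
for dsigma in (1.0e6, 5.0e6):  # 1 MPa vs 5 MPa
    print(dsigma, round(brune_corner_frequency(M0, dsigma), 2))
```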

  10. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  11. A bilinear source-scaling model for M-log a observations of continental earthquakes

    USGS Publications Warehouse

    Hanks, T.C.; Bakun, W.H.

    2002-01-01

    The Wells and Coppersmith (1994) M-log A data set for continental earthquakes (where M is moment magnitude and A is fault area) and the regression lines derived from it are widely used in seismic hazard analysis for estimating M, given A. Their relations are well determined, whether for the full data set of all mechanism types or for the subset of strike-slip earthquakes. Because the coefficient of the log A term is essentially 1 in both their relations, they are equivalent to constant stress-drop scaling, at least for M ≤ 7, where most of the data lie. For M > 7, however, both relations increasingly underestimate the observations with increasing M. This feature, at least for strike-slip earthquakes, is strongly suggestive of L-model scaling at large M. Using constant stress-drop scaling (Δσ = 26.7 bars) for M ≤ 6.63 and L-model scaling (average fault slip ū = αL, where L is fault length and α = 2.19 × 10⁻⁵) at larger M, we obtain the relations M = log A + 3.98 ± 0.03 for A ≤ 537 km² and M = 4/3 log A + 3.07 ± 0.04 for A > 537 km². These prediction equations of our bilinear model fit the Wells and Coppersmith (1994) data set well in their respective ranges of validity, the transition magnitude corresponding to A = 537 km² being M = 6.71.
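    The two branches quoted above transcribe directly into a function; the only inputs are the published coefficients.

```python
import math

def moment_magnitude_from_area(A_km2):
    """Bilinear M(A) quoted above (Hanks and Bakun, 2002)."""
    if A_km2 <= 537.0:
        return math.log10(A_km2) + 3.98
    return (4.0 / 3.0) * math.log10(A_km2) + 3.07

print(round(moment_magnitude_from_area(537.0), 2))   # 6.71, the transition
print(round(moment_magnitude_from_area(2000.0), 2))  # large-area branch
```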

  12. A scaling relationship between AE and natural earthquakes

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, N.; Kawakata, H.; Takahashi, N.

    2013-12-01

    seismic moments and the corner frequencies by grid search. The magnitudes of the AE events were estimated to be between -8 and -7. As a result, the relationship between the seismic moment and the corner frequency of AE also satisfied the same scaling relationship as shown for natural earthquakes. This indicates that AE events in rock samples can be regarded as micro-sized earthquakes. This finding suggests the possibility of understanding the preparatory processes of natural earthquakes through laboratory experiments.

  13. Unusual behaviour of cows prior to a large earthquake

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano; Freund, Friedemann; Grant, Rachel

    2013-04-01

    Unusual behaviour of domestic cattle before earthquakes has been reported for centuries, and often relates to cattle becoming excited, vocal, aggressive or attempting to break free of tethers and restraints. Cattle have also been reported to move to higher or lower ground before earthquakes. Here, we report unusual movements of domestic cows 2 days prior to the Marche-Umbria (M=6) earthquake in 1997. Cows moved down from their usual summer pastures in the hills and were seen in the streets of a nearby town, a highly unusual occurrence. We discuss this in the context of positive holes and air ionisation as proposed by Freund's unified theory of earthquake precursors.

  14. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe.

    PubMed

    duPont, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the 'permanent' socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual--i.e., the Kobe economy without the earthquake--we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. Such a negative impact can be found especially in the central areas which are closer to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, that is located immediately to the East of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998

  15. Nucleation of Laboratory Earthquakes: Observation, Characterization, and Scaling up to the Natural Earthquakes Dimensions

    NASA Astrophysics Data System (ADS)

    Latour, S.; Schubnel, A.; Nielsen, S. B.; Madariaga, R. I.; Vinciguerra, S.

    2013-12-01

    In this work we observe the nucleation phase of in-plane ruptures in the laboratory and characterize its dynamics. We use a laboratory toy model in which mode II shear ruptures are produced on a pre-cut fault in a plate of polycarbonate. The fault is cut at the critical angle that allows stick-slip behavior under uniaxial loading, so the ruptures nucleate naturally. The material is birefringent under stress, so rupture propagation can be followed by ultra-high-speed photoelasticity. A network of acoustic sensors and accelerometers deployed on the plate measures the radiated wavefield and records laboratory near-field accelerograms. The far-field stress level is also measured using strain gages. We show that nucleation consists of two distinct phases, a quasi-static and an acceleration stage, followed by dynamic propagation. We propose an empirical model that describes the evolution of rupture length: the quasi-static phase is described by exponential growth, while the acceleration phase is described by an inverse power law of time. The transition from quasi-static to accelerating rupture is related to the critical nucleation length, which scales inversely with normal stress in accordance with theoretical predictions, and to a critical power per unit fault area, which may be an intrinsic property of the interface. Finally, we discuss these results in the frame of previous studies and propose a scaling up to natural earthquake dimensions. (Figure: three spontaneously nucleated laboratory earthquakes at increasingly higher normal pre-stresses, visualized by photoelasticity; the red curves highlight the position of the rupture tips as a function of time.)
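    A hedged sketch of the two-stage empirical growth law described above, with continuity imposed at the critical length; every parameter value below is a placeholder, not one of the authors' fits.

```python
import math

# Placeholders: initial length L0, critical length Lc reached at t_c,
# dynamic rupture at t_f, power-law exponent p.
L0, Lc, t_c, t_f, p = 1e-3, 1e-2, 1.0, 1.2, 1.0
TAU = t_c / math.log(Lc / L0)   # exponential time scale; continuity at t_c

def rupture_length(t):
    if t < t_c:
        return L0 * math.exp(t / TAU)                 # quasi-static stage
    return Lc * ((t_f - t_c) / (t_f - t)) ** p        # accelerating stage

for t in (0.5, 1.0, 1.15, 1.19):                      # t must stay below t_f
    print(t, rupture_length(t))
```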

  16. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  17. A search for long-term periodicities in large earthquakes of southern and coastal central California

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1990-01-01

    It has been occasionally suggested that large earthquakes may follow the 8.85-year and 18.6-year lunar-solar tidal cycles and possibly the approximately 11-year solar activity cycle. From a new study of earthquakes with magnitudes greater than 5.5 in southern and coastal central California during the years 1855-1983, it is concluded that, at least in this selected area of the world, no statistically significant long-term periodicities in earthquake frequency occur. The sample size used is about twice that used in comparable earlier studies of this region, which concentrated on large earthquakes.

  18. Dike intrusions during rifting episodes obey scaling relationships similar to earthquakes

    NASA Astrophysics Data System (ADS)

    Passarelli, Luigi; Rivalta, Eleonora; Shuler, Ashley

    2014-05-01

    Rifting episodes accommodate the relative motion of mature divergent plate boundaries with sequences of magma-filled dikes that compensate for the missing volume due to crustal splitting. Two major rifting episodes have been recorded since modern monitoring techniques became available: the 1975-1984 Krafla (Iceland) and the 2005-2010 Manda-Hararo (Ethiopia) dike sequences. The statistical properties of the frequency of dike intrusions during rifting have never been investigated in detail, but it has been suggested that they may resemble earthquake mainshock-aftershock sequences; for example, they start with a large intrusion followed by several events of smaller magnitude. The scaling relationships of earthquakes have, by contrast, been widely investigated: earthquake sizes have been found to follow a power law, the Gutenberg-Richter relation, from local to global scale, while the decay of aftershocks with time follows the Omori law. These statistical laws for earthquakes are the basis for hazard evaluation, and the physical mechanisms behind them are the object of wide interest and debate. Here we investigate in detail the statistics of dikes from the Krafla and Manda-Hararo rifting episodes, including their frequency-magnitude distribution, the release of geodetic moment in time, and the correlation between interevent times and intruded volumes. We find that the dimensions of dike intrusions obey a power law analogous to the Gutenberg-Richter relation, the long-term release of geodetic moment is governed by a relationship consistent with the Omori law, and the intrusions are roughly time-predictable. Magma availability, however, affects the timing of secondary dike intrusions: such timing is longer after large-volume intrusions, contrary to aftershock sequences, in which interevent times shorten after large events.
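    A minimal sketch of the modified Omori decay named above; K, c and p are generic placeholders, not values fitted to the Krafla or Manda-Hararo sequences.

```python
def omori_rate(t, K=10.0, c=0.5, p=1.0):
    """Modified Omori rate K / (c + t)^p (events, or moment, per unit time)."""
    return K / (c + t) ** p

# Rate decays roughly as 1/t for p = 1 once t >> c.
print([round(omori_rate(t), 3) for t in (0.0, 1.0, 10.0, 100.0)])
```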

  19. Oceanic transform fault earthquake nucleation process and source scaling relations - A numerical modeling study with rate-state friction (Invited)

    NASA Astrophysics Data System (ADS)

    Liu, Y.; McGuire, J. J.; Behn, M. D.

    2013-12-01

    We use a three-dimensional strike-slip fault model in the framework of rate- and state-dependent friction to investigate earthquake behavior and scaling relations on oceanic transform faults (OTFs). Gabbro friction data under hydrothermal conditions are mapped onto OTFs using temperatures from (1) a half-space cooling model, and (2) a thermal model that incorporates a visco-plastic rheology, non-Newtonian viscous flow, and the effects of shear heating and hydrothermal circulation. Without introducing small-scale frictional heterogeneities on the fault, our model predicts that an OTF segment can transition between seismic and aseismic slip over many earthquake cycles, consistent with the multimode hypothesis for OTF ruptures. The average seismic coupling coefficient χ is strongly dependent on the ratio of seismogenic zone width W to earthquake nucleation size h*; χ increases by four orders of magnitude as W/h* increases from ~1 to 2. Specifically, the average χ = 0.15 ± 0.05 derived from global OTF earthquake catalogs is reached at W/h* ≈ 1.2-1.7. The modeled largest earthquake rupture area is less than the total seismogenic area, and we predict a deficiency of large earthquakes on long transforms, which is also consistent with observations. Earthquake magnitudes and their distribution on the Gofar (East Pacific Rise) and Romanche (equatorial Mid-Atlantic) transforms are better predicted using the visco-plastic model than the half-space cooling model. We will also investigate how fault gouge porosity variation during an OTF earthquake nucleation phase may affect the seismic wave velocity structure, in which a drop of up to 3% was observed prior to the 2008 Mw 6.0 Gofar earthquake.
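    A minimal sketch of the rate- and state-dependent friction framework named above, with the aging-law state variable; the parameter values are generic laboratory-scale numbers, not the paper's gabbro data, and only steady-state friction is evaluated.

```python
import numpy as np

# mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/Dc), with the aging law
# d(theta)/dt = 1 - v*theta/Dc; b > a gives velocity weakening.
a, b, mu0, v0, Dc = 0.010, 0.015, 0.6, 1.0e-6, 1.0e-3

def friction(v, theta):
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / Dc)

# At steady state theta = Dc/v, so mu_ss = mu0 + (a - b)*ln(v/v0): friction
# drops as slip speed rises, the condition for earthquake nucleation.
for v in (1e-7, 1e-6, 1e-5):
    print(v, round(float(friction(v, Dc / v)), 4))
```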

  20. From a physical approach to earthquake prediction, towards long and short term warnings ahead of large earthquakes

    NASA Astrophysics Data System (ADS)

    Stefansson, R.; Bonafede, M.

    2012-04-01

    For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and further references, as well as examples of partially successful long- and short-term warnings based on this approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes vary significantly in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions on both long and short timescales. The preparatory processes of different earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach to earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  3. Very short-term earthquake precursors from GPS signal interference: Case studies on moderate and large earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, Yu-Lien; Cheng, Kai-Chien; Wang, Wei-Hau; Yu, Shui-Beih

    2016-04-01

    We set up a GPS network with 17 Continuous GPS (CGPS) stations in southwestern Taiwan to monitor real-time crustal deformation. We found that systematic perturbations in GPS signals occurred just a few minutes prior to the occurrence of several moderate and large earthquakes, including the recent 2013 Nantou (ML = 6.5) and Rueisuei (ML = 6.4) earthquakes in Taiwan. The anomalous pseudorange readings were several millimeters higher or lower than those in the background time period. These systematic anomalies were found to result from interference of GPS L-band signals by electromagnetic emissions (EMs) prior to the mainshocks. The EMs may occur in the form of harmonic or ultra-wide-band radiation and can be generated during the formation of Mode I cracks at the final stage of earthquake nucleation. We estimated the directivity of the likely EM sources by calculating the inner product of the position vector from a GPS station to a given satellite and the vector of anomalous ground motions recorded by the GPS. The results showed that the inner product was generally largest when the satellite was in the direction either toward or away from the epicenter with respect to the GPS network. Our findings suggest that the GPS network may serve as a powerful tool to detect very short-term earthquake precursors and presumably to locate a large earthquake before it occurs.
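    A sketch of the directivity test described above: project the anomalous-motion vector onto the station-to-satellite unit vector. The vectors below are made-up examples, not data from the Taiwan network.

```python
import numpy as np

def directivity_inner_product(station_to_satellite, anomaly):
    """Projection of the anomalous-motion vector on the line of sight."""
    u = np.asarray(station_to_satellite, dtype=float)
    u /= np.linalg.norm(u)
    return float(np.dot(u, np.asarray(anomaly, dtype=float)))

# Made-up example: line of sight mostly horizontal, anomaly of a few mm.
print(directivity_inner_product([0.6, 0.8, 0.0], [2e-3, 1e-3, 0.0]))
```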

  4. Lotung large-scale seismic test strong motion records

    SciTech Connect

    Not Available

    1992-03-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4 scale and 1/12 scale) of a nuclear plant concrete containment structure at a seismically active site in Lotung, Taiwan. Extensive instrumentation was deployed to record both structural and ground responses during earthquakes. The experiment, generally referred to as the Lotung Large-Scale Seismic Test (LSST), was used to gather data for soil-structure interaction (SSI) analysis method evaluation and validation as well as for site ground response investigation. A number of earthquakes having local magnitudes ranging from 4.5 to 7.0 have been recorded at the LSST site since the completion of the test facility in September 1985. This report documents the earthquake data, both raw and processed, collected from the LSST experiment. Volume 1 of the report provides general information on site location, instrument types and layout, data acquisition and processing, and data file organization. The recorded data are described chronologically in subsequent volumes of the report.

  5. New model on the relations between surface uplift and erosion caused by large, compressional earthquakes

    NASA Astrophysics Data System (ADS)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2015-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that, above a critical magnitude, an earthquake would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, have not yet been considered. A new, seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. To assess the earthquake mass balance for various scenarios, we compared the expected eroded volume with the co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the dominant parameters compared to fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. We explored the long-term evolution of topography under seismic forcing, with a Gutenberg-Richter distribution or a characteristic earthquake model, on fault systems with different geometries and tectonic styles, such as transpressive or flat-and-ramp geometry, with a thinned or thickened seismogenic layer.

  6. The 2002 Denali fault earthquake, Alaska: A large magnitude, slip-partitioned event

    USGS Publications Warehouse

    Eberhart-Phillips, D.; Haeussler, P.J.; Freymueller, J.T.; Frankel, A.D.; Rubin, C.M.; Craw, P.; Ratchkovski, N.A.; Anderson, G.; Carver, G.A.; Crone, A.J.; Dawson, T.E.; Fletcher, H.; Hansen, R.; Harp, E.L.; Harris, R.A.; Hill, D.P.; Hreinsdottir, S.; Jibson, R.W.; Jones, L.M.; Kayen, R.; Keefer, D.K.; Larsen, C.F.; Moran, S.C.; Personius, S.F.; Plafker, G.; Sherrod, B.; Sieh, K.; Sitar, N.; Wallace, W.K.

    2003-01-01

    The MW (moment magnitude) 7.9 Denali fault earthquake on 3 November 2002 was associated with 340 kilometers of surface rupture and was the largest strike-slip earthquake in North America in almost 150 years. It illuminates earthquake mechanics and hazards of large strike-slip faults. It began with thrusting on the previously unrecognized Susitna Glacier fault, continued with right-slip on the Denali fault, then took a right step and continued with right-slip on the Totschunda fault. There is good correlation between geologically observed and geophysically inferred moment release. The earthquake produced unusually strong distal effects in the rupture propagation direction, including triggered seismicity.

  7. Occurrences of large-magnitude earthquakes in the Kachchh region, Gujarat, western India: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Khan, Prosanta Kumar; Mohanty, Sarada Prasad; Sinha, Sushmita; Singh, Dhananjay

    2016-06-01

    Moderate-to-large damaging earthquakes in the peninsular part of the Indian plate do not support the long-standing belief in the seismic stability of this region. The historical record shows that about 15 damaging earthquakes with magnitudes from 5.5 to ~ 8.0 occurred in the Indian peninsula. Most of these events were associated with the old rift systems. Our analysis of the 2001 Bhuj earthquake and its 12-year aftershock sequence indicates a seismic zone bounded by two linear trends (NNW and NNE) that intersect an E-W-trending graben. The Bouguer gravity values near the epicentre of the Bhuj earthquake are relatively low (~ 2 mgal). The gravity anomaly maps, the distribution of earthquake epicentres, and the crustal strain-rate patterns indicate that the 2001 Bhuj earthquake occurred along a fault within strain-hardened mid-crustal rocks. The collision resistance between the Indian plate and the Eurasian plate along the Himalayas and the anticlockwise rotation of the Indian plate provide the far-field stresses that concentrate within a fault-bounded block close to the western margin of the Indian plate and are periodically released during earthquakes, such as the 2001 MW 7.7 Bhuj earthquake. We propose that the moderate-to-large magnitude earthquakes in the deeper crust in this area occur along faults associated with old rift systems that are reactivated in a strain-hardened environment.

  8. The Long-Run Socio-Economic Consequences of a Large Disaster: The 1995 Earthquake in Kobe

    PubMed Central

    duPont IV, William; Noy, Ilan; Okuyama, Yoko; Sawada, Yasuyuki

    2015-01-01

    We quantify the ‘permanent’ socio-economic impacts of the Great Hanshin-Awaji (Kobe) earthquake in 1995 by employing a large-scale panel dataset of 1,719 cities, towns, and wards from Japan over three decades. In order to estimate the counterfactual—i.e., the Kobe economy without the earthquake—we use the synthetic control method. Three important empirical patterns emerge: First, the population size and especially the average income level in Kobe have been lower than the counterfactual level without the earthquake for over fifteen years, indicating a permanent negative effect of the earthquake. This negative impact is found especially in the central areas closest to the epicenter. Second, the surrounding areas experienced some positive permanent impacts in spite of short-run negative effects of the earthquake. Much of this is associated with movement of people to East Kobe, and consequent movement of jobs to the metropolitan center of Osaka, which lies immediately to the east of Kobe. Third, the furthest areas in the vicinity of Kobe seem to have been insulated from the large direct and indirect impacts of the earthquake. PMID:26426998
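
    The synthetic control step mentioned above is, at its core, a constrained least-squares problem: find non-negative weights summing to one such that a weighted combination of untreated "donor" cities tracks pre-earthquake Kobe, then read the post-earthquake weighted series as the counterfactual. A minimal sketch with synthetic data (not the study's panel):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T_pre, n_donors = 20, 8                       # pre-quake years, donor cities
    donors = rng.normal(100.0, 5.0, (T_pre, n_donors)).cumsum(axis=0) / 10 + 100
    true_w = np.array([0.5, 0.3, 0.2] + [0.0] * (n_donors - 3))
    treated = donors @ true_w + rng.normal(0.0, 0.2, T_pre)

    def loss(w):                                  # pre-period mismatch
        return np.sum((treated - donors @ w) ** 2)

    res = minimize(loss, np.full(n_donors, 1.0 / n_donors),
                   bounds=[(0.0, 1.0)] * n_donors,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
                   method='SLSQP')
    print(np.round(res.x, 3))                     # recovered donor weights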

  9. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  10. Global Omori law decay of triggered earthquakes: large aftershocks outside the classical aftershock zone

    USGS Publications Warehouse

    Parsons, Tom

    2002-01-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ∼39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ∼7–11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.

  11. Global Omori law decay of triggered earthquakes: Large aftershocks outside the classical aftershock zone

    NASA Astrophysics Data System (ADS)

    Parsons, Tom

    2002-09-01

    Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ˜39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ˜7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
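
    The Omori-law behaviour reported here can be checked on any event list by maximum-likelihood fitting of the modified Omori rate n(t) = K / (t + c)^p. The sketch below fits synthetic occurrence times (not the CMT catalog) simulated by thinning; all parameter values are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    K_true, c_true, p_true, T = 20.0, 1.0, 1.1, 3650.0     # days, ~10 yr window

    # simulate triggered-event times by thinning a homogeneous Poisson process
    lam_max = K_true / c_true ** p_true                     # rate peaks at t = 0
    cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
    accept = rng.uniform(0.0, 1.0, cand.size) < (
        K_true / (cand + c_true) ** p_true) / lam_max
    times = cand[accept]

    def neg_loglik(theta):
        logK, logc, p = theta
        K, c = np.exp(logK), np.exp(logc)
        rate = K / (times + c) ** p
        if abs(p - 1.0) < 1e-9:                   # integral of the rate on [0, T]
            integral = K * (np.log(T + c) - np.log(c))
        else:
            integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
        return integral - np.sum(np.log(rate))    # negative log-likelihood

    res = minimize(neg_loglik, x0=[np.log(10.0), np.log(0.5), 1.2],
                   method='Nelder-Mead')
    print("K, c, p =", np.exp(res.x[0]), np.exp(res.x[1]), res.x[2])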

  12. Scaling and critical phenomena in a cellular automaton slider-block model for earthquakes

    SciTech Connect

    Rundle, J.B.; Klein, W.

    1993-07-01

    The dynamics of a general class of two-dimensional cellular automaton slider-block models of earthquake faults is studied as a function of the failure rules that determine slip and the nature of the failure threshold. Scaling properties of clusters of failed sites imply the existence of a mean-field spinodal line in systems with spatially random failure thresholds, whereas spatially uniform failure thresholds produce behavior reminiscent of self-organized critical behavior. This model can describe several classes of faults, ranging from those that only exhibit creep to those that produce large events. 16 refs., 4 figs.
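
    A minimal cellular-automaton slider-block model in the spirit of this class can be written in a few lines: uniform loading until one site reaches its failure threshold, then redistribution of released stress to nearest neighbours until no site is above threshold. The rules and parameters below are generic illustrations, not the specific failure rules analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    N, alpha, n_events = 32, 0.2, 2000         # grid, neighbour coupling, events
    threshold = 1.0                             # spatially uniform failure threshold
    stress = rng.uniform(0.0, 1.0, (N, N))
    sizes = []

    for _ in range(n_events):
        stress += threshold - stress.max()      # load until one site fails
        unstable = stress >= threshold
        size = 0
        while unstable.any():                   # avalanche of failing sites
            size += int(unstable.sum())
            release = np.where(unstable, stress, 0.0)
            stress[unstable] = 0.0
            # each of the 4 neighbours receives a fraction alpha (periodic BC)
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                stress += alpha * np.roll(release, shift, axis=(0, 1))
            unstable = stress >= threshold
        sizes.append(size)

    sizes = np.asarray(sizes)
    print("mean event size:", sizes.mean(), " largest:", sizes.max())

    With alpha < 0.25 the redistribution is dissipative and every avalanche terminates; histograms of the recorded cluster sizes are the natural starting point for the kind of scaling analysis the abstract describes.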

  13. The quest for better quality-of-life - learning from large-scale shaking table tests

    NASA Astrophysics Data System (ADS)

    Nakashima, M.; Sato, E.; Nagae, T.; Kunio, F.; Takahito, I.

    2010-12-01

    Earthquake engineering has its origins in the practice of “learning from actual earthquakes and earthquake damage.” That is, we recognize serious problems by witnessing the actual damage to our structures, and then we develop and apply engineering solutions to solve these problems. This tradition in earthquake engineering, i.e., “learning from actual damage,” was an obvious engineering response to earthquakes and arose naturally as a practice in a civil and building engineering discipline that traditionally places more emphasis on experience than do other engineering disciplines. But with the rapid progress of urbanization, as society becomes denser, and as the many components that form our society interact with increasing complexity, the potential damage with which earthquakes threaten the society also increases. In such an era, the approach of “learning from actual earthquake damage” becomes unacceptably dangerous and expensive. Among the practical alternatives to the old practice is to “learn from quasi-actual earthquake damage.” One tool for experiencing earthquake damage without attendant catastrophe is the large shaking table. E-Defense, the largest such table, was developed in Japan after the 1995 Hyogoken-Nanbu (Kobe) earthquake. Since its inauguration in 2005, E-Defense has conducted over forty full-scale or large-scale shaking table tests, applied to a variety of structural systems. The tests supply detailed data on the actual behavior and collapse of the tested structures, offering the earthquake engineering community opportunities to experience and assess the actual seismic performance of the structures, and to help society prepare for earthquakes. Notably, the data were obtained without having to wait for the aftermaths of actual earthquakes. Earthquake engineering has always been about life safety, but in recent years maintaining the quality of life has also become a critical issue. Quality-of-life concerns include nonstructural

  14. The characteristic of the building damage from historical large earthquakes in Kyoto

    NASA Astrophysics Data System (ADS)

    Nishiyama, Akihito

    2016-04-01

    The city of Kyoto, located in the northern part of the Kyoto basin in Japan, has a long history of more than 1,200 years since it was first constructed. The city has been a populated area with many buildings and the center of politics, economy and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as World Cultural Heritage sites. Kyoto has experienced six damaging large earthquakes during the historical period: in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as the years elapsed since construction or last repair, in order to estimate the seismic intensity distribution of historical earthquakes in Kyoto more accurately and reliably. The resulting seismic intensity map would be helpful for reducing and mitigating disaster from future large earthquakes.

  15. Some Considerations on a Large Landslide at the Left Bank of the Aratozawa Dam Caused by the 2008 Iwate-Miyagi Intraplate Earthquake

    NASA Astrophysics Data System (ADS)

    Aydan, Ömer

    2016-06-01

    The scale and impact of rock slope failures can be very large, and the form of failure differs depending upon the geological structure of the slope. The 2008 Iwate-Miyagi intraplate earthquake induced many large-scale slope failures, despite the magnitude of the earthquake being of intermediate scale. Among these large-scale slope failures, the landslide at the left bank of the Aratozawa Dam site is of great interest to specialists of rock mechanics and rock engineering. Although the slope failure was of planar type, the direction of sliding was fortunately towards the sub-valley, so that the landslide did not cause a great tsunami-like motion of the reservoir fluid. In this study, the author attempts to describe the characteristics of the landslide, and the strong motion and permanent ground displacement induced by the 2008 Iwate-Miyagi intraplate earthquake, which had great effects on the triggering and evolution of the landslide.

  16. Quiet zone within a seismic gap near western Nicaragua: Possible location of a future large earthquake

    USGS Publications Warehouse

    Harlow, D.H.; White, R.A.; Cifuentes, I.L.; Aburto, Q.A.

    1981-01-01

    A 5700-square-kilometer quiet zone occurs in the midst of the locations of more than 4000 earthquakes off the Pacific coast of Nicaragua. The region is indicated by the seismic gap technique to be a likely location for an earthquake of magnitude larger than 7. The quiet zone has existed since at least 1950; the last large earthquake originating from this area occurred in 1898 and was of magnitude 7.5. A rough estimate indicates that the magnitude of an earthquake rupturing the entire quiet zone could be as large as that of the 1898 event. It is not yet possible to forecast a time frame for the occurrence of such an earthquake in the quiet zone. Copyright © 1981 AAAS.

  17. Gravity Wave Disturbances in the F-Region Ionosphere Above Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Bruff, Margie

    The direction of propagation, duration and wavelength of gravity waves in the ionosphere above large earthquakes were studied using data from the Super Dual Auroral Radar Network. Ground scatter data were plotted versus range and time to identify gravity waves as alternating focused and de-focused regions of radar power in wave-like patterns. The wave patterns before and after earthquakes were analyzed to determine the directions of propagation and wavelengths. Conditions were considered 48 hours before and after each identified disturbance to exclude waves caused by geomagnetic activity. Gravity waves were found travelling away from the epicenter before all six earthquakes for which data were available and after four of the six earthquakes. Gravity waves travelled in at least two directions away from the epicenter in all cases, and even stronger patterns were found for two earthquakes. Waves appeared, on average, 4 days before earthquakes, persisting 2-3 hours, and 1-2 days after, persisting 4-6 hours. Most wavelengths were between 200-300 km. We show a possible correlation between the magnitude and depth of earthquakes and gravity wave patterns, but study of more earthquakes is required. This study provides a better understanding of the causes of ionospheric gravity wave disturbances and has potential applications for predicting earthquakes.

  18. In Japan, seismic waves slower after rain, large earthquakes

    NASA Astrophysics Data System (ADS)

    Schultz, Colin

    2012-03-01

    An earthquake is first detected by the abrupt jolt of a passing primary wave. Lagging only slightly behind are shear waves, which radiate out from the earthquake's epicenter and are seen at the surface as a rolling wave of vertical motion. Also known as secondary or S waves, shear waves cause the lifting and twisting motions that are particularly effective at collapsing surface structures. With their capacity to cause damage, making sense of anything that can influence shear wave velocities is important from both theoretical and engineering perspectives.

  19. The AD 365 earthquake: high resolution tsunami inundation for Crete and full scale simulation exercise

    NASA Astrophysics Data System (ADS)

    Kalligeris, N.; Flouri, E.; Okal, E.; Synolakis, C.

    2012-04-01

    In the eastern Mediterranean, historical and archaeological records document major earthquake and tsunami events over the past 2000 years (Ambraseys and Synolakis, 2010). The 1200 km long Hellenic Arc has reportedly caused the strongest earthquakes and tsunamis in the region. Among them, the AD 365 and AD 1303 tsunamis have been extensively documented. They are likely due to ruptures of the Central and Eastern segments of the Hellenic Arc, respectively. Both events had widespread impact due to ground shaking, and triggered tsunami waves that reportedly affected the entire eastern Mediterranean. The AD 365 earthquake, located in western Crete, has recently been assigned a magnitude ranging from 8.3 to 8.5 by Shaw et al. (2008), using historical, sedimentological, geomorphic and archaeological evidence. Shaw et al. (2008) have inferred that such large earthquakes occur in the Arc every 600 to 800 years, the last known being the AD 1303 event. We report on a full-scale simulation exercise that took place in Crete on 24-25 October 2011, based on a scenario sufficiently large to overwhelm the emergency response capability of Greece, necessitating the invocation of the Monitoring and Information Centre (MIC) of the EU and triggering help from other nations. A repeat of the AD 365 earthquake would likely overwhelm the civil defense capacities of Greece. Immediately following the rupture initiation it would cause substantial damage even to well-designed reinforced concrete structures in Crete. Minutes after initiation, the tsunami generated by the rapid displacement of the ocean floor would strike nearby coastal areas, inundating great distances in areas of low topography. The objective of the exercise was to help managers plan search and rescue operations and identify measures useful for inclusion in the coastal resiliency index of Ewing and Synolakis (2011). For the scenario design, the tsunami hazard for the AD 365 event was assessed for

  20. Formulation and Application of a Physically-Based Rupture Probability Model for Large Earthquakes on Subduction Zones: A Case Study of Earthquakes on Nazca Plate

    NASA Astrophysics Data System (ADS)

    Mahdyiar, M.; Galgana, G.; Shen-Tu, B.; Klein, E.; Pontbriand, C. W.

    2014-12-01

    Most time-dependent rupture probability (TDRP) models are designed for single-mode rupture, i.e., a single characteristic earthquake on a fault. However, most subduction zones rupture in complex patterns that create overlapping earthquakes of different magnitudes. Additionally, the limited historical earthquake data do not provide sufficient information to estimate reliable mean recurrence intervals for earthquakes. This makes it difficult to identify a single characteristic earthquake for TDRP analysis. Physical models based on geodetic data have been successfully used to obtain information on the state of coupling and slip deficit rates for subduction zones. Coupling information provides valuable insight into the complexity of subduction zone rupture processes. In this study we present a TDRP model that is formulated based on the subduction zone slip deficit rate distribution. A subduction zone is represented by an integrated network of cells. Each cell ruptures multiple times from numerous earthquakes that have overlapping rupture areas. The rate of rupture for each cell is calculated using a moment balance concept that is calibrated against historical earthquake data. This information, in conjunction with estimates of coseismic slip from past earthquakes, is used to formulate time-dependent rupture probability models for cells. Earthquakes on the subduction zone and their rupture probabilities are calculated by integrating different combinations of cells. The resulting rupture probability estimates are fully consistent with the state of coupling of the subduction zone and the regional and local earthquake history, as the model takes into account the impact of all large (M>7.5) earthquakes on the subduction zone. The granular rupture model as developed in this study allows estimating rupture probabilities for large earthquakes other than just a single characteristic magnitude earthquake. This provides a general framework for formulating physically
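
    Once a mean recurrence interval is available for a cell or cell aggregate from such a moment balance, turning it into a time-dependent probability is a standard renewal-model step. The sketch below uses a lognormal renewal distribution purely as an illustrative assumption (the abstract does not specify the distribution), with made-up numbers.

    import numpy as np
    from scipy.stats import lognorm

    def conditional_prob(t_elapsed, dt, mean_ri, cov=0.5):
        """P(rupture in [t, t+dt] | no rupture by t) for a lognormal renewal
        model with mean recurrence mean_ri and coefficient of variation cov."""
        sigma = np.sqrt(np.log(1.0 + cov ** 2))
        mu = np.log(mean_ri) - 0.5 * sigma ** 2    # so the mean equals mean_ri
        dist = lognorm(s=sigma, scale=np.exp(mu))
        return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

    # e.g. mean recurrence 150 yr, 80 yr since the last rupture, next 30 yr
    print(f"{conditional_prob(80.0, 30.0, 150.0):.3f}")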

  1. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  2. [The dental professional action and aim in the struggle against a large earthquake].

    PubMed

    Li, Gang

    2008-06-01

    On May 12, 2008, a magnitude 8 earthquake struck eastern Sichuan Province in China. The quake could be felt as far away as Bangkok, Thailand, Taiwan, Vietnam, Shanghai, and Beijing. Officials say that at least 69,170 people may have been killed, and local reports indicate over 374,159 injured as of June 16, 2008. The response of dental professionals to the Sichuan earthquake is reported, and the actions and aims of the profession in the aftermath of a large earthquake are discussed. It is believed that dental professionals must make more specific contributions in quake-hit areas in the future via the supply of well-organized services. PMID:18661058

  3. Three-dimensional distribution of ionospheric anomalies prior to three large earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    He, Liming; Heki, Kosuke

    2016-07-01

    Using regional Global Positioning System (GPS) networks, we studied the three-dimensional spatial structure of ionospheric total electron content (TEC) anomalies preceding three recent large earthquakes in Chile, South America, i.e., the 2010 Maule (Mw 8.8), the 2014 Iquique (Mw 8.2), and the 2015 Illapel (Mw 8.3) earthquakes. Both positive and negative TEC anomalies, with areal extent dependent on the earthquake magnitudes, appeared simultaneously 20-40 min before the earthquakes. For the two midlatitude earthquakes (2010 Maule and 2015 Illapel), positive anomalies occurred to the north of the epicenters at altitudes of 150-250 km. The negative anomalies occurred farther to the north at higher altitudes of 200-500 km. The epicenter and the positive and negative anomalies thus align parallel to the local geomagnetic field, a structure typical of ionospheric anomalies occurring in response to positive surface electric charges.

  4. Preliminary investigation of some large landslides triggered by the 2008 Wenchuan earthquake, Sichuan Province, China

    USGS Publications Warehouse

    Wang, F.; Cheng, Q.; Highland, L.; Miyajima, M.; Wang, Hongfang; Yan, C.

    2009-01-01

    The Ms 8.0 Wenchuan earthquake or "Great Sichuan Earthquake" occurred at 14:28 local time on 12 May 2008 in Sichuan Province, China. Damage by earthquake-induced landslides was an important part of the total earthquake damage. This report presents preliminary observations on the Hongyan Resort slide located southwest of the main epicenter; shallow mountain surface failures in Xuankou village of Yingxiu Town; the Jiufengchun slide near Longmenshan Town; the Hongsong Hydro-power Station slide near Hongbai Town; the Xiaojiaqiao slide in Chaping Town; two landslides in Beichuan County-town which destroyed a large part of the town; and the Donghekou and Shibangou slides in Qingchuan County, which dammed the second-largest landslide lake formed in this earthquake. The influences of seismic, topographic, geologic, and hydro-geologic conditions are discussed. © 2009 Springer-Verlag.

  5. What controls the location where large earthquakes nucleate along the North Anatolian Fault ?

    NASA Astrophysics Data System (ADS)

    Bouchon, M.; Karabulut, H.; Schmittbuhl, J.; Durand, V.; Marsan, D.; Renard, F.

    2012-12-01

    We review several sets of observations which suggest that the location of the epicenters of the 1939-1999 sequence of large earthquakes along the NAF obeys some mechanical logic. The 1999 Izmit earthquake nucleated in a zone of localized crustal extension oriented N10E (Crampin et al., 1985; Evans et al., 1987), nearly orthogonal to the strike of the NAF, thus releasing the normal stress on the fault in the area and facilitating rupture nucleation. The 1999 Duzce epicenter, located about 25 km from the end of the Izmit rupture, is precisely near the start of a simple linear segment of the fault (Pucci et al., 2006) where supershear rupture occurred (Bouchon et al., 2001; Konca et al., 2010). Aftershock locations of the Izmit earthquake in the region (Gorgun et al., 2009) show that Duzce, at its start, was the first significant Izmit aftershock to occur on this simple segment. The rupture nucleated on the part of this simple segment which had been most loaded in Coulomb stress by the Izmit earthquake. Once rupture of this segment began, it seems logical that the whole segment would break, as its simple geometry suggests that no barrier was present to arrest rupture. Rupture of this segment, in turn, led to the rupture of adjacent segments. Like the Izmit earthquake, the 1943 Tosya and the 1944 Bolu-Gerede earthquakes nucleated near a zone of localized crustal extension. The long-range delayed triggering of extensional clusters observed after the Izmit/Duzce earthquakes (Durand et al., 2010) suggests a possible long-range delayed triggering of the 1943 shock by the 1942 Niksar earthquake. The 1942, 1957 Abant and 1967 Mudurnu earthquake nucleation locations further suggest that, as observed for the Duzce earthquake, previous earthquake ruptures stopped when encountering geometrically complex segments and subsequent events nucleated just past these segments.

  6. The Diversity of Large Earthquakes and Its Implications for Hazard Mitigation

    NASA Astrophysics Data System (ADS)

    Kanamori, Hiroo

    2014-05-01

    With the advent of broadband seismology and GPS, significant diversity in the source radiation spectra of large earthquakes has been clearly demonstrated. This diversity requires different approaches to mitigate hazards. In certain tectonic environments, seismologists can forecast the future occurrence of large earthquakes within a solid scientific framework using the results from seismology and GPS. Such forecasts are critically important for long-term hazard mitigation practices, but because stochastic fracture processes are complex, the forecasts are inevitably subject to large uncertainty, and unexpected events will continue to surprise seismologists. Recent developments in real-time seismology will help seismologists to cope with and prepare for tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.

  7. Seismic gaps and source zones of recent large earthquakes in coastal Peru

    USGS Publications Warehouse

    Dewey, J.W.; Spence, W.

    1979-01-01

    The earthquakes of central coastal Peru occur principally in two distinct zones of shallow earthquake activity that are inland of and parallel to the axis of the Peru Trench. The interface-thrust (IT) zone includes the great thrust-fault earthquakes of 17 October 1966 and 3 October 1974. The coastal-plate interior (CPI) zone includes the great earthquake of 31 May 1970, and is located about 50 km inland of and 30 km deeper than the interface thrust zone. The occurrence of a large earthquake in one zone may not relieve elastic strain in the adjoining zone, thus complicating the application of the seismic gap concept to central coastal Peru. However, recognition of two seismic zones may facilitate detection of seismicity precursory to a large earthquake in a given zone; removal of probable CPI-zone earthquakes from plots of seismicity prior to the 1974 main shock dramatically emphasizes the high seismic activity near the rupture zone of that earthquake in the five years preceding the main shock. Other conclusions on the seismicity of coastal Peru that affect the application of the seismic gap concept to this region are: (1) Aftershocks of the great earthquakes of 1966, 1970, and 1974 occurred in spatially separated clusters. Some clusters may represent distinct small source regions triggered by the main shock rather than delimiting the total extent of main-shock rupture. The uncertainty in the interpretation of aftershock clusters results in corresponding uncertainties in estimates of stress drop and estimates of the dimensions of the seismic gap that has been filled by a major earthquake. (2) Aftershocks of the great thrust-fault earthquakes of 1966 and 1974 generally did not extend seaward as far as the Peru Trench. (3) None of the three great earthquakes produced significant teleseismic activity in the following month in the source regions of the other two earthquakes. The earthquake hypocenters that form the basis of this study were relocated using station

  8. Earthquake!

    ERIC Educational Resources Information Center

    Hernandez, Hildo

    2000-01-01

    Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

  9. Benefits of Earthquake Early Warning to Large Municipalities (Invited)

    NASA Astrophysics Data System (ADS)

    Featherstone, J.

    2013-12-01

    The City of Los Angeles has been involved in testing the Caltech ShakeAlert earthquake early warning (EQEW) system since February 2012. This system accesses a network of seismic monitors installed throughout California, analyzes and processes seismic information, and transmits a warning (audible and visual) when an earthquake occurs. In late 2011, the City of Los Angeles Emergency Management Department (EMD) was approached by Caltech regarding EQEW and immediately recognized the value of the system. Simultaneously, EMD was finalizing a report by a multi-discipline team that visited Japan in December 2011, which spoke to the effectiveness of EQEW during the March 11, 2011 earthquake that struck that country. Information collected by the team confirmed that the EQEW systems proved to be very effective in alerting the population to the impending earthquake. The EQEW in Japan is also tied to mechanical safeguards, such as the stopping of high-speed trains. For a city the size and complexity of Los Angeles, the implementation of a reliable EQEW system will save lives, reduce loss, ensure effective and rapid emergency response, and greatly enhance the ability of the region to recover from a damaging earthquake. The current ShakeAlert system is being tested at several governmental organizations and private businesses in the region. EMD, in cooperation with Caltech, identified several locations internal to the City where the system would have an immediate benefit. These include the staff offices within EMD, the Los Angeles Police Department's Real Time Analysis and Critical Response Division (24 hour crime center), and the Los Angeles Fire Department's Metropolitan Fire Communications (911 Dispatch). All three of these agencies routinely manage the collaboration and coordination of citywide emergency information and response during times of crisis. Having these three key public safety offices connected and included in the

  10. Repeating and not so Repeating Large Earthquakes in the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hjorleifsdottir, V.; Singh, S.; Iglesias, A.; Perez-Campos, X.

    2013-12-01

    The rupture area and recurrence interval of large earthquakes in the Mexican subduction zone are relatively small, and almost the entire length of the zone has experienced a large (Mw≥7.0) earthquake in the last 100 years (Singh et al., 1981). Several segments have experienced multiple large earthquakes in this time period. However, as the rupture areas of events prior to 1973 are only approximately known, the recurrence periods are uncertain. Large earthquakes occurred in the Ometepec, Guerrero, segment in 1937, 1950, 1982 and 2012 (Singh et al., 1981). In 1982, two earthquakes (Ms 6.9 and Ms 7.0) occurred about 4 hours apart, one apparently downdip from the other (Astiz & Kanamori, 1984; Beroza et al., 1984). The 2012 earthquake on the other hand had a magnitude of Mw 7.5 (globalcmt.org), breaking approximately the same area as the 1982 doublet, but with a total scalar moment about three times larger than that of the 1982 doublet combined. It therefore seems that 'repeat earthquakes' in the Ometepec segment are not necessarily very similar to one another. The Central Oaxaca segment broke in large earthquakes in 1928 (Mw 7.7) and 1978 (Mw 7.7). Seismograms for the two events, recorded at the Wiechert seismograph in Uppsala, show remarkable similarity, suggesting that in this area, large earthquakes can repeat. The extent to which the near-trench part of the fault plane participates in the ruptures is not well understood. In the Ometepec segment, the updip portion of the plate interface broke during the 25 Feb 1996 earthquake (Mw 7.1), which was a slow earthquake and produced anomalously low PGAs (Iglesias et al., 2003). Historical records indicate that a great tsunamigenic earthquake, M~8.6, occurred in the Oaxaca region in 1787, breaking the Central Oaxaca segment together with several adjacent segments (Suarez & Albini, 2009). Whether the updip portion of the fault broke in this event remains speculative, although plausible based on the large tsunami. Evidence from the

  11. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  12. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  13. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results which obtain from simulations using the full MHD equations.

  14. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large scale inhomogeneities. An application to several recent three dimensional data sets shows that despite large observational uncertainties over the relevant scales characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  15. Magnitudes and moment-duration scaling of low-frequency earthquakes beneath southern Vancouver Island

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Thomas, A. M.; Savard, G.; Chuang, L.; Rubin, A. M.

    2015-09-01

    We employ 130 low-frequency earthquake (LFE) templates representing tremor sources on the plate boundary below southern Vancouver Island to examine LFE magnitudes. Each template is assembled from hundreds to thousands of individual LFEs, representing over 269,000 independent detections from major episodic-tremor-and-slip (ETS) events between 2003 and 2013. Template displacement waveforms for direct P and S waves at near epicentral distances are remarkably simple at many stations, approaching the zero-phase, single pulse expected for a point dislocation source in a homogeneous medium. High spatiotemporal precision of template match-filtered detections facilitates precise alignment of individual LFE detections and analysis of waveforms. Upon correction for 1-D geometrical spreading, attenuation, free surface magnification and radiation pattern, we solve a large, sparse linear system for 3-D path corrections and LFE magnitudes for all detections corresponding to a single-ETS template. The spatiotemporal distribution of magnitudes indicates that typically half the total moment release occurs within the first 12-24 h of LFE activity during an ETS episode when tidal sensitivity is low. The remainder is released in bursts over several days, particularly as spatially extensive rapid tremor reversals (RTRs), during which tidal sensitivity is high. RTRs are characterized by large-magnitude LFEs and are most strongly expressed in the updip portions of the ETS transition zone and less organized at downdip levels. LFE magnitude-frequency relations are better described by power law than exponential distributions although they exhibit very high b values (≥ ∼5). We examine LFE moment-duration scaling by generating templates using detections for limiting magnitude ranges (MW<1.5, MW≥2.0). LFE duration displays a weaker dependence upon moment than expected for self-similarity, suggesting that LFE asperities are limited in fault dimension and that moment variation is dominated by
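
    The "large, sparse linear system" step has a compact skeleton: every corrected log amplitude of detection j at station i is modelled as a station/path term plus an event magnitude, log10(A_ij) = s_i + m_j, and the stacked equations are solved by sparse least squares. The sketch below uses synthetic data and collapses the 3-D path corrections into a single term per station, a simplification of the study's parameterisation.

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(3)
    n_sta, n_ev = 20, 500
    s_true = rng.normal(0.0, 0.3, n_sta)           # station/path terms
    m_true = rng.normal(1.0, 0.5, n_ev)            # event magnitudes

    obs = []                                        # (station, event, log amplitude)
    for j in range(n_ev):
        for i in rng.choice(n_sta, size=8, replace=False):
            obs.append((i, j, s_true[i] + m_true[j] + rng.normal(0.0, 0.05)))

    G = lil_matrix((len(obs), n_sta + n_ev))
    d = np.empty(len(obs))
    for k, (i, j, a) in enumerate(obs):
        G[k, i] = 1.0                               # station term
        G[k, n_sta + j] = 1.0                       # magnitude term
        d[k] = a

    sol = lsqr(G.tocsr(), d)[0]
    # a constant can be traded between station terms and magnitudes; remove
    # this free degree of freedom by zero-meaning the station terms
    m_hat = sol[n_sta:] + sol[:n_sta].mean()
    print("rms magnitude error:", np.sqrt(np.mean((m_hat - m_true) ** 2)))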

  16. Slip zone and energetics of a large earthquake from the Taiwan Chelungpu-fault Drilling Project.

    PubMed

    Ma, Kuo-Fong; Tanaka, Hidemi; Song, Sheng-Rong; Wang, Chien-Ying; Hung, Jih-Hao; Tsai, Yi-Ben; Mori, Jim; Song, Yen-Fang; Yeh, Eh-Chao; Soh, Wonn; Sone, Hiroki; Kuo, Li-Wei; Wu, Hung-Yu

    2006-11-23

    Determining the seismic fracture energy during an earthquake and understanding the associated creation and development of a fault zone requires a combination of both seismological and geological field data. The actual thickness of the zone that slips during the rupture of a large earthquake is not known and is a key seismological parameter in understanding energy dissipation, rupture processes and seismic efficiency. The 1999 magnitude-7.7 earthquake in Chi-Chi, Taiwan, produced large slip (8 to 10 metres) at or near the surface, which is accessible to borehole drilling and provides a rare opportunity to sample a fault that had large slip in a recent earthquake. Here we present the retrieved cores from the Taiwan Chelungpu-fault Drilling Project and identify the main slip zone associated with the Chi-Chi earthquake. The surface fracture energy estimated from grain sizes in the gouge zone of the fault sample was directly compared to the seismic fracture energy determined from near-field seismic data. From the comparison, the contribution of gouge surface energy to the earthquake breakdown work is quantified to be 6 per cent. PMID:17122854

  17. Appearance ratio of earthquake surface rupture - About scaling law for Japanese Intraplate Earthquakes -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Irikura, K.

    2013-12-01

    The appearance ratio of surface rupture has been studied using historical earthquakes (e.g., Takemura, 1998), and Kagawa et al. (2004) evaluated the probability based on numerical simulations of surface displacement. The estimated appearance ratio follows a sigmoid curve and rises sharply between Mj (Japan Meteorological Agency magnitude) = 6.5 and Mj = 7.2. However, historical records between Mj = 6.5 and 7.2 are very sparse, so some researchers consider that the appearance ratio might jump discontinuously between Mj = 6.5 and 7.2. In this study, we used historical intraplate earthquakes that occurred in and around Japan from the 1891 Nobi earthquake to 2013. In particular, after the Hyogoken-Nanbu earthquake, many earthquakes of around Mj 6.5 to 7.2 occurred. The results of this study indicate that the appearance ratio increases between Mj = 6.5 and 7.2 not discontinuously but along a logistic curve. Youngs et al. (2003), Petersen et al. (2011) and Moss and Ross (2011) discussed the appearance ratio of surface rupture using historical earthquakes worldwide. Their discussions are based on Mw, so we cannot compare the results directly because we used Mj. Takemura (1990) proposed the conversion equation Mw = 0.78Mj + 1.08. More recently, the Central Disaster Prevention Council in Japan (2005) derived the conversion equation Mw = 0.879Mj + 0.536, obtained as a regression line from a principal component analysis. Converted in this way, the appearance ratio found in this study increases sharply between Mw = 6.3 and 7.0.
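
    The two magnitude conversions quoted above are one-liners, and the appearance-ratio behaviour the study describes is conveniently summarised by a logistic curve. In the sketch below the conversion coefficients come from the abstract, while the logistic midpoint and steepness are illustrative placeholders, not the study's fitted values.

    import numpy as np

    def mw_takemura_1990(mj):
        return 0.78 * mj + 1.08

    def mw_cdpc_2005(mj):                      # Central Disaster Prevention Council
        return 0.879 * mj + 0.536

    def appearance_ratio(mw, midpoint=6.65, steepness=5.0):
        """Illustrative logistic probability of surface rupture."""
        return 1.0 / (1.0 + np.exp(-steepness * (mw - midpoint)))

    for mj in (6.5, 6.8, 7.0, 7.2):
        mw = mw_cdpc_2005(mj)
        print(f"Mj {mj}: Mw {mw:.2f}, P(surface rupture) ~ {appearance_ratio(mw):.2f}")

    Note how the 2005 relation maps the Mj 6.5-7.2 band onto roughly Mw 6.3-7.0, which is exactly the shift in the transition band reported in the final sentence.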

  18. European Scale Earthquake Data Exchange: ORFEUS-EMSC Joint Initiatives

    NASA Astrophysics Data System (ADS)

    Bossu, R.; van Eck, T.

    2003-04-01

    The European-Mediterranean Seismological Centre (EMSC) and the Observatories and Research Facilities for European Seismology (ORFEUS) are both active international organisations with different co-ordinating roles within European seismology. Both are non-governmental non-profit organisations, which have members/participants in more than 30 countries in Europe and its surroundings. Although different, their activities are complementary, with ORFEUS focusing on broadband waveform data archiving and dissemination and EMSC focusing on seismological parameter data. The main EMSC activities are the alert system for potentially damaging earthquakes, a real time seismicity web page, the production of the Euro-Med. seismological bulletin, and the creation and maintenance of databases related to seismic hazard. All these activities are based on data contributions from seismological institutes. The EMSC is also involved in a UNESCO programme to promote seismology and data exchange in the Middle-East and Northern Africa. ORFEUS aims at co-ordinating and promoting digital broadband seismology in Europe. To accomplish this, it operates a Data Centre to archive and distribute high quality digital data for research, co-ordinates four working groups and provides services through the Internet. More recently, through the EC infrastructure project MEREDIAN, it has achieved closer co-ordination of data exchange and archiving among large European national data centres and realised the Virtual European Broadband Seismograph Network (VEBSN). To accomplish higher efficiency and better services to the seismological community, ORFEUS and EMSC have been working towards a closer collaboration. Fruits of this collaboration are the joint EC project EMICES, a common Expression of Interest 'NERIES' submitted June 2002 to the EC, integration of the automatic picks from the VEBSN into the EMSC rapid alert system and collaboration on common web page developments. Presently, we collaborate in a

  19. Forecast of Large Earthquakes Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B.; Nava Pichardo, F. A.; Glowacka, E.; Gómez Treviño, E.; Dmowska, R.

    2016-08-01

    Large earthquakes have semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in seismogenic regions. Hence, large earthquakes in a given region constitute semi-periodic sequences with recurrence times varying slightly from periodicity. In previous papers, it has been shown that it is possible to identify these sequences through Fourier analysis of the occurrence time series of large earthquakes from a given region, by realizing that not all earthquakes in the region need belong to the same sequence, since there can be more than one process of stress accumulation and release in the region. Sequence identification can be used to forecast earthquake occurrence with well determined confidence bounds. This paper presents improvements on the above mentioned sequence identification and forecasting method: the influence of earthquake size on the spectral analysis, and its importance in semi-periodic events identification are considered, which means that earthquake occurrence times are treated as a labeled point process; a revised estimation of non-randomness probability is used; a better estimation of appropriate upper limit uncertainties to use in forecasts is introduced; and the use of Bayesian analysis to evaluate the posterior forecast performance is applied. This improved method was successfully tested on synthetic data and subsequently applied to real data from some specific regions. As an example of application, we show the analysis of data from the northeastern Japan Arc region, in which one semi-periodic sequence of four earthquakes with M ≥ 8.0, having high non-randomness probability was identified. We compare the results of this analysis with those of the unlabeled point process analysis.
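
    The spectral core of the method can be sketched by treating the occurrence times of large events as a magnitude-weighted spike train and looking for a dominant period in its power spectrum. The example below is heavily simplified (no non-randomness probability, no confidence bounds) and runs on a synthetic sequence with a true recurrence of about 35 years.

    import numpy as np

    rng = np.random.default_rng(4)
    period, n_ev = 35.0, 8
    times = np.cumsum(rng.normal(period, 3.0, n_ev))    # jittered recurrences
    weights = rng.uniform(8.0, 8.5, n_ev)               # magnitude "labels"

    t_max, dt = times.max() + 10.0, 0.25                # years
    t = np.arange(0.0, t_max, dt)
    series = np.zeros_like(t)
    series[np.searchsorted(t, times)] = weights         # weighted spike train

    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(series.size, d=dt)
    dominant = freqs[1:][np.argmax(power[1:])]          # skip the DC bin
    print(f"dominant period ~ {1.0 / dominant:.1f} yr")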

  20. A Regional Scale Earthquake Simulator for Faults With Rate- and State-Dependent Frictional Properties

    NASA Astrophysics Data System (ADS)

    Richards-Dinger, K.; Dieterich, J.

    2006-12-01

    Long-term (~10,000 year) catalogs of simulated earthquakes can be used to address a host of questions related to both seismic hazard calculations and more fundamental issues of earthquake occurrence and interaction (e.g. Ward [1996], Rundle et al. [2004], Ziv and Rubin [2000, 2003]). The quasi-static models of Ziv and Rubin [2000, 2003] are based on the computational strategy of Dieterich [1995] for efficiently computing large numbers of earthquakes, including the seismic nucleation process on faults with rate- and state-dependent frictional properties. Both Dieterich [1995] and Ziv and Rubin [2000, 2003] considered only single planar faults embedded in a whole-space. Faults in nature are not geometrically flat nor do they exist in isolation but form complex networks. Slip of such networks involves processes and interactions that do not occur in planar fault models and may strongly affect earthquake processes. We are in the process of constructing simulations of earthquake occurrence in complex, regional-scale fault networks whose elements obey rate- and state-dependent frictional laws. The solutions of Okada [1992] for dislocations in an elastic half-space are used to calculate the stress interaction coefficients between the elements. We employ analytic solutions for the nucleation process that include the effects of time-varying normal stress. At the time of this abstract we have conducted initial experiments with a single 100 km x 15 km strike-slip fault which produce power-law magnitude distributions with reasonable b-values. The model is computationally efficient - simulations of 50,000 events on a fault with 1500 elements require about seven minutes on a single 2.5 GHz CPU. The very largest events (which rupture nearly the entire fault) occur quasi-periodically, whereas the entire catalog displays temporal clustering in that its waiting time distribution is a power-law with a slope similar to that observed for actual seismicity in both California and Iceland

  1. Maximum Magnitude and Recurrence Interval for the Large Earthquakes in the Central and Eastern United States

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Hu, C.

    2012-12-01

    Maximum magnitude and recurrence interval of the large earthquakes are key parameters for seismic hazard assessment in the central and eastern United States. Determination of these two parameters is quite difficult in the region, however. For example, the estimated maximum magnitudes of the 1811-12 New Madrid sequence are in the range of M6.6 to M8.2, whereas the estimated recurrence intervals are in the range of about 500 to several thousand years. These large variations of maximum magnitude and recurrence interval for the large earthquakes lead to significant variation of estimated seismic hazards in the central and eastern United States. There are several approaches being used to estimate the magnitudes and recurrence intervals, such as historical intensity analysis, geodetic data analysis, and paleo-seismic investigation. We will discuss the approaches that are currently being used to estimate maximum magnitude and recurrence interval of the large earthquakes in the central United States.

  2. Analysis of Luminescence Away from the Epicenter During a Large Earthquake: The Pisco, Peru Mw8 Earthquake

    NASA Astrophysics Data System (ADS)

    Heraud, J. A.; Lira, J. A.

    2011-12-01

    The Mw 8.0 earthquake in Pisco, Peru of August 15, 2007, caused severe damage, with a toll of 513 people dead, 2,291 wounded, 76,000 houses and buildings seriously damaged, and 431,000 people affected overall. Co-seismic luminescence was reported by thousands of people along the central coast of Peru, and especially in Lima, 150 km from the epicenter, this being the first large nighttime earthquake in about 100 years in a highly populated area. Pictures and videos of the lights are available; however, those obtained so far had little information on the timing and direction of the reported lights. Two important videos are analyzed. The first, from a fixed security camera, is used to determine the differential time correlation between the timing of the lights and the ground acceleration registered by a three-axis accelerometer 500 m away, with very good results. This evidence contains important color, shape and timing information, which is shown to be highly correlated in time with the arrival of the seismic waves. Furthermore, the origin of the lights is on the top of a hilly island about 6 km off the coast of Lima, where lights were reported in a written chronicle to have been seen exactly 21 days before the mega-earthquake of October 28, 1746. This was the largest earthquake ever to happen in Peru, and it produced a tsunami that washed over the port of Callao and reached up to 5 km inland. The second video, from another security camera in a different location, has been further analyzed in order to determine more exactly the direction of the lights, and this new evidence will be presented. The fact that a notoriously large and well-documented co-seismic luminous phenomenon was video-recorded more than 150 km from the epicenter during a very large earthquake is emphasized, together with historical documented evidence of pre-seismic luminous activity on the same island during a mega-earthquake of enormous proportions in Lima. Both previously mentioned videos

  3. Giant seismites and megablock uplift in the East African Rift: Evidence for large magnitude Late Pleistocene earthquakes

    NASA Astrophysics Data System (ADS)

    Hilbert-Wolf, Hannah; Roberts, Eric

    2015-04-01

    Due to rapid population growth and urbanization of many parts of East Africa, it is increasingly important to quantify the risk and possible destruction from large-magnitude earthquakes along the tectonically active East African Rift System. However, because comprehensive instrumental seismic monitoring, historical records, and fault trench investigations are limited for this region, the sedimentary record provides important archives of seismicity in the form of preserved soft-sediment deformation features (seismites). Extensive, previously undescribed seismites of centimeter- to dekameter-scale were identified by our team in alluvial and lacustrine facies of the Late Quaternary-Recent Lake Beds Succession in the Rukwa Rift Basin, of the Western Branch of the East African Rift System. We document the most highly deformed sediments in shallow, subsurface strata close to the regional capital of Mbeya, Tanzania, primarily exposed at two, correlative outcrop localities ~35 km apart. This includes a remarkable, clastic 'megablock complex' that preserves remobilized sediment below vertically displaced breccia megablocks, some in excess of 20 m-wide. The megablock complex is comprised of (1) a 5m-tall by 20m-wide injected body of volcanic ash and silt that hydraulically displaced (2) an equally sized, semi-consolidated, volcaniclastic megablock; both of which are intruded by (3) a clastic injection dyke. Evidence for breaching at the surface and for the fluidization of cobbles demonstrates the susceptibility of the substrate in this region to significant deformation via seismicity. Thirty-five km to the north, dekameter-scale asymmetrical/recumbent folds occur in a 3 m-thick, flat lying lake floor unit of the Lake Beds Succession. In between and surrounding these two unique sites, smaller-scale seismites are expressed, including flame structures; cm- to m-scale folded beds; ball-and-pillow structures; syn-sedimentary faults; sand injection features; and m-dkm-scale

  4. Seismic sequences, swarms, and large earthquakes in Italy

    NASA Astrophysics Data System (ADS)

    Amato, Alessandro; Piana Agostinetti, Nicola; Selvaggi, Giulio; Mele, Franco

    2016-04-01

    In recent years, particularly after the L'Aquila 2009 earthquake and the 2012 Emilia sequence, the issue of earthquake predictability has been at the center of the discussion in Italy, not only within the scientific community but also in the courtrooms and in the media. Among the noxious effects of the L'Aquila trial was an increase in scaremongering and false alerts during earthquake sequences and swarms, culminating in a groundless one-night evacuation in northern Tuscany in 2013. We have analyzed the Italian seismicity of the last decades in order to determine the rate of seismic sequences and investigate some of their characteristics, including frequencies, minimum/maximum durations, maximum magnitudes, main shock timing, etc. Selecting only sequences with an equivalent magnitude of 3.5 or above, we find an average of 30 sequences/year. Although there is extreme variability in the examined parameters, we could set some boundaries, useful for obtaining quantitative estimates of the ongoing activity. In addition, the historical catalogue is rich in complex sequences in which one main shock is followed, seconds, days or months later, by another event of similar or higher magnitude. We also analysed the Italian CPT11 catalogue (Rovida et al., 2011) between 1950 and 2006 to highlight the foreshock-mainshock event couples that were suggested in previous studies to exist (e.g. six couples, Marzocchi and Zhuang, 2011). Moreover, to investigate the probability of having random foreshock-mainshock couples over the investigated period, we produced 1000 synthetic catalogues, randomly distributing in time the events that occurred in that period. Preliminary results indicate that: (1) all but one of the so-called foreshock-mainshock pairs found in Marzocchi and Zhuang (2011) fall inside previously well-known and studied seismic sequences (Belice, Friuli and Umbria-Marche), meaning that the suggested foreshocks are also aftershocks; and (2) due to the high rate of the Italian
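
    The synthetic-catalogue test described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the event count, catalogue span, and pairing window are placeholder assumptions, and a real test would also apply spatial windows and magnitude ordering.

        import numpy as np

        rng = np.random.default_rng(42)

        def random_couples(n_events, span_days, window_days=10.0, n_trials=1000):
            """Expected number of chance 'foreshock-mainshock' couples when
            n_events occurrence times are redistributed uniformly over span_days.
            A couple here is simply two consecutive events closer than window_days."""
            counts = np.empty(n_trials)
            for i in range(n_trials):
                times = np.sort(rng.uniform(0.0, span_days, n_events))
                counts[i] = np.sum(np.diff(times) < window_days)
            return counts.mean(), counts.std()

        # placeholder numbers: 600 events over the 1950-2006 period (~20,800 days)
        mean_c, std_c = random_couples(600, 20800.0)
        print(f"chance couples per synthetic catalogue: {mean_c:.1f} +/- {std_c:.1f}")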

  5. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models. PMID:23152938
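
    The probability gain quoted above can be illustrated with a minimal sketch assuming a stationary Poisson background; all rates and cell counts below are placeholders, not values from the paper.

        import numpy as np

        def probability_gain(p_alarm_daily, n_background_events, n_days, n_cells):
            """Ratio of the forecast daily probability inside an alarm to the
            background (Poisson) daily probability per space-time cell."""
            rate = n_background_events / (n_days * n_cells)   # events/cell/day
            p_background = 1.0 - np.exp(-rate)                # P(>=1 event) per cell-day
            return p_alarm_daily / p_background

        # illustrative numbers: 6 target events over 19 years, ~10^4 spatial cells
        g = probability_gain(0.25, 6, 19 * 365, 10_000)
        print(f"probability gain ~ {g:.1e}")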

  6. Constraining depth range of S wave velocity decrease after large earthquakes near Parkfield, California

    NASA Astrophysics Data System (ADS)

    Wu, Chunquan; Delorey, Andrew; Brenguier, Florent; Hadziioannou, Celine; Daub, Eric G.; Johnson, Paul

    2016-06-01

    We use noise correlation and surface wave inversion to measure the S wave velocity changes at different depths near Parkfield, California, after the 2003 San Simeon and 2004 Parkfield earthquakes. We process continuous seismic recordings from 13 stations to obtain the noise cross-correlation functions and measure the Rayleigh wave phase velocity changes over six frequency bands. We then invert the Rayleigh wave phase velocity changes using a series of sensitivity kernels to obtain the S wave velocity changes at different depths. Our results indicate that the S wave velocity decreases caused by the San Simeon earthquake are relatively small (~0.02%) and reach depths of at least 2.3 km. The S wave velocity decreases caused by the Parkfield earthquake are larger (~0.2%) and reach depths of at least 1.2 km. Our observations can be best explained by material damage and healing resulting mainly from the dynamic stress perturbations of the two large earthquakes.
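
    The depth-inversion step, mapping phase-velocity changes in several frequency bands to S-velocity changes in depth layers via sensitivity kernels, amounts to a small damped least-squares problem. The kernel matrix and measurements below are illustrative placeholders, not the study's values.

        import numpy as np

        # Rows: frequency bands (high to low); columns: depth layers (shallow to deep).
        # Low frequencies are more sensitive to deeper structure.
        K = np.array([
            [0.6, 0.3, 0.1],
            [0.4, 0.4, 0.2],
            [0.2, 0.4, 0.4],
        ])
        dc_over_c = np.array([-2e-3, -1e-3, -4e-4])   # measured dc/c per band

        damping = 0.1                                  # regularization weight
        A = K.T @ K + damping * np.eye(K.shape[1])
        dbeta_over_beta = np.linalg.solve(A, K.T @ dc_over_c)
        print(dbeta_over_beta)                         # fractional dVs/Vs per layer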

  7. Irregular recurrence of large earthquakes along the san andreas fault: evidence from trees.

    PubMed

    Jacoby, G C; Sheppard, P R; Sieh, K E

    1988-07-01

    Old trees growing along the San Andreas fault near Wrightwood, California, record in their annual ring-width patterns the effects of a major earthquake in the fall or winter of 1812 to 1813. Paleoseismic data and historical information indicate that this event was the "San Juan Capistrano" earthquake of 8 December 1812, with a magnitude of 7.5. The discovery that at least 12 kilometers of the Mojave segment of the San Andreas fault ruptured in 1812, only 44 years before the great January 1857 rupture, demonstrates that intervals between large earthquakes on this part of the fault are highly variable. This variability increases the uncertainty of forecasting destructive earthquakes on the basis of past behavior and accentuates the need for a more fundamental knowledge of San Andreas fault dynamics. PMID:17841050

  8. The characteristics of quasistatic electric field perturbations observed by DEMETER satellite before large earthquakes

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Shen, X.; Zhao, S.; Yao, Lu; Ouyang, X.; Qian, J.

    2014-01-01

    This paper presents new results from processing the ULF electric field (DC-15 Hz) observed by the DEMETER satellite (h = 660-710 km). Typical perturbations in the quasistatic electric field were first identified around some large earthquakes in 2010. Then 27 earthquakes were selected for quasistatic electric field analysis in two seismic regions, Indonesia and Chile, at equatorial and middle latitudes respectively. Three-component electric field data related to the earthquakes were collected along all up-orbits (in local nighttime) within 2000 km of the epicenters over 9-day windows spanning 7 days before to 1 day after each event, and in total 57 perturbations were found. The results show that the amplitude of the quasistatic electric field perturbations in the upper ionosphere varies from 1.5 to 16 mV/m, mostly below 10 mV/m. The perturbations were mainly located just over the epicentral area or at the end of seismic faults constructed by a series of earthquakes, where electromagnetic emissions may easily form during the preparation and development of seismic sequences. Among all 27 cases, there are 10 earthquakes with perturbations occurring just one day before, which demonstrates the close correlation in the time domain between the quasistatic electric field in the ionosphere and large earthquakes. Finally, combined with in situ observations of plasma parameters, the coupling mechanism of the quasistatic electric field across the different Earth spheres is discussed.

  9. Analysis of ground response data at Lotung large-scale soil- structure interaction experiment site

    SciTech Connect

    Chang, C.Y.; Mok, C.M.; Power, M.S. )

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/12-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period of October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from M{sub L} 4.5 to M{sub L} 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes.

  10. An earthquake in Japan caused large waves in Norwegian fjords

    NASA Astrophysics Data System (ADS)

    Schult, Colin

    2013-08-01

    Early on a winter morning a few years ago, many residents of western Norway who lived or worked along the shores of the nation's fjords were startled to see the calm morning waters suddenly begin to rise and fall. Starting at around 7:15 A.M. local time and continuing for nearly 3 hours, waves up to 1.5 meters high coursed through the previously still fjord waters. The scene was captured by security cameras and by people with cell phones, reported to local media, and investigated by a local newspaper. Drawing on this footage, and using a computational model and observations from a nearby seismic station, Bondevik et al. identified the cause of the waves—the powerful magnitude 9.0 Tohoku earthquake that hit off the coast of Japan half an hour earlier.

  11. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    Landslides caused heavy casualties during Typhoon Morakot in 2009; because of the casualty numbers, the disaster is classified as a large-scale landslide event. The event also showed that surveys of large-scale landslide potential are so far insufficient, although such analysis indicates where attention should be focused even when potential areas are difficult to distinguish. Accordingly, the authors investigate the methods used in other countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The study objects include areas susceptible to rock slides and dip-slope failures, and the major landslide areas defined from historical records. Three levels of scale, from nation to slopeland, prove necessary: basin, catchment, and slope scales. In total, ten spots were classified with high large-scale landslide potential at the basin scale. The authors therefore focus on the catchment scale and employ a risk matrix to classify the potential in this paper. The protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on the data of major landslide areas and of dip-slope and rock-slide areas. In total, 1,040 catchments are considered and classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively; the high-level catchments are those with a high proportion of protected objects or high large-scale landslide susceptibility. The results can serve as base material for slopeland authorities when considering slopeland management and further investigation.
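
    The two-index risk matrix described above can be sketched compactly. The threshold values below are assumptions for illustration, not the authors' criteria.

        def classify_catchment(n_protected_objects: int, susceptibility_ratio: float) -> str:
            """Two-index risk matrix: exposure (protected objects such as buildings
            and transportation facilities) x large-scale landslide susceptibility."""
            exposure = 2 if n_protected_objects >= 10 else (1 if n_protected_objects >= 1 else 0)
            hazard = 2 if susceptibility_ratio >= 0.3 else (1 if susceptibility_ratio >= 0.1 else 0)
            score = exposure + hazard          # simple additive combination
            if score >= 3:
                return "high"
            if score >= 2:
                return "medium"
            return "low"

        print(classify_catchment(12, 0.35))    # -> high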

  12. Evidence of a Large-Magnitude Recent Prehistoric Earthquake on the Bear River Fault, Wyoming and Utah: Implications for Recurrence

    NASA Astrophysics Data System (ADS)

    Hecker, S.; Schwartz, D. P.

    2015-12-01

    Trenching across the antithetic strand of the Bear River normal fault in Utah has exposed evidence of a very young surface rupture. AMS radiocarbon analysis of three samples comprising pine-cone scales and needles from a 5-cm-thick faulted layer of organic detritus indicates the earthquake occurred after 320 cal yr BP (after A.D. 1630). The dated layer is buried beneath topsoil and a 15-cm-high scarp on the forest floor. Prior to this study, the entire surface-rupturing history of this nascent normal fault was thought to consist of two large events in the late Holocene (West, 1994; Schwartz et al., 2012). The discovery of a third, barely prehistoric event led us to take a fresh look at geomorphically youthful depressions on the floodplain of the Bear River that we had interpreted as possible evidence of liquefaction. The appearance of these features is remarkably similar to sand-blow craters formed in the near field of the M6.9 1983 Borah Peak earthquake. We have also identified steep scarps (<2 m high) and a still-forming coarse colluvial wedge near the north end of the fault in Wyoming, indicating that the most recent event ruptured most or all of the 40-km length of the fault. Since first rupturing to the surface about 4500 years ago, the Bear River fault has generated large-magnitude earthquakes at intervals of about 2000 years, more frequently than most active faults in the region. The sudden initiation of normal faulting in an area of no prior late Cenozoic extension provides a basis for seismic-hazard estimates of the maximum-magnitude background earthquake (an earthquake not associated with a known fault) for normal faults in the Intermountain West.

  13. Instability model for recurring large and great earthquakes in southern California

    USGS Publications Warehouse

    Stuart, W.D.

    1985-01-01

    The locked section of the San Andreas fault in southern California has experienced a number of large and great earthquakes in the past, and thus is expected to have more in the future. To estimate the location, time, and slip of the next few earthquakes, an earthquake instability model is formulated. The model is similar to one recently developed for moderate earthquakes on the San Andreas fault near Parkfield, California. In both models, unstable faulting (the earthquake analog) is caused by failure of all or part of a patch of brittle, strain-softening fault zone. In the present model the patch extends downward from the ground surface to about 12 km depth, and extends 500 km along strike from Parkfield to the Salton Sea. The variation of patch strength along strike is adjusted by trial until the computed sequence of instabilities matches the sequence of large and great earthquakes since A.D. 1080 reported by Sieh and others. The last earthquake was the M=8.3 Ft. Tejon event in 1857. The resulting strength variation has five contiguous sections of alternately low and high strength. From north to south, the approximate locations of the sections are: (1) Parkfield to Bitterwater Valley, (2) Bitterwater Valley to Lake Hughes, (3) Lake Hughes to San Bernardino, (4) San Bernardino to Palm Springs, and (5) Palm Springs to the Salton Sea. Sections 1, 3, and 5 have strengths between 53 and 88 bars; sections 2 and 4 have strengths between 164 and 193 bars. Patch section ends and unstable rupture ends usually coincide, although one or more adjacent patch sections may fail unstably at once. The model predicts that the next sections of the fault to slip unstably will be 1, 3, and 5; the order and dates depend on the assumed length of an earthquake rupture in about 1700. © 1985 Birkhäuser Verlag.

  14. Variation of the scaling characteristics of temporal and spatial distribution of earthquakes in Caucasus

    NASA Astrophysics Data System (ADS)

    Matcharashvili, T.; Chelidze, T.; Javakhishvili, Z.; Zhukova, N.

    2016-05-01

    In the present study we investigated the variation of long-range correlation features in the temporal and spatial distribution of earthquakes in the Caucasus. Scaling exponents of data sets of inter-earthquake time intervals (waiting times) and inter-earthquake distances were calculated by the method of detrended fluctuation analysis (DFA). Scaling exponent values were calculated for windows of 500 consecutive data points as well as for 5-year-long sliding windows. The scaling exponents calculated for different windows vary over a wide range, indicating behavior that varies from antipersistent to persistent. In the overwhelming majority of cases the scaling exponents indicate persistent behavior in both the temporal and spatial distributions of earthquakes. Exponents close to 0.5, or antipersistent, were obtained for the time periods when the strongest regional earthquakes occurred. We also observed a slow trend in the variation of long-range correlation features over the considered time period.
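
    The core of the DFA procedure applied here can be sketched compactly. This is a generic textbook implementation with illustrative window choices, not the authors' code.

        import numpy as np

        def dfa_exponent(x, scales=None):
            """Detrended fluctuation analysis of a 1-D series x. Returns the
            scaling exponent alpha (alpha > 0.5: persistent, < 0.5: antipersistent)."""
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())                   # integrated profile
            if scales is None:
                scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 10).astype(int))
            flucts = []
            for s in scales:
                n_seg = len(y) // s
                segs = y[: n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                f2 = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)          # linear detrending per segment
                    f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                flucts.append(np.sqrt(np.mean(f2)))
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        # sanity check: uncorrelated waiting times give alpha ~ 0.5
        print(dfa_exponent(np.random.default_rng(0).exponential(size=5000)))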

  15. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was its classification of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  16. Large LOCA-earthquake event combination probability assessment - Load Combination Program Project I summary report

    SciTech Connect

    Lu, S.; Streit, R.D.; Chou, C.K.

    1980-12-10

    This report summarizes work performed to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture.
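
    The flavor of such a stochastic fatigue-crack-growth simulation can be conveyed with a toy Monte Carlo sketch based on the Paris law; every distribution and constant below is an illustrative assumption, and the actual model also incorporates stress histories, seismic loads, and leak detection.

        import numpy as np

        rng = np.random.default_rng(1)

        def through_wall_fraction(n_samples=100_000, n_cycles=10_000,
                                  wall=0.06, C=1e-12, m=3.0, stress_range=50e6):
            """Fraction of sampled cracks growing through the pipe wall under the
            Paris law da/dN = C*(dK)^m with dK = stress_range*sqrt(pi*a).
            Initial crack depths are lognormal; all values are placeholders."""
            a = rng.lognormal(mean=np.log(3e-3), sigma=1.0, size=n_samples)  # metres
            for _ in range(n_cycles // 100):                    # 100-cycle blocks
                dK = stress_range * np.sqrt(np.pi * a) * 1e-6   # Pa*sqrt(m) -> MPa*sqrt(m)
                a = np.minimum(a + 100 * C * dK**m, wall)
            return np.mean(a >= wall)

        print(f"P(through-wall crack) ~ {through_wall_fraction():.1e}")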

  17. Typical Scenario of Preparation, Implementation, and Aftershock Sequence of a Large Earthquake

    NASA Astrophysics Data System (ADS)

    Rodkin, Mikhail

    2016-04-01

    We have tried here to construct and examine the typical scenario of a large earthquake occurrence. The Harvard GCMT seismic moment catalog was used to construct the large earthquake generalized space-time vicinity (LEGV) and to investigate the behavior of seismicity within it. The LEGV was composed of earthquakes falling into the zone of influence of any of a considerable number (100, 300, or 1,000) of the largest earthquakes. The LEGV construction aims to enlarge the available statistics, diminish the strong random component, and thereby reveal the typical features of pre- and post-shock seismic activity in more detail. As a result of the LEGV construction, the character of fore- and aftershock cascades was examined in more detail than is possible without the LEGV approach. It was also shown that the mean earthquake magnitude tends to increase, and the b-values, mean mb/mw ratios, apparent stress values, and mean depth tend to decrease. The amplitudes of all these anomalies increase with the approach to the moment of the generalized large earthquake (GLE) as the logarithm of the time interval to the GLE occurrence. Most of the discussed anomalies agree well with a common scenario of developing instability. Besides such precursors of a general character, one earthquake-specific precursor was found: the revealed decrease in mean earthquake depth during large earthquake preparation probably testifies to the involvement of deep fluids in the process. The typical features of shear-instability development revealed in the LEGV agree well with results obtained in laboratory acoustic emission (AE) studies. The majority of the revealed anomalies appear to be of a secondary character, connected mainly with an increase in the mean earthquake magnitude in the LEGV. The mean magnitude increase was shown to be connected mainly with a decrease in the portion of moderate-size events (Mw 5.0 - 5.5) in the closer GLE vicinity. We believe that this deficit of moderate-size events can hardly be
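
    The b-value trends reported here are conventionally measured with the Aki maximum-likelihood estimator (with Utsu's correction when magnitudes are binned). A short generic sketch, not the author's code, checked against a synthetic Gutenberg-Richter sample:

        import numpy as np

        def b_value_ml(mags, m_c, dm=0.0):
            """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c;
            set dm to the catalogue's magnitude bin width (Utsu's correction)
            when magnitudes are binned."""
            m = np.asarray(mags)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - (m_c - 0.5 * dm))

        # synthetic Gutenberg-Richter sample with b = 1
        rng = np.random.default_rng(2)
        mags = 5.0 + rng.exponential(scale=np.log10(np.e), size=10_000)
        print(b_value_ml(mags, m_c=5.0))   # ~1.0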

  18. The energy-magnitude scaling law for M_s ≤ 5.5 earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Jeen-Hwa

    2015-04-01

    The scaling law of seismic radiation energy, E_s, versus surface-wave magnitude, M_s, proposed by Gutenberg and Richter (1956) was originally based on earthquakes with M_s > 5.5. In this review study, we examine whether this law is valid for 0 < M_s ≤ 5.5 using earthquakes occurring in different regions. A comparison of the data points of log(E_s) versus M_s with Gutenberg and Richter's law leads to the conclusion that the law is still valid for earthquakes with 0 < M_s ≤ 5.5.
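
    For reference, the Gutenberg-Richter (1956) energy-magnitude relation under examination is commonly written, with E_s in ergs, as

        \log_{10} E_s = 1.5\,M_s + 11.8 \quad\Longrightarrow\quad E_s \propto 10^{1.5 M_s}

    so one unit of M_s corresponds to a factor of about 31.6 in radiated energy. The constants are the textbook values; the review's point is that the same line continues to fit the data for 0 < M_s ≤ 5.5.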

  19. Systematic Underestimation of Earthquake Magnitudes from Large Intracontinental Reverse Faults: Historical Ruptures Break Across Segment Boundaries

    NASA Technical Reports Server (NTRS)

    Rubin, C. M.

    1996-01-01

    Because most large-magnitude earthquakes along reverse faults have such irregular and complicated rupture patterns, reverse-fault segments defined on the basis of geometry alone may not be very useful for estimating sizes of future seismic sources. Most modern large ruptures of historical earthquakes generated by intracontinental reverse faults have involved geometrically complex rupture patterns. Ruptures across surficial discontinuities and complexities such as stepovers and cross-faults are common. Specifically, segment boundaries defined on the basis of discontinuities in surficial fault traces, pronounced changes in the geomorphology along strike, or the intersection of active faults commonly have not proven to be major impediments to rupture. Assuming that the seismic rupture will initiate and terminate at adjacent major geometric irregularities will commonly lead to underestimation of magnitudes of future large earthquakes.

  20. W phase source inversion using high-rate regional GPS data for large earthquakes

    NASA Astrophysics Data System (ADS)

    Riquelme, S.; Bravo, F.; Melgar, D.; Benavente, R.; Geng, J.; Barrientos, S.; Campos, J.

    2016-04-01

    W phase moment tensor inversion has proven to be a reliable method for rapid characterization of large earthquakes. For global purposes it is used at the United States Geological Survey, Pacific Tsunami Warning Center, and Institut de Physique du Globe de Strasbourg. These implementations provide moment tensors within 30-60 min after the origin time of moderate and large worldwide earthquakes. Currently, the method relies on broadband seismometers, which clip in the near field. To ameliorate this, we extend the algorithm to regional records from high-rate GPS data and retrospectively apply it to six large earthquakes that occurred in the past 5 years in areas with relatively dense station coverage. These events show that the solutions could potentially be available 4-5 min from origin time. Continuously improving GPS station availability and real-time positioning solutions will provide significant enhancements to the algorithm.

  1. Post-earthquake analysis and data correlation for the 1/4-scale containment model of the Lotung experiment

    SciTech Connect

    Tseng, W.S.; Lihanand, K.; Ostadan, F.; Tuann, S.Y. )

    1991-10-01

    This report presents the results of post-prediction earthquake response data analyses performed to identify the test system parameters for the 1/4-scale containment model of the Large-Scale Seismic Test (LSST) in Lotung, Taiwan, and the results of post-prediction analytical earthquake parametric studies conducted to evaluate the applicability of four soil-structure interaction (SSI) analysis methods that have frequently been applied in the US nuclear industry. The four methods evaluated were: (1) the soil-spring method; (2) the CLASSI continuum halfspace substructuring method; (3) the SASSI finite element substructuring method; and (4) the FLUSH finite element direct method. Earthquake response data recorded on the containment and internal structure (steam generator and piping) for four earthquake events (LSST06, LSST07, LSST12, and LSST16) having peak ground accelerations ranging from 0.04 g to 0.21 g have been analyzed. The containment SSI system and internal structure system frequencies and associated modal damping ratios consistent with the ground shaking intensity of each event were identified. These results, along with the site soil parameters identified from separate free-field soil response data analyses, were used as the basis for refining the blind-prediction SSI analysis models for each of the four analysis methods evaluated. 12 refs., 5 figs.

  2. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V.; Rubin, S. G.

    2015-07-15

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  3. Large Subduction Earthquakes along the fossil MOHO in Alpine Corsica: what was the role of fluids?

    NASA Astrophysics Data System (ADS)

    Andersen, Torgeir B.; Deseta, Natalie; Silkoset, Petter; Austrheim, Håkon; Ashwal, Lewis D.

    2014-05-01

    Intermediate-depth subduction earthquakes abruptly release vast amounts of energy to the crust and mantle lithosphere. The products of such drastic deformation events can only rarely be observed in the field because they are mostly permanently lost to subduction. We present new observations of deformation products formed by large fossil subduction earthquakes in Alpine Corsica. These were formed by a few very large and numerous small intermediate-depth earthquakes along the exhumed palaeo-Moho in the Alpine Liguro-Piemontese basin, which together with the 'schistes lustrés' complex experienced blueschist- to lawsonite-eclogite facies metamorphism during the Alpine subduction. The abrupt release of energy resulted in localized shear heating that completely melted both gabbro and peridotite along the Moho. The large volumes of melt generated by at most a few very large earthquakes along the Moho can be studied in the fault- and injection-vein breccia complex preserved in a segment along the Moho fault. The energy required for wholesale melting of a large volume of peridotite per m² of fault plane, combined with estimates of stress drops, shows that a few large earthquakes took place along the Moho of the subducting plate. Since these fault rocks represent intra-plate seismicity, we suggest they formed along the lower seismogenic zone, by analogy with present-day subduction. As demonstrated in previous work (detailed petrography and EBSD) by our research team, there is no evidence for prograde dehydration reactions leading up to the co-seismic slip events. Instead we show that local crystal-plastic deformation in olivine and shear heating were more significant for the runaway co-seismic failure than solid-state dehydration-reaction weakening. We therefore disregard dehydration embrittlement as a weakening mechanism for these events, and suggest that shear heating may be the most important weakening mechanism for intermediate-depth earthquakes.

  4. Coseismic and postseismic wave velocity changes caused by large crustal earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Wegler, Ulrich; Shiomi, Katsuhiko; Nakahara, Hisashi

    2014-05-01

    Using passive image interferometry (PII), we analyzed coseismic and postseismic changes of seismic wave velocities caused by the following earthquakes, which occurred in Japan between 2004 and 2011: the 2005 Fukuoka (Mw 6.6), 2007 Noto Hantō (Mw 6.6) and 2008 Iwate-Miyagi Nairiku (Mw 6.9) earthquakes, three earthquakes in Niigata Prefecture (2004 Mid-Niigata, Mw 6.8; 2007 Chūetsu Offshore, Mw 6.6; 2011 Nagano/Niigata, Mw 6.2), as well as the 2011 Tohoku earthquake (Mw 9.0) in the four regions of the other earthquakes. The time series of ambient noise used for the different earthquakes spanned from at least half a year before the respective earthquake until three months after the Tohoku earthquake. Cross-correlations and single-station cross-correlations of several years of ambient seismic noise, recorded mainly by Hi-net sensors in the areas surrounding the respective earthquakes, are calculated in different frequency ranges between 0.125 and 4.0 Hz. Between 10 and 20 seismometers were used in each area, and the cross-correlations are calculated for all possible station pairs. Using a simple tomography algorithm, the resulting velocity variations can be reprojected onto the actual station locations. The cross-correlation and single-station cross-correlation techniques give compatible results, the former being more reliable for frequencies below 0.5 Hz and the latter for higher frequencies. Our analysis yields significant coseismic velocity drops for all analyzed earthquakes, which are strongest close to the fault zones and exceed 1% for some stations. The coseismic velocity drops are larger at higher frequencies and recover on a time scale of several years, but do not recover completely during our observation time. Velocity drops are also visible in all areas at the time of the Tohoku earthquake. Furthermore, we measured seasonal velocity variations of the order of 0.1% in all areas which are, at least for
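
    Velocity changes in passive image interferometry are typically measured with the stretching technique: find the relative dilation of the lapse-time axis that best maps the current correlation function onto a reference. A minimal sketch on synthetic data, illustrating the generic method rather than the authors' implementation:

        import numpy as np

        def stretching_dvv(ref, cur, t, eps_grid=np.linspace(-0.01, 0.01, 201)):
            """Grid-search the stretch factor eps = dv/v that best aligns cur
            with ref; a velocity drop (dv/v < 0) delays arrivals in cur."""
            best_eps, best_cc = 0.0, -np.inf
            for eps in eps_grid:
                # interpret cur on a stretched time axis and resample onto t
                cur_s = np.interp(t, t * (1.0 + eps), cur)
                cc = np.corrcoef(ref, cur_s)[0, 1]
                if cc > best_cc:
                    best_eps, best_cc = eps, cc
            return best_eps, best_cc

        t = np.linspace(0.0, 20.0, 2001)
        ref = np.sin(2 * np.pi * t) * np.exp(-t / 10.0)   # toy correlation function
        cur = np.interp(t * (1.0 - 0.005), t, ref)        # simulate dv/v = -0.5%
        print(stretching_dvv(ref, cur, t))                # ~(-0.005, ~1.0)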

  5. Introduction and Overview: Counseling Psychologists' Roles, Training, and Research Contributions to Large-Scale Disasters

    ERIC Educational Resources Information Center

    Jacobs, Sue C.; Leach, Mark M.; Gerstein, Lawrence H.

    2011-01-01

    Counseling psychologists have responded to many disasters, including the Haiti earthquake, the 2001 terrorist attacks in the United States, and Hurricane Katrina. However, as a profession, their responses have been localized and nonsystematic. In this first of four articles in this contribution, "Counseling Psychology and Large-Scale Disasters,…

  6. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  7. Observations of large earthquakes in the Mexican subduction zone over 110 years

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, Vala; Krishna Singh, Shri; Martínez-Peláez, Liliana; Garza-Girón, Ricardo; Lund, Björn; Ji, Chen

    2016-04-01

    Fault slip during an earthquake is observed to be highly heterogeneous, with areas of large slip interspersed with areas of smaller or even no slip. The cause of the heterogeneity is debated. One hypothesis is that the frictional properties on the fault are heterogeneous. The parts of the rupture surface that have large slip during earthquakes are coupled more strongly, whereas the areas in between and around them creep continuously or episodically. The continuously or episodically creeping areas can partly release strain energy through aseismic slip during the interseismic period, resulting in relatively lower prestress than on the coupled areas. This would lead to subsequent earthquakes having large slip in the same place, or persistent asperities. A second hypothesis is that, in the absence of creeping sections, the prestress is governed mainly by the cumulative stress change associated with previous earthquakes. Assuming homogeneous frictional properties on the fault, a larger prestress results in larger slip, i.e. the next earthquake may have large slip where there was little or no slip in the previous earthquake, which translates to non-persistent asperities. The study of earthquake cycles is hampered by the short time period for which the high-quality broadband seismological and accelerographic records needed for detailed studies of slip distributions are available. The earthquake cycle in the Mexican subduction zone is relatively short, with about 30 years between large events in many places. We are therefore entering a period for which we have good records for two subsequent events occurring in the same segment of the subduction zone. In this study we compare seismograms recorded either on the Wiechert seismograph or on a modern broadband seismometer located in Uppsala, Sweden, for subsequent earthquakes in the Mexican subduction zone rupturing the same patch. The Wiechert seismograph is unique in the sense that it recorded continuously for more than 80 years

  8. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  9. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than the usual order n² floating point operations.
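
    In practice ARPACK is usually reached through wrappers; for instance, SciPy's sparse symmetric eigensolver calls it under the hood. A small usage sketch (the matrix is an arbitrary example, not from the package documentation):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import eigsh   # ARPACK's symmetric (Lanczos) driver

        # A large sparse symmetric matrix: the 1-D discrete Laplacian.
        n = 100_000
        A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

        # Six eigenvalues closest to 0, computed in shift-invert mode.
        vals, vecs = eigsh(A, k=6, sigma=0.0)
        print(np.sort(vals))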

  10. Precursory measure of interoccurrence time associated with large earthquakes in the Burridge-Knopoff model

    SciTech Connect

    Hasumi, Tomohiro

    2008-11-13

    We studied the statistical properties of interoccurrence times, i.e., the time intervals between successive earthquakes, in the two-dimensional (2D) Burridge-Knopoff (BK) model, and found that these statistics can be classified into three types: the subcritical state, the critical state, and the supercritical state. The survivor function of interoccurrence time is well fitted by a Zipf-Mandelbrot-type power law in the subcritical regime. However, the fitting accuracy of this distribution tends to worsen as the system changes from the subcritical state to the supercritical state. Because a fault system in nature changes from the subcritical state to the supercritical state prior to a forthcoming large earthquake, we suggest that the fitting accuracy of the survivor distribution can serve as another precursory measure associated with large earthquakes.
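
    A Zipf-Mandelbrot-type survivor function P(T > t) = a/(t + c)^s can be fitted to empirical waiting times along the following lines; the synthetic sample below merely stands in for BK-model output, and the fit is done in log space for stability.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_survivor(t, a, c, s):
            """log of the Zipf-Mandelbrot-type survivor function a / (t + c)^s."""
            return np.log(a) - s * np.log(t + c)

        rng = np.random.default_rng(3)
        waiting = rng.pareto(1.5, size=2000)         # synthetic waiting times
        t = np.sort(waiting)
        surv = 1.0 - np.arange(1, t.size + 1) / t.size

        # drop the last point, where the empirical survivor function is zero
        popt, _ = curve_fit(log_survivor, t[:-1], np.log(surv[:-1]),
                            p0=(1.0, 1.0, 1.5), bounds=([1e-6] * 3, [np.inf] * 3))
        print("a, c, s =", popt)                     # s should come out near 1.5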

  11. Basin-scale transport of heat and fluid induced by earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Wang, Lee-Ping; Manga, Michael; Wang, Chung-Ho; Chen, Chieh-Hung

    2013-08-01

    Large earthquakes are known to cause widespread changes in groundwater flow, yet their relation to subsurface transport is unknown. Here we report systematic changes in groundwater temperature after the 1999 Mw7.6 Chi-Chi earthquake in central Taiwan, documented by a dense network of monitoring wells over a large (17,000 km2) alluvial fan near the epicenter. Analysis of the data reveals a hitherto unknown system of earthquake-triggered basin-wide groundwater flow, which scavenges geothermal heat from depth, changing groundwater temperature across the basin. The newly identified earthquake-triggered groundwater flow may have significant implications for postseismic groundwater supply and quality, contaminant transport, underground repository safety, and hydrocarbon production.

  12. Relay chatter and operator response after a large earthquake: An improved PRA methodology with case studies

    SciTech Connect

    Budnitz, R.J.; Lambert, H.E.; Hill, E.E.

    1987-08-01

    The purpose of this project has been to develop and demonstrate improvements in the PRA methodology used for analyzing earthquake-induced accidents at nuclear power reactors. Specifically, the project addresses methodological weaknesses in the PRA systems analysis used for studying post-earthquake relay chatter and for quantifying human response under high stress. An improved PRA methodology for relay-chatter analysis is developed, and its use is demonstrated through analysis of the Zion-1 and LaSalle-2 reactors as case studies. This demonstration analysis is intended to show that the methodology can be applied in actual cases; the numerical values of core-damage frequency are not realistic. The analysis relies on SSMRP-based methodologies and data bases. For both Zion-1 and LaSalle-2, assuming that loss of offsite power (LOSP) occurs after a large earthquake and that there are no operator recovery actions, the analysis finds a very large number of combinations (Boolean minimal cut sets) involving chatter of three or four relays and/or pressure switch contacts. The number of min-cut-set combinations is so large that there is a very high likelihood (of the order of unity) that at least one combination will occur after earthquake-caused LOSP. This conclusion depends in detail on the fragility curves and response assumptions used for chatter. Core-damage frequencies are calculated, but they are probably pessimistic because no credit is given for operator recovery. The project has also developed an improved PRA methodology for quantifying operator error under high-stress conditions such as after a large earthquake. Single-operator and multiple-operator error rates are developed, and a case study involving an 8-step procedure (establishing feed-and-bleed in a PWR after an earthquake-initiated accident) is used to demonstrate the methodology.
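
    The "order of unity" conclusion follows from simple probability arithmetic on the minimal cut sets. A sketch with placeholder numbers (the actual chatter probabilities come from fragility curves, and treating cut sets as independent overstates precision, but the scaling is the point):

        import numpy as np

        def p_at_least_one_cutset(p_relay=0.3, k=3, n_cutsets=10_000):
            """Probability that at least one of n_cutsets independent minimal
            cut sets occurs, each requiring chatter of k relays with per-relay
            chatter probability p_relay."""
            p_cutset = p_relay ** k                  # one specific k-relay combination
            return 1.0 - (1.0 - p_cutset) ** n_cutsets

        print(f"{p_at_least_one_cutset():.3f}")      # ~1.0 once cut sets number in the thousands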

  13. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280h{sup -1} Mpc with 10h{sup -1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-{alpha} forest.

  14. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  15. Complex Nucleation Process of Large North Chile Earthquakes, Implications for Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Ruiz, S.; Meneses, G.; Sobiesiak, M.; Madariaga, R. I.

    2014-12-01

    We studied the nucleation process of northern Chile events, including the large earthquakes of Tocopilla 2007 Mw 7.8 and Iquique 2014 Mw 8.1, as well as the background seismicity recorded from 2011 to 2013 by the ILN temporary network and the IPOC and CSN permanent networks. We built our catalogue of 393 events starting from the CSN catalogue, which is complete for Mw > 3.0 in northern Chile. We re-located and computed the moment magnitude of each event. We also computed early warning (EW) parameters - Pd, Pv, τc and IV2 - for each event, including 13 earthquakes of Mw > 6.0 that occurred between 2007 and 2012, and part of the seismicity from the March-April 2014 period. We find that Pd, Pv and IV2 are good estimators of magnitude for interplate thrust and intraplate intermediate-depth events with Mw between 4.0 and 6.0. However, the larger-magnitude events show a saturation of the EW parameters. The Tocopilla 2007 and Iquique 2014 earthquake sequences were studied in detail. Almost all events with Mw > 6.0 present precursory signals, such that the largest amplitudes occur several seconds after the first P wave arrival. The recent Mw 8.1 Iquique 2014 earthquake was preceded by low-amplitude P waves for 20 s before the main asperity broke. Magnitude estimation can improve if longer P-wave windows are used in estimating the EW parameters. There was, however, a practical limit during the Iquique earthquake, because the first S waves arrived before the P waves from the main rupture. The 4 s P-wave Pd parameter estimated Mw 7.1 for the Mw 8.1 Iquique 2014 earthquake and Mw 7.5 for the Mw 7.8 Tocopilla 2007 earthquake.
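
    Two of the EW parameters used here, Pd and τc, are standard single-station measures computed from the first seconds of P-wave displacement. A generic sketch on a synthetic trace (not the authors' implementation; Pv and IV2 are omitted, and real use requires instrument-corrected records):

        import numpy as np

        def ew_parameters(disp, dt, window=4.0):
            """disp: displacement trace starting at the P arrival (m); dt: sample
            interval (s). Returns (Pd in m, tau_c in s) over the first `window` s,
            with tau_c = 2*pi / sqrt(int(v^2) / int(u^2))."""
            n = int(window / dt)
            u = disp[:n]
            v = np.gradient(u, dt)                    # velocity from displacement
            pd = np.max(np.abs(u))
            r = np.trapz(v**2, dx=dt) / np.trapz(u**2, dx=dt)
            tau_c = 2.0 * np.pi / np.sqrt(r)
            return pd, tau_c

        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        disp = 1e-3 * np.sin(2 * np.pi * 0.5 * t) * (1 - np.exp(-t))   # toy P wave
        print(ew_parameters(disp, dt))                # Pd ~ 1e-3 m, tau_c ~ 2 s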

  16. Rapid Reoccurrence of Large Earthquakes due to Depth Segmentation of the Seismogenic Crust

    NASA Astrophysics Data System (ADS)

    Elliott, J. R.; Parsons, B. E.; Jackson, J. A.; Shan, X.; Sloan, R.; Walker, R. T.

    2010-12-01

    The Mw 6.3 November 2008 and Mw 6.3 August 2009 thrust-fault earthquakes occurred in almost the same location within the North Qaidam thrust system, south of the Qilian Shan/Nan Shan thrust belt and on the northern margin of the Qaidam basin, NE Tibet. This fold-and-thrust belt is the result of the ongoing northward convergence of India with Eurasia, with a NE-SW convergence rate across it of approximately 10 mm/yr. We measured the coseismic displacements of each earthquake by constructing radar interferograms from SAR ENVISAT acquisitions spanning each event separately. For each earthquake, we utilised two look directions on ascending and descending satellite passes, and derived fault and slip models using both look directions simultaneously. The models suggest that the two earthquakes occurred on a near-coplanar fault that was segmented in depth, resulting in the arrested rupture of the initial, deeper segment of the fault, and only allowing the failure of the upper portion of the crust ten months later. The depth at which the segmentation occurs approximately coincides with the intersection of the down-dip projection of a range-bounding thrust fault. This suggests that where an interacting fault geometry or lithological properties allow only part of the seismogenic layer to rupture, the occurrence of a large earthquake does not necessarily result in a reduction of the immediate seismic hazard. Such a geometry may have prevented the failure of the lower part of the seismogenic layer during the 2003 Bam earthquake (Jackson et al., 2006), representing a continuing seismic hazard despite the occurrence of the earthquake.

  17. Active structural growth in central Taiwan in relationship to large earthquakes and pore-fluid pressures

    NASA Astrophysics Data System (ADS)

    Yue, Li-Fan

    Central Taiwan is subject to substantial long-term earthquake risk, with a population of five million and two disastrous earthquakes in the last century, the 1935 ML=7.1 Tuntzuchiao and 1999 Mw=7.6 Chi-Chi earthquakes. Rich data from these earthquakes, combined with substantial surface and subsurface data accumulated from petroleum exploration, form the basis for these studies of the growth of structures in successive large earthquakes and their relationships to pore-fluid pressures. Chapter 1 documents the structural context of the bedding-parallel Chelungpu thrust that slipped in the Chi-Chi earthquake, showing for this richly instrumented earthquake the close geometric relationships between the complex 3D fault shape and the heterogeneous coseismic displacements constrained by geodesy and seismology. Chapter 2 examines the accumulation of deformation over successive large earthquakes by studying the deformation of flights of fluvial terraces deposited over the Chelungpu and adjacent Changhua thrusts, showing the deformation on a timescale of tens of thousands of years. Furthermore, these two structures, involving the same stratigraphic sequence, show fundamentally different kinematics of deformation, with contrasting hanging-wall structural geometries. The heights and shapes of the deformed terraces allowed testing of existing theories of fault-related folding. Terrace dating constrains a combined shortening rate of 37 mm/yr, which is 45% of the total Taiwan plate-tectonic rate, and indicates a substantial earthquake risk for the Changhua thrust. Chapter 3 addresses the long-standing problem of the mechanics of long-thin thrust sheets, such as the Chelungpu and Changhua thrusts in western Taiwan, by presenting a natural test of the classic Hubbert-Rubey hypothesis, which argues that ambient excess pore-fluid pressure substantially reduces the effective fault friction, allowing the thrusts to move. Pore-fluid pressure data obtained from 76 wells

  18. Stochastic modelling of a large subduction interface earthquake in Wellington, New Zealand

    NASA Astrophysics Data System (ADS)

    Francois-Holden, C.; Zhao, J.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked, west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. A potential cause of significant earthquake loss in the Wellington region is a large-magnitude (perhaps 8+) "subduction earthquake" on the Australia-Pacific plate interface, which lies ~23 km beneath Wellington City. "It's Our Fault" is a project involving a comprehensive study of Wellington's earthquake risk. Its objective is to position Wellington City to become more resilient, through an encompassing study of the likelihood of large earthquakes and of the effects and impacts of these earthquakes on humans and the built environment. As part of the "It's Our Fault" project, we are working on estimating ground motions from potential large plate-boundary earthquakes. We present the latest results on ground motion simulations in terms of response spectra and acceleration time histories. First we characterise the potential interface rupture area based on previous geodetically derived estimates of interface slip deficit. Then we consider a suitable range of source parameters, including various rupture areas, moment magnitudes, stress drops, slip distributions and rupture propagation directions. Our study also includes simulations of large historical subduction events from around the world translated into the New Zealand subduction context, such as the 2003 M8.3 Tokachi-Oki, Japan, earthquake and the M8.8 2010 Chile earthquake. To model synthetic seismograms and the corresponding response spectra we employed the EXSIM code developed by Atkinson et al. (2009), with a regional attenuation model based on the 3D attenuation model for the lower North Island developed by Eberhart-Phillips et al. (2005). The resulting rupture scenarios all produce long-duration shaking, and peak ground

  19. Earthquake.

    PubMed

    Cowen, A R; Denney, J P

    1994-04-01

    On January 25, 1 week after the most devastating earthquake in Los Angeles history, the Southern California Hospital Council released the following status report: 928 patients evacuated from damaged hospitals. 805 beds available (136 critical, 669 noncritical). 7,757 patients treated/released from EDs. 1,496 patients treated/admitted to hospitals. 61 dead. 9,309 casualties. Where do we go from here? We are still waiting for the "big one." We'll do our best to be ready when Mother Nature shakes, rattles and rolls. The efforts of Los Angeles City Fire Chief Donald O. Manning cannot be overstated. He maintained department command of this major disaster and is directly responsible for implementing the fire department's Disaster Preparedness Division in 1987. Through the chief's leadership and ability to forecast consequences, the city of Los Angeles was better prepared than ever to cope with this horrendous earthquake. We also pay tribute to the men and women who are out there each day, where "the rubber meets the road." PMID:10133439

  20. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  1. One-Way Markov Process Approach to Repeat Times of Large Earthquakes in Faults

    NASA Astrophysics Data System (ADS)

    Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.

    2012-11-01

    One of the uses of Markov chains is the simulation of the seismic cycle in a fault, i.e. as a renewal model for the repetition of its characteristic earthquakes. This representation is consistent with Reid's elastic rebound theory. We propose a general one-way Markovian model in which the waiting-time distribution, its first moments, its coefficient of variation, and its error and alarm functions (related to the predictability of the model) can be obtained analytically. The fact that in any one-way Markov cycle the coefficient of variation of the corresponding distribution of cycle lengths is always lower than one concurs with observations of large earthquakes on seismic faults. The waiting-time distribution of one of the limits of this model is the negative binomial distribution; as an application, we use it to fit the Parkfield earthquake series on the San Andreas fault, California.
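
    Fitting a negative binomial distribution to repeat times reduces to a small maximum-likelihood problem. A sketch, with interval data (years between the 1857, 1881, 1901, 1922, 1934, 1966 and 2004 Parkfield mainshocks) that are approximate and for illustration only:

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        intervals = np.array([24, 20, 21, 12, 32, 38])   # years, approximate

        def neg_loglike(params):
            """Negative log-likelihood of a negative binomial with
            shape r and success probability p (mean = r*(1-p)/p)."""
            r, p = params
            if r <= 0 or not 0 < p < 1:
                return np.inf
            return -np.sum(stats.nbinom.logpmf(intervals, r, p))

        res = minimize(neg_loglike, x0=[10.0, 0.3], method="Nelder-Mead")
        r_hat, p_hat = res.x
        mean = r_hat * (1 - p_hat) / p_hat
        print(f"r = {r_hat:.2f}, p = {p_hat:.2f}, mean repeat time = {mean:.1f} yr")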

  2. Detection of large prehistoric earthquakes in the pacific northwest by microfossil analysis.

    PubMed

    Mathewes, R W; Clague, J J

    1994-04-29

    Geologic and palynological evidence for rapid sea level change approximately 3400 and approximately 2000 carbon-14 years ago (3600 and 1900 calendar years ago) has been found at sites up to 110 kilometers apart in southwestern British Columbia. Submergence on southern Vancouver Island and slight emergence on the mainland during the older event are consistent with a great (magnitude M ≥ 8) earthquake on the Cascadia subduction zone. The younger event is characterized by submergence throughout the region and may also record a plate-boundary earthquake or a very large crustal or intraplate earthquake. Microfossil analysis can detect small amounts of coseismic uplift and subsidence that leave little or no lithostratigraphic signature. PMID:17737954

  3. Large Historical Earthquakes and Tsunami Hazards in the Western Mediterranean: Source Characteristics and Modelling

    NASA Astrophysics Data System (ADS)

    Harbi, Assia; Meghraoui, Mustapha; Belabbes, Samir; Maouche, Said

    2010-05-01

    The western Mediterranean region was the site of numerous large earthquakes in the past. Most of these earthquakes are located at the east-west-trending Africa-Eurasia plate boundary and along the coastline of North Africa. The most recent recorded tsunamigenic earthquake occurred in 2003 at Zemmouri-Boumerdes (Mw 6.8) and generated a ~2-m-high tsunami wave. The destructive wave affected the Balearic Islands and Almeria in southern Spain and Carloforte in southern Sardinia (Italy). The earthquake provided a unique opportunity to gather instrumental records of seismic waves and tide gauges in the western Mediterranean. A database that includes a historical catalogue of main events, seismic sources and related fault parameters was prepared in order to assess the tsunami hazard of this region. In addition to the analysis of the 2003 records, we study the 1790 Oran and 1856 Jijel historical tsunamigenic earthquakes (Io = IX and X, respectively), which provide detailed observations of the heights and extent of past tsunamis and of damage in coastal zones. We modelled wave propagation using the NAMI-DANCE code and tested different fault sources against synthetic tide gauges. We observe that the characteristics of the seismic sources control the size and directivity of tsunami wave propagation on both the northern and southern coasts of the western Mediterranean.

  4. Particle precipitation prior to large earthquakes of both the Sumatra and Philippine Regions: A statistical analysis

    NASA Astrophysics Data System (ADS)

    Fidani, Cristiano

    2015-12-01

    A study of the statistical correlation between low L-shell electrons precipitating into the atmosphere and strong earthquakes is presented. More than 11 years of data from the Medium Energy Protons Electrons Detector on the NOAA-15 Sun-synchronous polar-orbiting satellite were analysed. Electron fluxes were analysed using a set of adiabatic coordinates, and significant electron counting-rate fluctuations were identified during geomagnetically quiet periods. Electron counting rates were compared to earthquakes by defining a seismic-event L-shell, obtained by radially projecting the epicentre positions to a given altitude toward the zenith. Counting rates in each satellite semi-orbit were grouped together with the strong seismic events whose L-shell coordinates lay close to them. NOAA-15 electron data from July 1998 to December 2011 were compared with nearly 1800 earthquakes of magnitude 6 or larger occurring worldwide. When considering 30-100 keV precipitating electrons detected by the vertical NOAA-15 telescope and earthquake epicentre projections at altitudes greater than 1300 km, a significant correlation appeared, with electron precipitation detected 2-3 h prior to large events in the Sumatra and Philippine regions. This is in physical agreement with the different correlation times obtained in past studies that considered particles of higher energies. The discussion of satellite orbits and detectors is useful for future satellite missions aimed at earthquake mitigation.

  5. Basin-scale transport of heat and fluid induced by earthquakes

    NASA Astrophysics Data System (ADS)

    Wang, C.; Manga, M.; Wang, L.; Chen, C.

    2013-12-01

    Large earthquakes are known to cause widespread changes in groundwater flow at distances of thousands of kilometers from the epicenter, yet their relation to subsurface transport is unknown. Since groundwater flow is effective in transporting subsurface heat, studies of earthquake-induced changes in groundwater temperature may be useful for better understanding earthquake-induced heat transport. Here we report systematic changes in groundwater temperature after the 1999 Mw 7.6 Chi-Chi earthquake in central Taiwan, recorded by a dense network of monitoring wells over a large (1,800 km2) alluvial fan near the epicenter. The data document a clear trend from negative changes (temperature decrease) near the upper rim of the fan, close to the ruptured fault, to positive changes (temperature increase) near the coast. Analysis of the data reveals a hitherto unknown system of earthquake-triggered, basin-wide groundwater flow, which scavenges geothermal heat from depth, changing groundwater temperature across the basin. The newly identified earthquake-triggered groundwater flow may have significant implications for post-seismic groundwater supply and quality, contaminant transport, underground repository safety, and hydrocarbon production.

  6. Evidence for earthquake triggering of large landslides in coastal Oregon, USA

    USGS Publications Warehouse

    Schulz, W.H.; Galloway, S.L.; Higgins, J.D.

    2012-01-01

    Landslides are ubiquitous along the Oregon coast. Many are large, deep slides in sedimentary rock and are dormant or active only during the rainy season. Morphology, observed movement rates, and total movement suggest that many are at least several hundreds of years old. The offshore Cascadia subduction zone produces great earthquakes every 300–500 years that generate tsunami that inundate the coast within minutes. Many slides and slide-prone areas underlie tsunami evacuation and emergency response routes. We evaluated the likelihood of existing and future large rockslides being triggered by pore-water pressure increase or earthquake-induced ground motion using field observations and modeling of three typical slides. Monitoring for 2–9 years indicated that the rockslides reactivate when pore pressures exceed readily identifiable levels. Measurements of total movement and observed movement rates suggest that two of the rockslides are 296–336 years old (the third could not be dated). The most recent great Cascadia earthquake was M 9.0 and occurred during January 1700, while regional climatological conditions have been stable for at least the past 600 years. Hence, the estimated ages of the slides support earthquake ground motion as their triggering mechanism. Limit-equilibrium slope-stability modeling suggests that increased pore-water pressures could not trigger formation of the observed slides, even when accompanied by progressive strength loss. Modeling suggests that ground accelerations comparable to those recorded at geologically similar sites during the M 9.0, 11 March 2011 Japan Trench subduction-zone earthquake would trigger formation of the rockslides. Displacement modeling following the Newmark approach suggests that the rockslides would move only centimeters upon coseismic formation; however, coseismic reactivation of existing rockslides would involve meters of displacement. Our findings provide better understanding of the dynamic coastal bluff
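
    The displacement modeling named above follows the Newmark rigid-block approach: sliding occurs only while ground acceleration exceeds the slope's yield (critical) acceleration, and the excess is integrated twice. A self-contained sketch with an assumed synthetic record and yield acceleration (neither taken from the paper):

```python
import numpy as np

def newmark_displacement(acc, dt, a_yield):
    """Rigid-block Newmark integration: the block accelerates relative to
    the ground only while ground acceleration exceeds the yield value;
    sliding continues until the relative velocity returns to zero."""
    v = d = 0.0
    for a in acc:
        excess = a - a_yield
        if v > 0.0 or excess > 0.0:   # sliding, or sliding initiates
            v = max(v + excess * dt, 0.0)
            d += v * dt
    return d

# Synthetic 20-s, 1-Hz, 0.3-g record and a 0.1-g yield acceleration
# (both assumed for illustration, not values from the study).
g, dt = 9.81, 0.01
t = np.arange(0.0, 20.0, dt)
acc = 0.3 * g * np.sin(2.0 * np.pi * 1.0 * t)
print(f"Newmark displacement ~ {newmark_displacement(acc, dt, 0.1 * g):.2f} m")
```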

  7. “PLAFKER RULE OF THUMB” RELOADED: EXPERIMENTAL INSIGHTS INTO THE SCALING AND VARIABILITY OF LOCAL TSUNAMIS TRIGGERED BY GREAT SUBDUCTION MEGATHRUST EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Rosenau, M.; Nerlich, R.; Brune, S.; Oncken, O.

    2009-12-01

    …along accretionary margins. Three out of the top-five tsunami hotspots we identify had giant earthquakes in the last decades (Chile 1960, Alaska 1964, Sumatra-Andaman 2004) and one (Sumatra-Mentawai) started in 2005 releasing strain in a possibly moderate mode of sequential large earthquakes. This leaves Cascadia as the major active tsunami hotspot in the focus of tsunami hazard assessment. [Figure caption: Visualization of preliminary versions of the experimentally derived scaling laws for peak nearshore tsunami height (PNTH) as functions of forearc slope, peak earthquake slip (left panel), and moment magnitude (right panel). Wave breaking is not yet considered, which renders the extreme peaks > 20 m unrealistic.]

  8. Long-period ocean-bottom motions in the source areas of large subduction earthquakes.

    PubMed

    Nakamura, Takeshi; Takenaka, Hiroshi; Okamoto, Taro; Ohori, Michihiro; Tsuboi, Seiji

    2015-01-01

    Long-period ground motions in plain and basin areas on land can cause large-scale, severe damage to structures and buildings and have been widely investigated for disaster prevention and mitigation. However, such motions in ocean-bottom areas are poorly studied because of their relative insignificance in uninhabited areas and the lack of ocean-bottom strong-motion data. Here, we report on evidence for the development of long-period (10-20 s) motions using deep ocean-bottom data. The waveforms and spectrograms demonstrate prolonged and amplified motions that are inconsistent with attenuation patterns of ground motions on land. Simulated waveforms reproducing observed ocean-bottom data demonstrate substantial contributions of thick low-velocity sediment layers to development of these motions. This development, which could affect magnitude estimates and finite fault slip modelling because of its critical period ranges on their estimations, may be common in the source areas of subduction earthquakes where thick, low-velocity sediment layers are present. PMID:26617193

  10. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared in a decade marked by a rapid expansion of funds and manpower in the first half and an almost equally rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  11. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  12. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  13. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  14. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  15. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  16. How Large can Mexican Subduction Earthquakes be? Evidence of a Very Large Event in 1787 (M~8.5)

    NASA Astrophysics Data System (ADS)

    Suarez, G.

    2007-05-01

    A sequence of very strong earthquakes occurred from 28 March to 18 April, 1787. The first earthquake, on 28 March, appears to be the largest of the sequence, followed by three strong events on 29 and 30 March and 3 April; strong aftershocks continued to be reported until 18 April. The event of 28 March was strongly felt and caused damage in Mexico City, where several buildings were reported to have suffered. The strongest effects, however, were observed on the southeastern coast of Guerrero and Oaxaca. Intensities greater than 8 (MMI) were observed along the coast over a distance of about 400 km. The towns of Ometepec, Jamiltepec and Tehuantepec reported strong damage to local churches and other apparently well-constructed buildings. In contrast to the low intensities observed during the coastal Oaxaca earthquakes of 1965, 1968 and 1978, Oaxaca City reported damage equivalent to intensity 8 to 9 on 28 March, 1787. An unusual effect of this earthquake on the Mexican subduction zone was the presence of a very large tsunami. Three different sources report that in the area known as the Barra de Alotengo (16.2N, 98.2W) the sea retreated for a distance of about one Spanish league (4.1 km). A large wave came back and invaded land for approximately 1.5 leagues (6.2 km). Several local ranchers were swept away by the incoming wave. Along the coast near the town of Tehuantepec, about 400 km to the southeast of Alotengo, a tsunami was also reported to have stranded fish and shellfish inland; in this case no description of the distance penetrated by the tsunami is reported. It is also described that in Acapulco, some 200 km to the northwest of Alotengo, a strong wave was observed and that the sea remained agitated for a whole day. Assuming that the subduction zone ruptured from somewhere near Alotengo to the coast of Tehuantepec, the resulting fault length is about 400 to 450 km. This large fault rupture contrasts with the seismic cycle of the Oaxaca coast observed during this century where

  17. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide

    USGS Publications Warehouse

    Pollitz, Fred F.; Stein, Ross S.; Sevilgen, Volkan; Burgmann, Roland

    2012-01-01

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M≥5.5 have not been identified, with the lone exception of an M=6.9 quake remotely triggered by the surface waves from an M=6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M≥5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M≥7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M≥8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M≥5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure.
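
    The dynamic shear strain threshold quoted above can be related to observable ground motion through the standard plane-wave approximation, strain ≈ peak ground velocity divided by the surface-wave phase speed. The numbers below are illustrative assumptions, not values from the paper:

```python
# Plane-wave approximation: dynamic shear strain ~ peak ground velocity
# divided by the surface-wave phase speed. Numbers are illustrative.
pgv = 1.0e-3         # peak ground velocity of the passing Love wave, m/s
c_love = 4500.0      # Love-wave phase speed, m/s (typical upper-mantle value)
strain = pgv / c_love
print(f"dynamic shear strain ~ {strain:.1e}")   # ~2e-7, above the 1e-7 threshold
```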

  18. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support to the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
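
    A sketch of how a time-dependent renewal forecast of this kind can be computed. The Brownian passage-time (BPT) distribution and the 2010 evaluation epoch are assumptions (the abstract does not state which renewal distribution or epoch it used), so the result only approximates, and need not reproduce, the quoted 29%:

```python
from math import erf, exp, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """Brownian passage-time CDF (Matthews et al., 2002) with mean mu and
    aperiodicity alpha."""
    r = sqrt(t / mu)
    return phi((r - 1.0 / r) / alpha) + exp(2.0 / alpha ** 2) * phi(-(r + 1.0 / r) / alpha)

mu, alpha = 161.0, 0.40        # mean RI and CV from the abstract
t_open = 2010.0 - 1868.0       # open interval since the 1868 rupture (assumed epoch)
dt = 30.0
p30 = (bpt_cdf(t_open + dt, mu, alpha) - bpt_cdf(t_open, mu, alpha)) / \
      (1.0 - bpt_cdf(t_open, mu, alpha))
print(f"30-yr conditional probability ~ {p30:.0%}")
```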

  19. The 11 April 2012 east Indian Ocean earthquake triggered large aftershocks worldwide.

    PubMed

    Pollitz, Fred F; Stein, Ross S; Sevilgen, Volkan; Bürgmann, Roland

    2012-10-11

    Large earthquakes trigger very small earthquakes globally during passage of the seismic waves and during the following several hours to days, but so far remote aftershocks of moment magnitude M ≥ 5.5 have not been identified, with the lone exception of an M = 6.9 quake remotely triggered by the surface waves from an M = 6.6 quake 4,800 kilometres away. The 2012 east Indian Ocean earthquake that had a moment magnitude of 8.6 is the largest strike-slip event ever recorded. Here we show that the rate of occurrence of remote M ≥ 5.5 earthquakes (>1,500 kilometres from the epicentre) increased nearly fivefold for six days after the 2012 event, and extended in magnitude to M ≤ 7. These global aftershocks were located along the four lobes of Love-wave radiation; all struck where the dynamic shear strain is calculated to exceed 10^-7 for at least 100 seconds during dynamic-wave passage. The other M ≥ 8.5 mainshocks during the past decade are thrusts; after these events, the global rate of occurrence of remote M ≥ 5.5 events increased by about one-third the rate following the 2012 shock and lasted for only two days, a weaker but possibly real increase. We suggest that the unprecedented delayed triggering power of the 2012 earthquake may have arisen because of its strike-slip source geometry or because the event struck at a time of an unusually low global earthquake rate, perhaps increasing the number of nucleation sites that were very close to failure. PMID:23023131

  20. Spectral Decay Characteristics in High Frequency Range of Observed Records from Crustal Large Earthquakes (Part 2)

    NASA Astrophysics Data System (ADS)

    Tsurugi, M.; Kagawa, T.; Irikura, K.

    2012-12-01

    Spectral decay characteristics in the high-frequency range of observed records from large crustal earthquakes that occurred in Japan are examined. Clarifying high-frequency spectral decay is very important for strong-ground-motion prediction for engineering purposes. The authors examined the high-frequency spectral decay characteristics of observed records for three events, the 2003 Miyagi-Ken Hokubu earthquake (Mw 6.1), the 2005 Fukuoka-Ken Seiho-oki earthquake (Mw 6.6), and the 2008 Iwate-Miyagi Nairiku earthquake (Mw 6.9), in a previous study [Tsurugi et al. (2010)]. The target earthquakes in this study are the two events shown below. *EQ No.1 Origin time: 2011/04/11 17:16, Location of hypocenter: East of Fukushima pref., Mj: 7.0, Mw: 6.6, Fault type: Normal fault *EQ No.2 Origin time: 2011/03/15 22:31, Location of hypocenter: East of Shizuoka pref., Mj: 6.4, Mw: 5.9, Fault type: Strike slip fault. The borehole data of each event are used in the analysis. A Butterworth-type high-cut filter with cut-off frequency fmax and power coefficient of high-frequency decay s [Boore (1983)] is assumed to express the high-cut frequency characteristics of ground motions. Four parameters, the seismic moment, corner frequency, cut-off frequency, and power coefficient of high-frequency decay, are estimated by comparing observed spectra at rock sites with theoretical spectra. The theoretical spectra are calculated from omega-squared source characteristics convolved with propagation-path effects and high-cut filter shapes. As a result, the fmax values of the records are estimated as 8.0 Hz for EQ No.1 and 8.5 Hz for EQ No.2. These values are almost the same as those of other large crustal earthquakes that occurred in Japan. The power coefficient s is estimated as 0.78 for EQ No.1 and 1.65 for EQ No.2. The value for EQ No.2 is notably larger than those of other large crustal earthquakes. It seems that the value of the power coefficient, s
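
    A sketch of the spectral model being fit: an omega-squared source spectrum times path attenuation times a Butterworth-type high-cut filter. The filter form below is one common parameterization consistent with a cut-off fmax and power coefficient s; the corner frequencies, moments, and path constants are assumptions for illustration, while fmax and s are taken from the abstract:

```python
import numpy as np

def acceleration_spectrum(f, m0, fc, fmax, s, q=200.0, r=50.0e3, beta=3500.0):
    """Omega-squared source spectrum times anelastic/geometric path terms
    times a Butterworth-type high-cut filter with cut-off fmax and power
    coefficient s. Constants (Q, distance, shear speed) are placeholders;
    only the spectral shape matters here."""
    source = m0 * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    path = np.exp(-np.pi * f * r / (q * beta)) / r
    high_cut = 1.0 / np.sqrt(1.0 + (f / fmax) ** (2.0 * s))
    return source * path * high_cut

f = np.logspace(-1.0, 1.5, 200)   # 0.1 to ~32 Hz
# fmax and s from the abstract; seismic moments roughly match Mw 6.6 and
# Mw 5.9, corner frequencies assumed.
eq1 = acceleration_spectrum(f, m0=1.0e19, fc=0.2, fmax=8.0, s=0.78)   # EQ No.1
eq2 = acceleration_spectrum(f, m0=1.0e18, fc=0.4, fmax=8.5, s=1.65)   # EQ No.2
ratio = np.interp(10.0, f, eq1) / np.interp(10.0, f, eq2)
print(f"EQ1/EQ2 spectral ratio at 10 Hz ~ {ratio:.1f}")
```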

  1. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using a selected host with the necessary posttranslational modifications is one of the key successes of modern biotechnology. This methodology allows the industrial production of proteins that are otherwise produced in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as simultaneous separation and concentration of the target protein are achieved, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass-transfer coefficients, the hydrodynamic parameters of hold-up, drop size, and velocity, back-mixing in the phases, and flooding performance, all required for column design, is also provided. PMID:11876297

  2. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  3. Bibliographical search for reliable seismic moments of large earthquakes during 1900-1979 to compute MW in the ISC-GEM Global Instrumental Reference Earthquake Catalogue

    NASA Astrophysics Data System (ADS)

    Lee, William H. K.; Engdahl, E. Robert

    2015-02-01

    Moment magnitude (MW) determinations from the online GCMT Catalogue of seismic moment tensor solutions (GCMT Catalog, 2011) have provided the bulk of MW values in the ISC-GEM Global Instrumental Reference Earthquake Catalogue (1900-2009) for almost all moderate-to-large earthquakes occurring after 1975. This paper describes an effort to determine MW of large earthquakes that occurred prior to the start of the digital seismograph era, based on credible assessments of thousands of seismic moment (M0) values published in the scientific literature by hundreds of individual authors. MW computed from the published M0 values (for a time period more than twice that of the digital era) are preferable to proxy MW values, especially for earthquakes with MW greater than about 8.5, for which MS is known to be underestimated or "saturated". After examining 1,123 papers, we compile a database of seismic moments and related information for 1,003 earthquakes with published M0 values, of which 967 were included in the ISC-GEM Catalogue. The remaining 36 earthquakes were not included in the Catalogue due to difficulties in their relocation because of inadequate arrival time information. However, 5 of these earthquakes with bibliographic M0 (and thus MW) are included in the Catalogue's Appendix. A search for reliable seismic moments was not successful for earthquakes prior to 1904. For each of the 967 earthquakes a "preferred" seismic moment value (if there is more than one) was selected and its uncertainty was estimated according to the data and method used. We used the IASPEI formula (IASPEI, 2005) to compute direct moment magnitudes (MW[M0]) based on the seismic moments (M0), and assigned their errors based on the uncertainties of M0. From 1900 to 1979, there are 129 great or near great earthquakes (MW ⩾ 7.75) - the bibliographic search provided direct MW values for 86 of these events (or 67%), the GCMT Catalog provided direct MW values for 8 events (or 6%), and the remaining 35
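
    The IASPEI formula referred to above is the standard moment-magnitude relation; a minimal implementation follows. The example value for the 1960 Chile earthquake is a commonly cited literature figure, used here only to illustrate the conversion:

```python
import math

def mw_from_m0(m0):
    """IASPEI (2005) standard formula, M0 in newton-metres:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Example with a commonly cited literature value for the 1960 Chile
# earthquake, M0 ~ 2e23 N*m:
print(f"Mw = {mw_from_m0(2.0e23):.1f}")   # ~9.5
```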

  4. Scaling of Seismic Moment with Recurrence Interval for Small Repeating Earthquakes Simulated on Rate-and-State Faults

    NASA Astrophysics Data System (ADS)

    Chen, T.; Lapusta, N.

    2006-12-01

    Observations suggest that the recurrence time T and seismic moment M0 of small repeating earthquakes in Parkfield scale as T ∝ M_0^{0.17} (Nadeau and Johnson, 1998). However, a simple conceptual model of these earthquakes as circular ruptures with stress drop independent of the seismic moment, and slip proportional to the recurrence time T, results in T ∝ M_0^{1/3}. Several explanations for this discrepancy have been proposed. Nadeau and Johnson (1998) suggested that stress drop depends on the seismic moment and is much higher for small events than typical estimates based on seismic spectra. Sammis and Rice (2001) modeled repeating earthquakes at a border between large locked and creeping patches to get T ∝ M_0^{1/6} and reasonable stress drops. Beeler et al. (2001) considered a fixed-area patch governed by a conceptual law that incorporated strain-hardening and showed that aseismic slip on the patch can explain the observed scaling relation. In this study, we provide an alternative physical basis, grounded in laboratory-derived rate and state friction laws, for the idea of Beeler et al. (2001) that much of the overall slip at the sites of small repeating earthquakes may be accumulated aseismically. We simulate repeating events in a 3D model of a strike-slip fault embedded in an elastic space and governed by rate and state friction laws. The fault has a small circular patch (2-20 m in diameter) with steady-state rate-weakening properties, with the rest of the fault governed by steady-state rate strengthening. The simulated fault segment is 40 m by 40 m, with periodic boundary conditions. We use values of rate and state parameters typical of laboratory experiments, with characteristic slip of order several microns. The model incorporates tectonic-like loading equivalent to a plate rate of 23 mm/year and all dynamic effects during unstable sliding. Our simulations use the 3D methodology of Liu and Lapusta (AGU, 2005) and fully resolve all aspects of
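
    The T ∝ M_0^{1/3} prediction of the simple conceptual model follows from the standard constant-stress-drop circular-crack relations; a sketch of the omitted step:

```latex
% Constant-stress-drop circular crack of radius r (standard relations):
\begin{align*}
  M_0 &= \tfrac{16}{7}\,\Delta\sigma\,r^{3}, &
  \bar{u} &= \frac{M_0}{\mu\pi r^{2}} \;\propto\; r \;\propto\; M_0^{1/3},\\
  T &= \frac{\bar{u}}{V_{\mathrm{pl}}} \;\propto\; M_0^{1/3}
  & &\text{(slip reloaded at the plate rate $V_{\mathrm{pl}}$)}.
\end{align*}
```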

  5. Finding the Shadows: Local Variations in the Stress Field due to Large Magnitude Earthquakes

    NASA Astrophysics Data System (ADS)

    Latimer, C.; Tiampo, K.; Rundle, J.

    2009-05-01

    Stress shadows, regions of static stress decrease associated with a large-magnitude earthquake, have typically been described through several characteristics or parameters such as location, duration, and size. These features can provide information about the physics of the earthquake itself, as static stress changes depend on the following parameters: the regional stress orientations, the coefficient of friction, and the depth of interest (King et al., 1994). Areas of stress decrease, associated with a decrease in seismicity rate, while potentially stable in nature, have been difficult to identify in regions with high rates of background seismicity (Felzer and Brodsky, 2005; Hardebeck et al., 1998). In order to obtain information about these stress shadows, we determine their characteristics using the Pattern Informatics (PI) method (Tiampo et al., 2002; Tiampo et al., 2006). The PI method is an objective measure of seismicity rate changes that can be used to locate areas of increase and/or decrease relative to the regional background rate. The latter defines the stress shadows for the earthquake of interest, as seismicity rate changes and stress changes are related (Dieterich et al., 1992; Tiampo et al., 2006). Using the data from the PI method, we invert for the parameters of the modeled half-space using a genetic algorithm inversion technique. Stress changes are calculated using Coulomb stress change theory (King et al., 1994), and the Coulomb 3 program is used as the forward model (Lin and Stein, 2004; Toda et al., 2005). Changes in the regional stress orientation (using PI results from before and after the earthquake) are of the greatest interest, as orientation is the main factor controlling the pattern of the Coulomb stress changes resulting from any given earthquake. Changes in the orientation can lead to conclusions about the local stress field around the earthquake and fault. The depth of interest and the coefficient of friction both
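
    The Coulomb stress change underlying this analysis reduces to a one-line formula. A minimal sketch with hypothetical stress changes; the effective friction value is an assumption of the kind such inversions solve for:

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (King et al.,
    1994): dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear-stress
    change resolved in the slip direction, d_sigma_n the normal-stress
    change (positive = unclamping), and mu' the effective friction
    coefficient."""
    return d_tau + mu_eff * d_sigma_n

# Hypothetical changes (MPa) inside a stress shadow: reduced shear stress
# and increased clamping give a negative dCFS.
print(f"dCFS = {coulomb_stress_change(-0.05, -0.02):+.3f} MPa")
```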

  6. The Scaling of the Slip Weakening Distance (Dc) With Final Slip During Dynamic Earthquake Rupture

    NASA Astrophysics Data System (ADS)

    Tinti, E.; Fukuyama, E.; Cocco, M.; Piatanesi, A.

    2005-12-01

    Several numerical approaches have recently been proposed to retrieve the evolution of dynamic traction during earthquake propagation on extended faults. Although many studies have shown that the shear traction evolution as a function of time and/or slip may be complex, they all reveal an evident dynamic weakening behavior during faulting. The main dynamic parameters describing traction evolution are the yield stress, the residual kinetic stress level, and the characteristic slip-weakening distance Dc. Recent investigations of real data yield estimates of large Dc values on the fault plane and a correlation between Dc and the final slip. In this study, we focus our attention on the characteristic slip-weakening distance Dc and on its variability over the fault plane. Different physical mechanisms have been proposed to explain the origin of Dc, some of which consider this parameter a scale-dependent quantity. We have computed the rupture history from several spontaneous dynamic models, imposing a slip-weakening law with prescribed Dc distributions on the fault plane. These synthetic models provide the slip velocity evolution during the earthquake rupture. We have therefore generated a set of slip velocity models by fitting the "true" slip velocity time histories with an analytical source time function. To this end we use the Yoffe function [Tinti et al. 2005], which is dynamically consistent and allows a flexible parameterization. We use these slip velocity histories as a boundary condition on the fault plane to compute the traction evolution, and we estimate Dc values from the traction-versus-slip curves. We then compare the inferred Dc values with those of the original dynamic models and find that the Dc estimates are very sensitive to the adopted slip velocity function. Despite the resolution problem that limits the estimate of Dc from kinematic earthquake models and the tradeoff that exists between Dc and strength excess, we show that to
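
    A minimal sketch of the linear slip-weakening law imposed in such dynamic models, and of reading Dc off a traction-versus-slip curve. The stress levels, Dc value, and tolerance are assumptions for illustration:

```python
import numpy as np

def slip_weakening_traction(slip, tau_y, tau_r, dc):
    """Linear slip-weakening law: traction falls from the yield stress
    tau_y to the residual level tau_r over the distance Dc, then stays
    at tau_r."""
    return np.where(slip < dc, tau_y - (tau_y - tau_r) * slip / dc, tau_r)

def estimate_dc(slip, traction, tol=0.01):
    """Read Dc off a traction-versus-slip curve as the slip at which the
    traction first comes within a tolerance of its residual level."""
    tau_r = traction[-1]
    weakened = np.abs(traction - tau_r) <= tol * (traction[0] - tau_r)
    return slip[np.argmax(weakened)]

slip = np.linspace(0.0, 2.0, 401)                       # m
tau = slip_weakening_traction(slip, 20.0, 10.0, 0.8)    # MPa; assumed values
print(f"estimated Dc ~ {estimate_dc(slip, tau):.2f} m") # ~0.8 m
```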

  7. The north-northwest aftershock pattern of the June 28, 1992 Landers earthquake and the probability of large earthquakes in Indian Wells Valley

    SciTech Connect

    Roquemore, G.R. . Dept. of Geosciences); Simila, G.A. . Dept. of Geological Sciences)

    1993-04-01

    Immediately following the June 28, 1992 Landers earthquake, a strong north-northwest pattern of aftershocks and triggered earthquakes developed. The most intense pattern developed between the north end of primary rupture on the Emerson fault and southern Owens Valley. The trend of seismicity cuts through the east-west trending Garlock fault at a high angle. The Garlock fault has no apparent effect on the trend or pattern. Within the aftershock zone, south of the Garlock fault, the Calico and Blackwater faults provide the most likely pathway for the Mojave shear zone into Indian Wells and Owens Valleys. In Indian Wells Valley the seismically active Little Lake fault aligns well with the Blackwater fault to the south and the southern Owens Valley fault zone to the north. Several recent research papers suggest that optimum Coulomb failure stress changes caused by the Landers earthquake have enhanced the probability of earthquakes within the north-northwest trending aftershock zone. This increase has greater significance when the presumed optimum Coulomb failure stress changes caused by the 1872 Owens Valley earthquake and their effects on Indian Wells Valley are considered. Indian Wells Valley and the Coso volcanic field may have received two significant stress increases from earthquakes of magnitude 7.5 or greater in the last 120 years. If these two earthquakes increased the shear stress on faults in the Indian Wells/Coso areas, the most likely site for the next large earthquake within the Mojave shear zone may be there. The rate of seismicity within Indian Wells Valley has increased since 1980, including a magnitude 5.0 earthquake in 1982.

  8. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
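
    A fractal dimension of the kind quoted above is commonly estimated from two-point pair counts. The sketch below is a Grassberger-Procaccia style estimator applied to synthetic points on an embedded plane, illustrating why sheetlike ("pancake") geometry yields D ~ 2; the data, radii, and sample size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_dimension(points, r_lo, r_hi, n_r=12):
    """Grassberger-Procaccia style estimate: D is the slope of
    log C(r) vs log r, where C(r) is the fraction of point pairs
    separated by less than r."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pairs = dist[np.triu_indices(len(points), k=1)]
    radii = np.logspace(np.log10(r_lo), np.log10(r_hi), n_r)
    c = np.array([(pairs < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# Points on a plane embedded in 3-D: the sheetlike geometry the abstract
# argues for should give D ~ 2 (edge effects bias it slightly low).
sheet = np.column_stack([rng.uniform(0, 1, 1500),
                         rng.uniform(0, 1, 1500),
                         np.zeros(1500)])
print(f"D ~ {correlation_dimension(sheet, 0.02, 0.2):.2f}")
```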

  9. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  10. The large earthquake on 29 June 1170 (Syria, Lebanon, and central southern Turkey)

    NASA Astrophysics Data System (ADS)

    Guidoboni, Emanuela; Bernardini, Filippo; Comastri, Alberto; Boschi, Enzo

    2004-07-01

    On 29 June 1170 a large earthquake hit a vast area of the Near Eastern Mediterranean, comprising the present-day territories of western Syria, central southern Turkey, and Lebanon. Although this was one of the strongest seismic events ever to hit Syria, no in-depth or specific studies have so far been available. Furthermore, the seismological literature (from 1979 until 2000) elaborated only a partial summary of it, based mainly on Arabic sources alone. The resulting picture of the area of major effects was very incomplete, making the derived seismic parameters unreliable. This earthquake is in fact one of the most highly documented events of the medieval Mediterranean, owing both to the particular historical period in which it occurred (between the second and the third Crusades) and to the presence of the Latin states in the territory of Syria. Some 50 historical sources, written in eight different languages, have been analyzed: Latin (major contributions), Arabic, Syriac, Armenian, Greek, Hebrew, Vulgar French, and Italian. A critical analysis of this extraordinary body of historical information has allowed us to obtain data on the effects of the earthquake at 29 locations, 16 of which were unknown in the previous scientific literature. As regards the seismic dynamics, this study has addressed the question of whether there was just one strong earthquake or more than one. In the former case, the parameters (Me 7.7 ± 0.22, epicenter, and fault length 126.2 km) were calculated. Some hypotheses are outlined concerning the seismogenic zones involved.